03 Feb 2026

Planet Python

Django Weblog: Django security releases issued: 6.0.2, 5.2.11, and 4.2.28

In accordance with our security release policy, the Django team is issuing releases for Django 6.0.2, Django 5.2.11, and Django 4.2.28. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2025-13473: Username enumeration through timing difference in mod_wsgi authentication handler

The django.contrib.auth.handlers.modwsgi.check_password() function for authentication via mod_wsgi allowed remote attackers to enumerate users via a timing attack.
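
The general mitigation for this class of bug is to spend roughly the same effort whether or not the username exists; django.contrib.auth.backends.ModelBackend does this by running the password hasher even for missing users. Below is a minimal sketch of that pattern, not the actual patch, and timing_safe_password_check is a hypothetical helper:

from django.contrib.auth import get_user_model

def timing_safe_password_check(username, password):
    UserModel = get_user_model()
    try:
        user = UserModel._default_manager.get_by_natural_key(username)
    except UserModel.DoesNotExist:
        # Run the (expensive) password hasher anyway, so the response time
        # does not reveal whether the username exists.
        UserModel().set_password(password)
        return False
    return user.check_password(password)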

Thanks to Stackered for the report.

This issue has severity "low" according to the Django security policy.

CVE-2025-14550: Potential denial-of-service vulnerability via repeated headers when using ASGI

ASGIRequest allowed a remote attacker to cause a potential denial-of-service via a specially crafted request containing many duplicates of a single header. The vulnerability resulted from repeated string concatenation while combining the duplicate headers, which produced super-linear computation, resulting in service degradation or outage.
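
The performance trap is a general Python one rather than anything ASGI-specific: building a combined header value by repeated concatenation copies the accumulated bytes on every iteration, so n duplicate headers cost roughly quadratic work, while a single join stays linear. A rough illustration, not Django's actual code:

header_values = [b"x" * 100] * 10_000  # many duplicate values for one header

# Quadratic: every concatenation copies everything accumulated so far.
combined = b""
for value in header_values:
    if combined:
        combined = combined + b"," + value
    else:
        combined = value

# Linear: gather the parts, then join once.
combined = b",".join(header_values)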

Thanks to Jiyong Yang for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2026-1207: Potential SQL injection via raster lookups on PostGIS

Raster lookups on GIS fields (only implemented on PostGIS) were subject to SQL injection if untrusted data was used as a band index.

As a reminder, all untrusted user input should be validated before use.
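
For example, if a band index arrives from user input, coerce and bound it before it ever reaches the ORM lookup. A minimal sketch, assuming a hypothetical Coverage model with a RasterField named rast:

from django.core.exceptions import ValidationError

from myapp.models import Coverage  # hypothetical model with a RasterField "rast"

def rasters_intersecting(geom, band_param):
    try:
        band = int(band_param)
    except (TypeError, ValueError):
        raise ValidationError("Band index must be an integer.")
    if not 1 <= band <= 10:  # adjust the upper bound to your data
        raise ValidationError("Band index out of range.")
    return Coverage.objects.filter(**{f"rast__{band}__intersects": geom})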

Thanks to Tarek Nakkouch for the report.

This issue has severity "high" according to the Django security policy.

CVE-2026-1285: Potential denial-of-service vulnerability in django.utils.text.Truncator HTML methods

django.utils.text.Truncator.chars() and Truncator.words() methods (with html=True) and truncatechars_html and truncatewords_html template filters were subject to a potential denial-of-service attack via certain inputs with a large number of unmatched HTML end tags, which could cause quadratic time complexity during HTML parsing.
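
The affected code paths are the HTML-aware truncation calls, so it is worth checking anywhere untrusted markup reaches them. For example, with untrusted_html standing in for user-supplied content:

from django.utils.text import Truncator

summary = Truncator(untrusted_html).words(50, html=True)   # affected
preview = Truncator(untrusted_html).chars(200, html=True)  # affected
# The truncatewords_html and truncatechars_html template filters reach the same code.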

Thanks to Seokchan Yoon for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2026-1287: Potential SQL injection in column aliases via control characters

FilteredRelation was subject to SQL injection in column aliases via control characters, using a suitably crafted dictionary, with dictionary expansion, as the **kwargs passed to QuerySet methods annotate(), aggregate(), extra(), values(), values_list(), and alias().
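
In other words, the dangerous pattern is letting untrusted strings become alias names through dictionary expansion. A hedged sketch of a simple allow-list guard; the Author model, its "book" relation, and the alias names are invented for illustration:

from django.db.models import FilteredRelation, Q

from myapp.models import Author  # hypothetical model with a "book" relation

ALLOWED_ALIASES = {"published_book"}

def annotated_authors(alias):
    # Never let untrusted strings become aliases via **kwargs.
    if alias not in ALLOWED_ALIASES:
        raise ValueError("Unknown annotation alias")
    relation = FilteredRelation("book", condition=Q(book__published=True))
    return Author.objects.annotate(**{alias: relation})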

Thanks to Solomon Kebede for the report.

This issue has severity "high" according to the Django security policy.

CVE-2026-1312: Potential SQL injection via QuerySet.order_by and FilteredRelation

QuerySet.order_by() was subject to SQL injection in column aliases containing periods when the same alias, supplied via a suitably crafted dictionary with dictionary expansion, was also used in FilteredRelation.

Thanks to Solomon Kebede for the report.

This issue has severity "high" according to the Django security policy.

Affected supported versions

  • Django main
  • Django 6.0
  • Django 5.2
  • Django 4.2

Resolution

Patches to resolve these issues have been applied to Django's main, 6.0, 5.2, and 4.2 branches. The patches may be obtained from the following changesets.

CVE-2025-13473: Username enumeration through timing difference in mod_wsgi authentication handler

CVE-2025-14550: Potential denial-of-service vulnerability via repeated headers when using ASGI

CVE-2026-1207: Potential SQL injection via raster lookups on PostGIS

CVE-2026-1285: Potential denial-of-service vulnerability in django.utils.text.Truncator HTML methods

CVE-2026-1287: Potential SQL injection in column aliases via control characters

CVE-2026-1312: Potential SQL injection via QuerySet.order_by and FilteredRelation

The following releases have been issued:

  • Django 6.0.2
  • Django 5.2.11
  • Django 4.2.28

The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.

03 Feb 2026 2:13pm GMT

Real Python: Getting Started With Google Gemini CLI

This video course will teach you how to use Gemini CLI to bring Google's AI-powered coding assistance directly into your terminal. After you authenticate with your Google account, this tool will be ready to help you analyze code, identify bugs, and suggest fixes, all without leaving your familiar development environment.

Imagine debugging code without switching between your console and browser, or picture getting instant explanations for unfamiliar projects. Like other command-line AI assistants, Google's Gemini CLI brings AI-powered coding assistance directly into your command line, allowing you to stay focused in your development workflow.

Whether you're troubleshooting a stubborn bug, understanding legacy code, or generating documentation, this tool acts as an intelligent pair-programming partner that understands your codebase's context.

You're about to install Gemini CLI, authenticate with Google's free tier, and put it to work on an actual Python project. You'll discover how natural language queries can help you understand code faster and catch bugs that might slip past manual review.


03 Feb 2026 2:00pm GMT

PyBites: Coding can be super lonely

I hate coding solo.

Not in the moment or when I'm in the zone, I mean in the long run.

I love getting into that deep focus where I'm locked in and hours pass by in a second!

But I hate snapping out of it and not having anyone to chat with about it. (I'm lucky that's not the case anymore though - thanks Bob!)

So it's no surprise that many of the devs I chat with on Zoom calls or in person share the same sentiment.

Not everyone has a Bob though. Many people don't have anyone in their circle that they can talk to about code.

It can be a lonely experience.

And just as bad, it leads to stagnation. You can spend years coding in a silo and feel like you haven't grown at all. That feeling of being a junior dev becomes unshakable.

When you work in isolation, you're operating in a vacuum. Without external input, your vacuum becomes an echo chamber.

As funny as it sounds, as devs I think we all need other devs around us who will create friction. Without the friction of other developers looking at your work, you don't grow.

Some of my most memorable learning experiences in my first dev job were with my colleague, sharing ideas on a whiteboard and talking through code. (Thanks El!)

If you haven't had the experience of this kind of community and support, then you're missing out. Here's what I want you to do this week:

  1. Go seek out a Code Review: Find someone more senior than you and ask them to give you their two cents on your coding logic. Note I'm suggesting logic and not your syntax. Let's target your thought process!
  2. Build for Someone Else: Go build a tool for a colleague or a friend. The second another person uses your code it breaks the cycle/vacuum because you're now accountable for the bugs, suggestions and UX.
  3. Public Accountability: Join our community, tell us what you're going to build and document your progress! If no one is watching, it's too easy to quit when the engineering gets hard (believe me, I know!).

At the end of the day, you don't become a Senior Developer and break through to the next level of your Python journey by typing in a dark room alone (as enjoyable as that may be sometimes 😅)

You become one by engaging with the community, sharing what you're doing and learning from others.

If you're stuck in a vacuum, join the community, reply to my welcome DM, and check out our community calendar.

Julian

This was originally sent to our email list. Join here.

03 Feb 2026 11:02am GMT

02 Feb 2026

Django community aggregator: Community blog posts

ModelRenderers, a possible new component to Django...

Towards the end of last year I was working with form renderers in my startup to provide a consistent interface for forms across the project. I also used template partials to override widgets, delivering consistent rendering all in one file, a nice win. This made me wonder what other common components in a project get rendered to HTML that could benefit from a single, reusable place to avoid repeating myself.

Carlton's Neapolitan package already has this to some degree. There are two template tag types: one for object detail and one for object list. We also have FormRenderers in Django, which already cascade from the project level down to an individual form, so perhaps we could apply the same logic to render models in a DRY, configurable way rather than duplicating templates and logic. This made me wonder: could we have a Python class whose role is to define how a model gets rendered? Let's be clear, we're not getting into serializers here, or the validation and logic that comes with them; it's similar to the separation of Forms and FormRenderers.

I'm thinking that this idea allows the rendering of an object or list of objects in a template like so:


{{ object }}

{{ object_list }}

This can be controlled via a class described above:


class MyModelRenderer(ModelRenderer):
    list_template = ''
    detail_template = ''
    form_renderer = MyFormRenderer

The above class controls how a list of objects would be rendered along with a single object and the form renderer to use. The form side opens up the idea of chaining renderers together in order to find the correct template to use. This then links to the idea of having a template snippet for rendering related models. If you have a foreign key or a many-to-many relationship, your renderer could specify how to render itself as a related field. You could chain model renderers together so that, when rendering a related field, it looks up the appropriate snippet instead of rendering the entire detail or the entire list.
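
To make that chaining idea concrete, here is a purely speculative sketch of what a related-field hook could look like; none of this API exists, and the attribute names are invented for illustration:

class AuthorRenderer(ModelRenderer):
    detail_template = "authors/detail.html"
    list_template = "authors/list.html"
    related_template = "authors/related_chip.html"  # used when Author appears as a FK/M2M value

class BookRenderer(ModelRenderer):
    detail_template = "books/detail.html"
    list_template = "books/list.html"
    # Rendering book.author would look up AuthorRenderer and use its
    # related_template rather than the full detail or list template.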

This obviously would be an optional API, but a potentially interesting one. It would certainly alter the look of a Django project and of course nothing stops you from rendering by hand. To me this leans into a different approach to having shared components at the template level, pushing towards not repeating yourself where possible.

Does this pique your interest, or does it scream of being nothing like Django at all? Let me know your thoughts!

02 Feb 2026 6:00am GMT

31 Jan 2026

Django community aggregator: Community blog posts

Django's test runner is underrated

Every podcast, blog post, Reddit thread, and every conference talk seems to agree: "just use pytest". Real Python says most developers prefer it. Brian Okken's popular book calls it "undeniably the best choice". It's treated like a rite of passage for Python developers: at some point you're supposed to graduate from the standard library to the "real" testing framework.

I never made that switch for my Django projects. And after years of building and maintaining Django applications, I still don't feel like I'm missing out.

What I actually want from tests

Before we get into frameworks, let me be clear about what I need from a test suite:

  1. Readable failures. When something breaks, I want to understand why in seconds, not minutes.

  2. Predictable setup. I want to know exactly what state my tests are running against.

  3. Minimal magic. The less indirection between my test code and what's actually happening, the better.

  4. Easy onboarding. New team members should be able to write tests on day one without learning a new paradigm.

Django's built-in test framework delivers all of this. And honestly? That's enough for most projects.

Django tests are just Python's unittest

Here's something that surprises a lot of developers: Django's test framework isn't some exotic Django-specific system. Under the hood, it's Python's standard unittest module with a thin integration layer on top.

TestCase extends unittest.TestCase. The assertEqual, assertRaises, and other assertion methods? Straight from the standard library. Test discovery, setup and teardown, skip decorators? All standard unittest behavior.

What Django adds is integration: Database setup and teardown, the HTTP client, mail outbox, settings overrides.
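
For example, sending email in a test needs no plugins or fixtures; the locmem mail outbox and settings overrides are already wired into TestCase. A small sketch, where send_welcome_email is a hypothetical helper under test:

from django.core import mail
from django.test import TestCase, override_settings

class WelcomeEmailTests(TestCase):
    @override_settings(DEFAULT_FROM_EMAIL="hello@example.com")
    def test_welcome_email_uses_default_sender(self):
        send_welcome_email("kevin@example.com")  # hypothetical helper under test
        self.assertEqual(len(mail.outbox), 1)
        # Assumes the helper sends with the default sender address.
        self.assertEqual(mail.outbox[0].from_email, "hello@example.com")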

This means when you choose Django's test framework, you're choosing Python's defaults plus Django glue. When you choose pytest with pytest-django, you're replacing the assertion style, the runner, and the mental model, then re-adding Django integration on top.

Neither approach is wrong. But it's objectively more layers.

The self.assert* complaint

A common argument I hear against unittest-style tests is: "I can't remember all those assertion methods". But let's be honest. We're not writing tests in Notepad in 2026. Every editor has autocomplete. Type self.assert and pick from the list.

And in practice, how many assertion methods do you actually use? In my tests, it's mostly assertEqual and assertRaises. Maybe assertTrue, assertFalse, and assertIn once in a while. That's not a cognitive burden.

Here's the same test in both styles:

# Django / unittest
self.assertEqual(total, 42)
with self.assertRaises(ValidationError):
    obj.full_clean()

# pytest
assert total == 42
with pytest.raises(ValidationError):
    obj.full_clean()

Yes, pytest's assert is shorter. It's a bit easier on the eyes. And I'll be honest: pytest's failure messages are better too. When an assertion fails, pytest shows you exactly what values differed with nice diffs. That's genuinely useful.

But here's what makes that work: pytest rewrites your code. It hooks into Python's AST and transforms your test files before they run so it can produce those detailed failure messages from plain assert statements. That's not necessarily bad - it's been battle-tested for over a decade. But it is a layer of transformation between what you write and what executes, and I prefer to avoid magic when I can.

For me, unittest's failure messages are good enough. When assertEqual fails, it tells me what it expected and what it got. That's usually all I need. Better failure messages are nice, but they're not worth adding dependencies and an abstraction layer for.

The missing piece: parametrized tests

If there's one pytest feature people genuinely miss when using Django's test framework, it's parametrization. Writing the same test multiple times with different inputs feels wasteful.

But you really don't need to switch to pytest just for that. The parameterized package solves this cleanly:

from django.test import SimpleTestCase
from parameterized import parameterized

class SlugifyTests(SimpleTestCase):
    @parameterized.expand([
        ("Hello world", "hello-world"),
        ("Django's test runner", "djangos-test-runner"),
        ("  trim  ", "trim"),
    ])
    def test_slugify(self, input_text, expected):
        self.assertEqual(slugify(input_text), expected)

Compare that to pytest:

import pytest

@pytest.mark.parametrize("input_text,expected", [
    ("Hello world", "hello-world"),
    ("Django's test runner", "djangos-test-runner"),
    ("  trim  ", "trim"),
])
def test_slugify(input_text, expected):
    assert slugify(input_text) == expected

Both are readable. Both work well. The difference is that parameterized is a tiny, focused library that does one thing. It doesn't replace your test runner, introduce a new fixture system, or bring an ecosystem of plugins. It's a decorator, not a paradigm shift.

Once I added parameterized, I realized pytest no longer solved a problem I actually had.

Side by side: common test patterns

Let's look at how typical Django tests compare to pytest's approach.

Database tests

# Django
from django.test import TestCase
from myapp.models import Article

class ArticleTests(TestCase):
    def test_article_str(self):
        article = Article.objects.create(title="Hello")
        self.assertEqual(str(article), "Hello")

# pytest + pytest-django
import pytest
from myapp.models import Article

@pytest.mark.django_db
def test_article_str():
    article = Article.objects.create(title="Hello")
    assert str(article) == "Hello"

With Django, database access simply works. TestCase wraps every test in a transaction and rolls it back afterward, giving you a clean slate without extra decorators. pytest-django takes the opposite approach: database access is opt-in. Different philosophies, but I find theirs annoying since most of my tests touch the database anyway, so I'd end up with @pytest.mark.django_db on almost every test.

View tests

# Django
from django.test import TestCase
from django.urls import reverse

class ViewTests(TestCase):
    def test_home_page(self):
        response = self.client.get(reverse("home"))
        self.assertEqual(response.status_code, 200)

# pytest + pytest-django
from django.urls import reverse

def test_home_page(client):
    response = client.get(reverse("home"))
    assert response.status_code == 200

In Django, self.client is right there on the test class. If you want to know where it comes from, follow the inheritance tree to TestCase. In pytest, client appears because you named your parameter client. That's how fixtures work: injection happens by naming convention. If you didn't know that, the code would be puzzling. And if you want to find where a fixture is defined, you might be hunting through conftest.py files across multiple directory levels.

What about fixtures?

Pytest's fixture system is the other big feature people bring up. Fixtures compose, they handle setup and teardown automatically, and they can be scoped to function, class, module, or session.

But the mechanism is implicit. You've already seen the implicit injection in the view test example: name a parameter client and it appears, add db to your function signature and you get database access. Powerful, but also magic you need to learn.

For most Django tests, you need some objects in the database before your test runs. Django gives you two ways to do this:

class ArticleTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.author = User.objects.create(username="kevin")
    
    def test_article_creation(self):
        article = Article.objects.create(title="Hello", author=self.author)
        self.assertEqual(article.author.username, "kevin")
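
The other option is plain unittest-style setUp(), which runs before every test and recreates the objects each time; slower than setUpTestData(), but fine for small suites:

class ArticleTests(TestCase):
    def setUp(self):
        # Runs before every single test method.
        self.author = User.objects.create(username="kevin")

    def test_article_creation(self):
        article = Article.objects.create(title="Hello", author=self.author)
        self.assertEqual(article.author.username, "kevin")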

If you need more sophisticated object creation, factory-boy works great with either framework.
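
For instance, a minimal factory for the Article model from the earlier examples might look like this, assuming the factory-boy package is installed:

import factory

from myapp.models import Article

class ArticleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Article

    title = factory.Sequence(lambda n: f"Article {n}")

# Works the same from a Django TestCase or a pytest function:
# article = ArticleFactory(title="Hello")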

The fixture system solves a real problem - complex cross-cutting setup that needs to be shared and composed. My projects just haven't needed that level of sophistication. And I'd rather not add the indirection until I do.

The hidden cost of flexibility

Pytest's flexibility is a feature. It's also a liability.

In small projects, pytest feels lightweight. But as projects grow, that flexibility can accumulate into complexity. Your conftest.py starts small, then grows into its own mini-framework. You add pytest-xdist for parallel tests (Django has --parallel built-in). You write custom fixtures for DRF's APIClient (Django's APITestCase just works). You add a plugin for coverage, another for benchmarking. Each one makes sense in isolation.

Then a test fails in CI but not locally, and you're debugging the interaction between three plugins and a fixture that depends on two other fixtures.

Django's test framework doesn't have this problem because it doesn't have this flexibility. There's one way to set up test data. There's one test client. There's one way to run tests in parallel. Boring, but predictable.

When I'm debugging a test failure, I want to debug my code, not my test infrastructure.

When I would recommend pytest

I'm not anti-pytest. If your team already has deep pytest expertise and established patterns, switching to Django's runner would be a net negative. Switching costs are real. If I join a project that uses pytest? I use pytest. This is a preference for new projects, not a religion.

It's also worth noting that pytest can run unittest-style tests without modification. You don't have to rewrite everything if you want to try it. That's a genuinely nice feature.

But if you're starting fresh, or you're the one making the decision? Make it a conscious choice. "Everyone uses pytest" can be a valid consideration, but it shouldn't be the whole argument.

My rule of thumb

Start with Django's test runner. It's boring, it's stable, and it works.

Add parameterized when you need parametrized tests.

Switch to pytest only when you can name the specific problem Django's framework can't solve. Not because a podcast told you to, but because you've hit an actual wall.

I've been building Django applications for a long time. I've tried both approaches. And I keep choosing boring.

Boring is a feature in test infrastructure.

31 Jan 2026 2:21am GMT

30 Jan 2026

Django community aggregator: Community blog posts

Django News - Python Developers Survey 2026 - Jan 30th 2026

News

Python Developers Survey 2026

This is the ninth iteration of the official Python Developers Survey. It is run by the PSF (Python Software Foundation) to highlight the current state of the Python ecosystem and help with future goals.

Note that the official Django Developers Survey is currently being finalized and will come out hopefully in March or April.

jetbrains.com

The French government is building an entire productivity ecosystem using Django

In a general push to remove Microsoft, Google, and other US or non-EU dependencies, the French government has been rapidly creating an open source set of productivity tools called "LaSuite", in collaboration with the Netherlands and Germany.

reddit.com

Django Packages : 🧑‍🎨 A Fresh, Mobile-Friendly Look with Tailwind CSS

As we announced last week, Django Packages released a new design, and Maksudul Haque, who led the effort, wrote about the changes.

djangopackages.org

Python Software Foundation

Dispatch from PyPI Land: A Year (and a Half!) as the Inaugural PyPI Support Specialist

A look back on the first year and a half as the inaugural PyPI Support Specialist.

pypi.org

Django Fellow Reports

Fellows Report - Natalia

By far, the bulk of my week went into integrating the checklist-generator into djangoproject.com, which required a fair amount of coordination and follow-through. Alongside that, security work ramped up again, with a noticeable increase in incoming reports that needed timely triage and prioritization. Everything else this week was largely in support of keeping those two tracks moving forward.

djangoproject.com

Fellows Report - Jacob

Engaged in a fair number of security reports this week. Release date and number of issues for 6.0.2 to be finalized and publicized tomorrow.

djangoproject.com

Wagtail CMS News

Wagtail's new Security Announcements Channel

Wagtail now publishes security release notifications via a dedicated GitHub Security Announcements discussion category, with early alerts, RSS feed, and advisory links.

wagtail.org

40% smaller images, same quality

Wagtail 7.3 ships with smarter image compression defaults that deliver roughly 40% smaller images with no visible quality loss, improving page speed, SEO, and reducing energy use out of the box.

wagtail.org

Updates to Django

Today, "Updates to Django" is presented by Hwayoung from Djangonaut Space! 🚀

Last week we had 11 pull requests merged into Django by 7 different contributors - including 2 first-time contributors! Congratulations to Sean Helvey🚀 and James Fysh for having their first commits merged into Django - welcome on board!

Django Newsletter

Sponsored Link 1

Sponsor Django News

Reach 4,300 highly engaged Django developers!

django-news.com

Articles

Django: profile memory usage with Memray

Use Memray to profile Django startup, identify heavy imports like numpy, and reduce memory by deferring, lazy importing, or replacing dependencies.

adamj.eu

Some notes on starting to use Django

Julia Evans explains why Django is well-suited to small projects, praising its explicit structure, built-in admin, ORM, automatic migrations, and batteries-included features.

jvns.ca

Quirks in Django's template language part 3

Lily explores Django template edge cases: now tag format handling, numeric literal parsing, and the lorem tag with negative counts, proposing stricter validation and support for format variables.

lilyf.org

Testing: exceptions and caches

Nicer ways to test exceptions and to test cached function results.

nedbatchelder.com

I run a server farm in my closet (and you can too!)

One woman's quest to answer the question: does JIT go brrr?

savannah.dev

Speeding up Pillow's open and save

Not strictly Django but from Python 3.15 release manager Hugo playing around with Tachyon, the new "high-frequency statistical sampling profiler" coming in Python 3.15.

hugovk.dev

Events

DjangoCon Europe CFP Closes February 8

DjangoCon Europe 2026 opens CFP for April 15 to 19 in Athens; submit technical and community-focused Django and Python talks by February 8, 2026.

djangocon.eu

Opportunity Grants Application for DjangoCon US 2026

Opportunity Grants Application for DjangoCon US 2026 is open through March 16, 2026, at 11:00 am Central Daylight Time (America/Chicago). Decision notifications will be sent out by July 1, 2026.

google.com

Videos

django-bolt - Rust-powered API Framework for Django

From the BugBytes channel, an 18-minute look at django-bolt including how and why you might use it.

youtu.be

Podcasts

Django Chat #194: Inverting the Testing Pyramid - Brian Okken

Brian is a software engineer, podcaster, and author. We discuss recent tooling changes in Python, using AI effectively, inverting the traditional testing pyramid, and more.

djangochat.com

Django Job Board

Two new Django roles this week, ranging from hands-on backend development to senior-level leadership, building scalable web applications.

Backend Software Developer at Chartwell Resource Group Ltd. 🆕

Senior Django Developer at SKYCATCHFIRE

Django Newsletter

Django Codebase

Django Features

This was first introduced last year, but it's worth bringing renewed attention to it: a place for new feature proposals for Django and the third-party ecosystem.

github.com

Projects

FarhanAliRaza/django-rapid

Msgspec based serialization for Django.

github.com

adamghill/dj-toml-settings

Load Django settings from a TOML file.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

30 Jan 2026 6:00pm GMT

22 Jan 2026

Planet Plone - Where Developers And Integrators Write

Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, along with Docker, and uv as a game changer, making the installation of Python packages much faster.

With cookieplone you get a monorepo, with folders for backend, frontend, and devops. The devops folder contains scripts to set up the server and deploy to it. Our sysadmins already had some other scripts. So we needed to integrate that.

First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.

Maik Derstappen showed me copier, yet another templating tool. Our idea: create a cookieplone project, and then use copier to modify it.

What about the deployment? We are on GitLab. We host our runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This activates a pipeline to check, test, and build. When it is merged, bump the version, use release-it.

Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.

For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.

Future improvements:

  • Start the docker containers and curl/wget the /ok endpoint.
  • lock files for the backend, with pip/uv.

22 Jan 2026 9:43am GMT

Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.

There are several challenges when doing Plone migrations:

  • Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
  • Complex data structures. For example a Folder with a Link as default page, which pointed to some other content that had meanwhile been moved.
  • Migrating Classic UI to Volto
  • Also, you might be migrating from a completely different CMS to Plone.

How do we do migrations in Plone in general?

  • In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
  • Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.

Let's look at export/import, which has three parts:

  • Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
  • Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
  • Load: Transmogrifier, collective.exportimport, plone.exportimport.

Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.

collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.

Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.

Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.

collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.

22 Jan 2026 9:43am GMT

Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.

I am team lead of the Plone Admin team, and work at kitconcept.

The current state: see the keynotes, lots happening on the frontend. Good.

The current state of our IT: very troubling and daunting.

This is not a 'blame game'. But focusing on resources and people should be a first priority at this conference. We are a real volunteer organisation; nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.

The Admin team is 4 senior Plonistas as allround admin, 2 release managers, 2 CI/CD experts. 3 former board members, everyone overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.

We are a volunteer organisation, and don't have a big company behind us that can throw money at the problems. A strength and a weakness. Across society as a whole, it is a problem that volunteer numbers are decreasing.

Root causes:

  • We failed to scale down in time in our IT landscape and usage.
  • We have no clear role or team descriptions, and we can't ask for a minimum effort per week or month.
  • The trend is more communication channels, platforms to join and promote yourself, apps to use.

Overview of what we have to keep running as the Admin team:

  • Support main development process: github, CI/CD, Jenkins main and runners, dist.plone.org.
  • Main communication, documentation: plone.org, docs.plone.org, training.plone.org, conference and country sites, Matomo.
  • Community office automation: Google Docs, Workspace, Quaive, Signal, Slack
  • Broader: Discourse and Discord

The first two are really needed, the second we already have some problems with.

Some services are self-hosted, but there are also a lot of SaaS services/platforms. In all, it is quite a bit.

The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.

There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.

On GitHub we have a sponsored OSS plan. So we have extra features for free, but it is not nearly enough. User management: hard to get people out. You can't contact your members directly. E-mail has been removed, for privacy. Features get added on GitHub, and there is no complete changelog.

Challenge on GitHub: we have public repositories, but we also have our deployments in there. The only really secure option would be private repositories; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Auditing is available for only 6 months. A simple question like: who has been active for the last 2 years? No, can't do.

Some actionable items on GitHub:

  • We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
  • Clean up users; use the Contributors and Developers teams.
  • Active members: check who has contributed the last years.
  • There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
  • More fine grained teams to control repository access.
  • Use of GitHub Discussions for some central communication of changes.
  • Use project management better.
  • The elephant in the room that we have practice on this year, and ongoing: the Collective organisation. This was free for all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock down some things there.
  • Keep deployments and the secrets all out of GitHub, so no secrets can be stolen.

Google Workspace:

  • We are dependent on this.
  • No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
  • Spam and moderation issues
  • We could move to Google Docs for all kinds of things. Use Google Workspace drives for everything. But the Drive UI is a mess, so docs can end up in your personal account without you realizing it.

User management:

  • We need separate standalone user management, but implementation is not clear.
  • We cannot contact our members one on one.

Oh yes, Plone websites:

  • upgrade plone.org
  • self preservation: I know what needs to be done, and can do it, but have no time, focusing on the previous points instead.

22 Jan 2026 9:43am GMT

05 Jan 2026

Planet Twisted

Glyph Lefkowitz: How To Argue With Me About AI, If You Must

As you already know if you've read any of this blog in the last few years, I am a somewhat reluctant - but nevertheless quite staunch - critic of LLMs. This means that I have enthusiasts of varying degrees sometimes taking issue with my stance.

It seems that I am not going to get away from discussions, and, let's be honest, pretty intense arguments about "AI" any time soon. These arguments are starting to make me quite upset. So it might be time to set some rules of engagement.

I've written about all of these before at greater length, but this is a short post because it's not about the technology or making a broader point, it's about me. These are rules for engaging with me, personally, on this topic. Others are welcome to adopt these rules if they so wish but I am not encouraging anyone to do so.

Thus, I've made this post as short as I can so everyone interested in engaging can read the whole thing. If you can't make it through to the end, then please just follow Rule Zero.

Rule Zero: Maybe Don't

You are welcome to ignore me. You can think my take is stupid and I can think yours is. We don't have to get into an Internet Fight about it; we can even remain friends. You do not need to instigate an argument with me at all, if you think that my analysis is so bad that it doesn't require rebutting.

Rule One: No 'Just'

As I explained in a post with perhaps the least-predictive title I've ever written, "I Think I'm Done Thinking About genAI For Now", I've already heard a bunch of bad arguments. Don't tell me to 'just' use a better model, use an agentic tool, use a more recent version, or use some prompting trick that you personally believe works better. If you skim my work and think that I must not have deeply researched anything or read about it because you don't like my conclusion, that is wrong.

Rule Two: No 'Look At This Cool Thing'

Purely as a productivity tool, I have had a terrible experience with genAI. Perhaps you have had a great one. Neat. That's great for you. As I explained at great length in "The Futzing Fraction", my concern with generative AI is that I believe it is probably a net negative impact on productivity, based on both my experience and plenty of citations. Go check out the copious footnotes if you're interested in more detail.

Therefore, I have already acknowledged that you can get an LLM to do various impressive, cool things, sometimes. If I tell you that you will, on average, lose money betting on a slot machine, a picture of a slot machine hitting a jackpot is not evidence against my position.

Rule Two And A Half: Engage In Metacognition

I specifically didn't title the previous rule "no anecdotes" because data beyond anecdotes may be extremely expensive to produce. I don't want to say you can never talk to me unless you're doing a randomized controlled trial. However, if you are going to tell me an anecdote about the way that you're using an LLM, I am interested in hearing how you are compensating for the well-documented biases that LLM use tends to induce. Try to measure what you can.

Rule Three: Do Not Cite The Deep Magic To Me

As I explained in "A Grand Unified Theory of the AI Hype Cycle", I already know quite a bit of history of the "AI" label. If you are tempted to tell me something about how "AI" is really such a broad field, and it doesn't just mean LLMs, especially if you are trying to launder the reputation of LLMs under the banner of jumbling them together with other things that have been called "AI", I assure you that this will not be convincing to me.

Rule Four: Ethics Are Not Optional

I have made several arguments in my previous writing: there are ethical arguments, efficacy arguments, structuralist arguments, efficiency arguments and aesthetic arguments.

I am happy to, for the purposes of a good-faith discussion, focus on a specific set of concerns or an individual point that you want to make where you think I got something wrong. If you convince me that I am entirely incorrect about the effectiveness or predictability of LLMs in general or of a specific LLM product, you don't need to make a comprehensive argument about whether one should use the technology overall. I will even assume that you have your own ethical arguments.

However, if you scoff at the idea that one should have any ethical boundaries at all, and think that there's no reason to care about the overall utilitarian impact of this technology, that it's worth using no matter what else it does as long as it makes you 5% better at your job, that's sociopath behavior.

This includes extreme whataboutism regarding things like the water use of datacenters, other elements of the surveillance technology stack, and so on.


Consequences

These are rules, once again, just for engaging with me. I have no particular power to enact broader sanctions upon you, nor would I be inclined to do so if I could. However, if you can't stay within these basic parameters and you insist upon continuing to direct messages to me about this topic, I will summarily block you with no warning, on mastodon, email, GitHub, IRC, or wherever else you're choosing to do that. This is for your benefit as well: such a discussion will not be a productive use of either of our time.

05 Jan 2026 5:22am GMT

02 Jan 2026

Planet Twisted

Glyph Lefkowitz: The Next Thing Will Not Be Big

The dawning of a new year is an opportune moment to contemplate what has transpired in the old year, and consider what is likely to happen in the new one.

Today, I'd like to contemplate that contemplation itself.


The 20th century was an era characterized by rapidly accelerating change in technology and industry, creating shorter and shorter cultural cycles of changes in lifestyles. Thus far, the 21st century seems to be following that trend, at least in its recently concluded first quarter.

The early half of the twentieth century saw the massive disruption caused by electrification, radio, motion pictures, and then television.

In 1971, Intel poured gasoline on that fire by releasing the 4004, a microchip generally recognized as the first general-purpose microprocessor. Popular innovations rapidly followed: the computerized cash register, the personal computer, credit cards, cellular phones, text messaging, the Internet, the web, online games, mass surveillance, app stores, social media.

These innovations have arrived faster than previous generations, but also, they have crossed a crucial threshold: that of the human lifespan.

While the entire second millennium A.D. has been characterized by a gradually accelerating rate of technological and social change - the printing press and the industrial revolution were no slouches, in terms of changing society, and those predate the 20th century - most of those changes had the benefit of unfolding throughout the course of a generation or so.

Which means that any individual person in any given century up to the 20th might remember one major world-altering social shift within their lifetime, not five to ten of them. The diversity of human experience is vast, but most people would not expect that the defining technology of their lifetime was merely the latest in a progression of predictable civilization-shattering marvels.

Along with each of these successive generations of technology, we minted a new generation of industry titans. Westinghouse, Carnegie, Sarnoff, Edison, Ford, Hughes, Gates, Jobs, Zuckerberg, Musk. Not just individual rich people, but entire new classes of rich people that did not exist before. "Radio DJ", "Movie Star", "Rock Star", "Dot Com Founder", were all new paths to wealth opened (and closed) by specific technologies. While most of these people did come from at least some level of generational wealth, they no longer came from a literal hereditary aristocracy.

To describe this new feeling of constant acceleration, a new phrase was coined: "The Next Big Thing". In addition to denoting that some Thing was coming and that it would be Big (i.e.: that it would change a lot about our lives), this phrase also carries the strong implication that such a Thing would be a product. Not a development in social relationships or a shift in cultural values, but some new and amazing form of conveying salted meat paste or what-have-you, that would make whatever lucky tinkerer who stumbled into it into a billionaire - along with any friends and family lucky enough to believe in their vision and get in on the ground floor with an investment.

In the latter part of the 20th century, our entire model of capital allocation shifted to account for this widespread belief. No longer were mega-businesses built by bank loans, stock issuances, and reinvestment of profit, the new model was "Venture Capital". Venture capital is a model of capital allocation explicitly predicated on the idea that carefully considering each bet on a likely-to-succeed business and reducing one's risk was a waste of time, because the return on the equity from the Next Big Thing would be so disproportionately huge - 10x, 100x, 1000x - that one could afford to make at least 10 bad bets for each good one, and still come out ahead.

The biggest risk was in missing the deal, not in giving a bunch of money to a scam. Thus, value investing and focus on fundamentals have been broadly disregarded in favor of the pursuit of the Next Big Thing.

If Americans of the twentieth century were temporarily embarrassed millionaires, those of the twenty-first are all temporarily embarrassed FAANG CEOs.

The predicament that this tendency leaves us in today is that the world is increasingly run by generations - GenX and Millennials - with the shared experience that the computer industry, either hardware or software, would produce some radical innovation every few years. We assume that to be true.

But all things change, even change itself, and that industry is beginning to slow down. Physically, transistor density is starting to brush up against physical limits. Economically, most people are drowning in more compute power than they know what to do with anyway. Users already have most of what they need from the Internet.

The big new feature in every operating system is a bunch of useless junk nobody really wants and is seeing remarkably little uptake. Social media and smartphones changed the world, true, but… those are both innovations from 2008. They're just not new any more.

So we are all - collectively, culturally - looking for the Next Big Thing, and we keep not finding it.

It wasn't 3D printing. It wasn't crowdfunding. It wasn't smart watches. It wasn't VR. It wasn't the Metaverse, it wasn't Bitcoin, it wasn't NFTs1.

It's also not AI, but this is why so many people assume that it will be AI. Because it's got to be something, right? If it's got to be something then AI is as good a guess as anything else right now.

The fact is, our lifetimes have been an extreme anomaly. Things like the Internet used to come along every thousand years or so, and while we might expect that the pace will stay a bit higher than that, it is not reasonable to expect that something new like "personal computers" or "the Internet"3 will arrive again.

We are not going to get rich by getting in on the ground floor of the next Apple or the next Google because the next Apple and the next Google are Apple and Google. The industry is maturing. Software technology, computer technology, and internet technology are all maturing.

There Will Be Next Things

Research and development is happening in all fields all the time. Amazing new developments quietly and regularly occur in pharmaceuticals and in materials science. But these are not predictable. They do not inhabit the public consciousness until they've already happened, and they are rarely so profound and transformative that they change everybody's life.

There will even be new things in the computer industry, both software and hardware. Foldable phones do address a real problem (I wish the screen were even bigger but I don't want to carry around such a big device), and would probably be more popular if they got the costs under control. One day somebody's going to crack the problem of volumetric displays, probably. Some VR product will probably, eventually, hit a more realistic price/performance ratio where the niche will expand at least a little more.

Maybe there will even be something genuinely useful, which is recognizably adjacent to the current "AI" fad, but if it is, it will be some new development that we haven't seen yet. If current AI technology were sufficient to drive some interesting product, it would already be doing it, not using marketing disguised as science to conceal diminishing returns on current investments.

But They Will Not Be Big

The impulse to find the One Big Thing that will dominate the next five years is a fool's errand. Incremental gains are diminishing across the board. The markets for time and attention2 are largely saturated. There's no need for another streaming service if 100% of your leisure time is already committed to TikTok, YouTube and Netflix; famously, Netflix has already considered sleep its primary competitor for close to a decade - years before the pandemic.

Those rare tech markets which aren't saturated are suffering from pedestrian economic problems like wealth inequality, not technological bottlenecks.

For example, the thing preventing the development of a robot that can do your laundry and your dishes without your input is not necessarily that we couldn't build something like that, but that most households just can't afford it without wage growth catching up to productivity growth. It doesn't make sense for anyone to commit to the substantial R&D investment that such a thing would take, if the market doesn't exist because the average worker isn't paid enough to afford it on top of all the other tech which is already required to exist in society.

The projected income from the tiny, wealthy sliver of the population who could pay for the hardware, cannot justify an investment in the software past a fake version remotely operated by workers in the global south, only made possible by Internet wage arbitrage, i.e. a more palatable, modern version of indentured servitude.

Even if we were to accept the premise of an actually-"AI" version of this, that is still just a wish that ChatGPT could somehow improve enough behind the scenes to replace that worker, not any substantive investment in a novel, proprietary-to-the-chores-robot software system which could reliably perform specific functions.

What, Then?

The expectation for, and lack of, a "big thing" is a big problem. There are others who could describe its economic, political, and financial dimensions better than I can. So then let me speak to my expertise and my audience: open source software developers.

When I began my own involvement with open source, a big part of the draw for me was participating in a low-cost (to the corporate developer) but high-value (to society at large) positive externality. None of my employers would ever have cared about many of the applications for which Twisted forms a core bit of infrastructure; nor would I have been able to predict those applications' existence. Yet, it is nice to have contributed to their development, even a little bit.

However, it's not actually a positive externality if the public at large can't directly benefit from it.

When real world-changing, disruptive developments are occurring, the bean-counters are not watching positive externalities too closely. As we discovered with many of the other benefits that temporarily accrued to labor in the tech economy, Open Source that is usable by individuals and small companies may have been a ZIRP. If you know you're gonna make a billion dollars you're not going to worry about giving away a few hundred thousand here and there.

When gains are smaller and harder to realize, and margins are starting to get squeezed, it's harder to justify the investment in vaguely good vibes.

But this, itself, is not a call to action. I doubt very much that anyone reading this can do anything about the macroeconomic reality of higher interest rates. The technological reality of "development is happening slower" is inherently something that you can't change on purpose.

However, what we can do is to be aware of this trend in our own work.

Fight Scale Creep

It seems to me that more and more open source infrastructure projects are tools for hyper-scale application development, only relevant to massive cloud companies. This is just a subjective assessment on my part - I'm not sure what tools even exist today to measure this empirically - but I remember a big part of the open source community when I was younger being things like Inkscape, Themes.Org and Slashdot, not React, Docker Hub and Hacker News.

This is not to say that the hobbyist world no longer exists. There is of course a ton of stuff going on with Raspberry Pi, Home Assistant, OwnCloud, and so on. If anything there's a bit of a resurgence of self-hosting. But the interests of self-hosters and corporate developers are growing apart; there seems to be far less of a beneficial overflow from corporate infrastructure projects into these enthusiast or prosumer communities.

This is the concrete call to action: if you are employed in any capacity as an open source maintainer, dedicate more energy to medium- or small-scale open source projects.

If your assumption is that you will eventually reach a hyper-scale inflection point, then mimicking Facebook and Netflix is likely to be a good idea. However, if we can all admit to ourselves that we're not going to achieve a trillion-dollar valuation and a hundred thousand engineer headcount, we can begin to consider ways to make our Next Thing a bit smaller, and to accommodate the world as it is rather than as we wish it would be.

Be Prepared to Scale Down

Here are some design guidelines you might consider, for just about any open source project, particularly infrastructure ones:

  1. Don't assume that your software can sustain an arbitrarily large fixed overhead because "you just pay that cost once" and you're going to be running a billion instances so it will always amortize; maybe you're only going to be running ten.

  2. Remember that such fixed overhead includes not just CPU, RAM, and filesystem storage, but also the learning curve for developers. Front-loading a massive amount of conceptual complexity to accommodate the problems of hyper-scalers is a common mistake. Try to smooth out these complexities and introduce them only when necessary.

  3. Test your code on edge devices. This means supporting Windows and macOS, and even Android and iOS. If you want your tool to help empower individual users, you will need to meet them where they are, which is not on an EC2 instance.

  4. This includes considering Desktop Linux as a platform, as opposed to Server Linux as a platform; while the two certainly have plenty in common, they are also distinct in some details. Consider the highly specific example of secret storage: if you are writing something that intends to live in a cloud environment, and you need to configure it with a secret, you will probably want to provide it via a text file or an environment variable. By contrast, if you want this same code to run on a desktop system, your users will expect you to support the Secret Service. This will likely only require a few lines of code to accommodate (see the sketch after this list), but it is a massive difference to the user experience.

  5. Don't rely on LLMs remaining cheap or free. If you have LLM-related features4, make sure that they are sufficiently severable from the rest of your offering that if ChatGPT starts costing $1000 a month, your tool doesn't break completely. Similarly, do not require that your users have easy access to half a terabyte of VRAM and a rack full of 5090s in order to run a local model.
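
To make the secret-storage example in item 4 concrete, here is a minimal sketch of those "few lines of code", assuming the third-party keyring package (which speaks the Secret Service on desktop Linux and native keychains elsewhere) and a hypothetical FOOAPP_API_TOKEN environment variable:

import os

import keyring  # third-party; Secret Service on desktop Linux, native keychains elsewhere

def get_api_token(service="fooapp", account="api"):
    # Server and cloud deployments: read a plain environment variable.
    token = os.environ.get("FOOAPP_API_TOKEN")
    if token:
        return token
    # Desktop deployments: fall back to the platform's secret store.
    return keyring.get_password(service, account)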

Even if you were going to scale up to infinity, the ability to scale down and consider smaller deployments means that you can run more comfortably on, for example, a developer's laptop. So even if you can't convince your employer that this is where the economy and the future of technology in our lifetimes is going, it can be easy enough to justify this sort of design shift, particularly as individual choices. Make your onboarding cheaper, your development feedback loops tighter, and your systems generally more resilient to economic headwinds.

So, please design your open source libraries, applications, and services to run on smaller devices, with less complexity. It will be worth your time as well as your users'.

But if you can fix the whole wealth inequality thing, do that first.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. These sorts of lists are pretty funny reads, in retrospect.

  2. Which is to say, "distraction".

  3. ... or even their lesser-but-still-profound aftershocks like "Social Media", "Smartphones", or "On-Demand Streaming Video" ... secondary manifestations of the underlying innovation of a packet-switched global digital network ...

  4. My preference would of course be that you just didn't have such features at all, but perhaps even if you agree with me, you are part of an organization with some mandate to implement LLM stuff. Just try not to wrap the chain of this anchor all the way around your code's neck.

02 Jan 2026 1:59am GMT

11 Nov 2025

feedPlanet Twisted

Glyph Lefkowitz: The “Dependency Cutout” Workflow Pattern, Part I

Tell me if you've heard this one before.

You're working on an application. Let's call it "FooApp". FooApp has a dependency on an open source library, let's call it "LibBar". You find a bug in LibBar that affects FooApp.

To envisage the best possible version of this scenario, let's say you actively like LibBar, both technically and socially. You've contributed to it in the past. But this bug is causing production issues in FooApp today, and LibBar's release schedule is quarterly. FooApp is your job; LibBar is (at best) your hobby. Blocking on the full upstream contribution cycle and waiting for a release is an absolute non-starter.

What do you do?

There are a few common reactions to this type of scenario, all of which are bad options.

I will enumerate them specifically here, because I suspect that some of them may resonate with many readers:

  1. Find an alternative to LibBar, and switch to it.

    This is a bad idea because migrating a core infrastructure component to an alternative could be extremely expensive.

  2. Vendor LibBar into your codebase and fix your vendored version.

    This is a bad idea because carrying this one fix now requires you to maintain all the tooling associated with a monorepo1: you have to be able to start pulling in new versions from LibBar regularly, reconcile your changes even though you now have a separate version history on your imported version, and so on.

  3. Monkey-patch LibBar to include your fix.

    This is a bad idea because you are now extremely tightly coupled to a specific version of LibBar. By modifying LibBar internally like this, you're inherently violating its compatibility contract, in a way which is going to be extremely difficult to test. You can test this change, of course, but as LibBar changes, you will need to replicate any relevant portions of its test suite (which may be its entire test suite) in FooApp. Lots of potential duplication of effort there.

  4. Implement a workaround in your own code, rather than fixing it.

    This is a bad idea because you are distorting the responsibility for correct behavior. LibBar is supposed to do LibBar's job, and unless you have a full wrapper for it in your own codebase, other engineers (including "yourself, personally") might later forget to go through the alternate, workaround codepath, and invoke the buggy LibBar behavior again in some new place.

  5. Implement the fix upstream in LibBar anyway, because that's the Right Thing To Do, and burn credibility with management while you anxiously wait for a release with the bug in production.

    This is a bad idea because you are betraying your users - by allowing the buggy behavior to persist - for the workflow convenience of your dependency providers. Your users are probably giving you money, and trusting you with their data. This means you have both ethical and economic obligations to consider their interests.

    As much as it's nice to participate in the open source community and take on an appropriate level of burden to maintain the commons, this cannot sustainably be at the explicit expense of the population you serve directly.

    Even if we only care about the open source maintainers here, there's still a problem: as you are likely to come under immediate pressure to ship your changes, you will inevitably relay at least a bit of that stress to the maintainers. Even if you try to be exceedingly polite, the maintainers will know that you are coming under fire for not having shipped the fix yet, and are likely to feel an even greater burden of obligation to ship your code fast.

    Much as it's good to contribute the fix, it's not great to put this on the maintainers.

The respective incentive structures of software development - specifically, of corporate application development and open source infrastructure development - make options 1-4 very common.

On the corporate / application side, these issues are:

But there are problems on the open source side as well. Those problems are all derived from one big issue: because we're often working with relatively small sums of money, it's hard for upstream open source developers to consume either money or patches from application developers. It's nice to say that you should contribute money to your dependencies, and you absolutely should, but the cost-benefit function is discontinuous. Before a project reaches the fiscal threshold where it can be at least one person's full-time job to worry about this stuff, there's often no-one responsible in the first place. Developers will therefore gravitate to the issues that are either fun, or relevant to their own job.

These mutually-reinforcing incentive structures are a big reason that users of open source infrastructure, even teams who work at corporate users with zillions of dollars, don't reliably contribute back.

The Answer We Want

All those options are bad. If we had a good option, what would it look like?

It is both practically necessary3 and morally required4 for you to have a way to temporarily rely on a modified version of an open source dependency, without permanently diverging.

Below, I will describe a desirable abstract workflow for achieving this goal.

Step 0: Report the Problem

Before you get started with any of these other steps, write up a clear description of the problem and report it to the project as an issue - specifically as an issue, not as a pull request. Describe the problem before submitting a solution.

You may not be able to wait for a volunteer-run open source project to respond to your request, but you should at least tell the project what you're planning on doing.

If you don't hear back from them at all, you will have at least made sure to comprehensively describe your issue and strategy beforehand, which will provide some clarity and focus to your changes.

If you do hear back from them, in the worst case scenario, you may discover that a hard fork will be necessary because they don't consider your issue valid, but even that information will save you time, if you know it before you get started. In the best case, you may get a reply from the project telling you that you've misunderstood its functionality and that there is already a configuration parameter or usage pattern that will resolve your problems with no new code. But in all cases, you will benefit from early coordination on what needs fixing before you get to how to fix it.

Step 1: Source Code and CI Setup

Fork the source code for your upstream dependency to a writable location where it can live at least for the duration of this one bug-fix, and possibly for the duration of your application's use of the dependency. After all, you might want to fix more than one bug in LibBar.

You want to have a place where you can put your edits, that will be version controlled and code reviewed according to your normal development process. This probably means you'll need to have your own main branch that diverges from your upstream's main branch.

Remember: you're going to need to deploy this to your production, so testing gates that your upstream only applies to final releases of LibBar will need to be applied to every commit here.

Depending on LibBar's own development process, this may result in slightly unusual configurations where, for example, your fixes are written against the last LibBar release tag rather than its current5 main; if the project has a branch-freshness requirement, you might need two branches: one for your upstream PR (based on main) and one for your own use (based on the release branch, with your changes applied).

Ideally for projects with really good CI and a strong "keep main release-ready at all times" policy, you can deploy straight from a development branch, but it's good to take a moment to consider this before you get started. It's usually easier to rebase changes from an older HEAD onto a newer one than it is to go backwards.

Speaking of CI, you will want to have your own CI system. The fact that GitHub Actions has become a de-facto lingua franca of continuous integration means that this step may be quite simple, and your forked repo can just run its own instance.

Optional Bonus Step 1a: Artifact Management

If you have an in-house artifact repository, you should set that up for your dependency too, and upload your own build artifacts to it. You can often treat your modified dependency as an extension of your own source tree and install from a GitHub URL, but if you've already gone to the trouble of having an in-house package repository, you can pretend you've taken over maintenance of the upstream package temporarily (which you kind of have) and leverage those workflows for caching and build-time savings as you would with any other internal repo.

Step 2: Do The Fix

Now that you've got somewhere to edit LibBar's code, you will want to actually fix the bug.

Step 2a: Local Filesystem Setup

Before you have a production version on your own deployed branch, you'll want to test locally, which means having both repositories in a single integrated development environment.

At this point, you will want to have a local filesystem reference to your LibBar dependency, so that you can make real-time edits, without going through a slow cycle of pushing to a branch in your LibBar fork, pushing to a FooApp branch, and waiting for all of CI to run on both.

This is useful in both directions: as you prepare the FooApp branch that makes any necessary updates on that end, you'll want to make sure that FooApp can exercise the LibBar fix in any integration tests. As you work on the LibBar fix itself, you'll also want to be able to use FooApp to exercise the code and see if you've missed anything - and this, you wouldn't get in CI, since LibBar can't depend on FooApp itself.

In short, you want to be able to treat both projects as an integrated development environment, with support from your usual testing and debugging tools, just as much as you want your deployment output to be an integrated artifact.
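
To make that concrete with one possible toolchain (Part 2 covers tool-specific details properly), here is a hedged sketch of what the local wiring might look like if FooApp happened to use uv: a path source in FooApp's pyproject.toml pointing at a sibling checkout of the LibBar fork. The path and names are hypothetical, and the source entry supplements, rather than replaces, the normal dependency declaration:

    [tool.uv.sources]
    # During local development, resolve "libbar" from the sibling checkout of
    # the fork; "editable" means edits show up without reinstalling.
    libbar = { path = "../libbar", editable = true }

With plain pip, the equivalent move is an editable install of that sibling checkout into FooApp's virtual environment.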

Step 2b: Branch Setup for PR

However, for continuous integration to work, you will also need some kind of remote reference from FooApp's branch to LibBar. You will need two pull requests: the first to land your LibBar changes in your internal LibBar fork and make sure it passes its own tests, and then a second to switch FooApp's LibBar dependency from the public repository to your internal fork.
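
That second pull request is usually a one-line change to FooApp's dependency metadata. As a hedged sketch, using a standard PEP 508 direct reference in pyproject.toml (the organization, repository URL, and branch name are hypothetical):

    [project]
    dependencies = [
        # Temporarily install LibBar from the internal fork's fix branch
        # instead of the published release.
        "libbar @ git+https://github.com/your-org/libbar@fix-the-bug",
    ]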

At this step it is very important to ensure that there is an issue filed on your own internal backlog to drop your LibBar fork. You do not want to lose track of this work; it is technical debt that must be addressed.

Until it's addressed, automated tools like Dependabot will not be able to apply security updates to LibBar for you; you're going to need to manually integrate every upstream change. This type of work is itself very easy to drop or lose track of, so you might just end up stuck on a vulnerable version.

Step 3: Deploy Internally

Now that you're confident that the fix will work, and that your temporarily-internally-maintained version of LibBar isn't going to break anything on your site, it's time to deploy.

Some deployment history should help provide evidence that your fix is ready to land in LibBar, but at the next step, please remember that your production environment isn't necessarily representative of other LibBar users' environments.

Step 4: Propose Externally

You've got the fix, you've tested the fix, you've got the fix in your own production, you've told upstream you want to send them some changes. Now, it's time to make the pull request.

You're likely going to get some feedback on the PR, even if you think it's already ready to go; as I said, even though the fix has been proven in your production environment, you may get feedback about additional concerns from other users that you'll need to address before LibBar's maintainers can land it.

As you process the feedback, make sure that each new iteration of your branch gets re-deployed to your own production. It would be a huge bummer to go through all this trouble, and then end up unable to deploy the next publicly released version of LibBar within FooApp because you forgot to test that your responses to feedback still worked on your own environment.

Step 4a: Hurry Up And Wait

If you're lucky, upstream will land your changes to LibBar. But, there's still no release version available. Here, you'll have to stay in a holding pattern until upstream can finalize the release on their end.

Depending on some particulars, it might make sense at this point to archive your internal LibBar repository and move your pinned release version to a git hash of the LibBar version where your fix landed, in their repository.

Before you do this, check in with the LibBar core team and make sure that they understand that's what you're doing and they don't have any wacky workflows which may involve rebasing or eliding that commit as part of their release process.
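
If you do retire the internal fork at this point, the same kind of direct reference can simply move to the upstream repository, pinned to the merge commit that contains your fix; a hedged sketch with a hypothetical URL and commit hash:

    [project]
    dependencies = [
        # Upstream has merged the fix but not yet released it: pin to the
        # specific commit (hypothetical hash), not a moving branch.
        "libbar @ git+https://github.com/libbar-project/libbar@0a1b2c3d4e5f",
    ]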

Step 5: Unwind Everything

Finally, you eventually want to stop carrying any patches and move back to an official released version that integrates your fix.

You want to do this because this is what the upstream will expect when you are reporting bugs. Part of the benefit of using open source is benefiting from the collective work to do bug-fixes and such, so you don't want to be stuck off on a pinned git hash that the developers do not support for anyone else.

As I said in step 2b6, make sure to maintain a tracking task for doing this work, because leaving this sort of relatively easy-to-clean-up technical debt lying around is something that can potentially create a lot of aggravation for no particular benefit. Make sure to put your internal LibBar repository into an appropriate state at this point as well.
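
The end state is deliberately boring: the direct reference disappears and FooApp depends on an ordinary released version again. A hedged sketch, with a hypothetical version number standing in for the first upstream release that contains your fix:

    [project]
    dependencies = [
        # Back on a normal, upstream-supported release.
        "libbar>=2.4.1",
    ]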

Up Next

This is part 1 of a 2-part series. In part 2, I will explore in depth how to execute this workflow specifically for Python packages, using some popular tools. I'll discuss my own workflow, standards like PEP 517 and pyproject.toml, and of course, by the popular demand that I just know will come, uv.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. If you already have all the tooling associated with a monorepo, including the ability to manage divergence and reintegrate patches with upstream, you already have the higher-overhead version of the workflow I am going to propose, so: never mind. But chances are you don't have that; very few companies do.

  2. In any business where one must wrangle with Legal, 3 hours is a wildly optimistic estimate.

  3. cf. @mcc@mastodon.social

  4. cf. @geofft@mastodon.social

  5. In an ideal world, every project would keep its main branch ready to release at all times, no matter what, but we do not live in an ideal world.

  6. In this case, there is no question. It's 2b only, no not-2b.

11 Nov 2025 1:44am GMT