24 Apr 2026

Django community aggregator: Community blog posts

Issue 334: New look, new home, same everything else

News

uv is now supported natively - Read the Docs

Read the Docs now natively supports uv, bringing faster and simpler Python dependency installs to your docs builds.

Support the Django Software Foundation by buying PyCharm at a 30% Discount

JetBrains and the Django Software Foundation team up again to offer 30% off PyCharm while matching donations to fund Django's core development and community programs.


Django Software Foundation

DSF member of the month - Rob Hudson

Rob Hudson, creator of Django Debug Toolbar, reflects on his open source journey, Django's community spirit, and bringing Content Security Policy support into Django core.


Python Software Foundation

Announcing Python Software Foundation Fellow Members for Q1 2026! πŸŽ‰

The Python Software Foundation has announced its first 2026 class of Fellows, recognizing community leaders and contributors from around the world.


Wagtail CMS News

Save the 🌎 : Delete your Stuff!

For Earth Day, Wagtail makes the case that deleting old emails, files, and forgotten drafts is a simple way to cut digital clutter and lighten your carbon footprint.


Updates to Django

Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! πŸš€

Last week we had 26 pull requests merged into Django by 13 different contributors - including a first-time contributor! Congratulations to Gary Badwal for having their first commits merged into Django - welcome on board!

A huge congratulations for the work done at DjangoCon Europe 2026's sprints: 4 of those PRs were merged during the sprints 🏰

News in Django:


Sponsored Link

You know @login_required. Now meet @app.reasoner(). AgentField turns Python functions into production AI agents, with structured output, async execution, and agent discovery. Every decorator becomes a REST endpoint. Open source, Apache 2.0. Python, Go & TypeScript SDKs.


Articles

DjangoCon Europe 2026 - A Brief Report


How to Safely Update Your Dependencies

A practical guide to safer dependency updates, covering hashes, GitHub Action pinning, cooldown windows, and automated upgrades to reduce supply chain risk.

Django: fixing a memory "leak" from Python 3.14's incremental garbage collection

Adam Johnson explains how a Django migration memory "leak" on Python 3.14 led to a clever workaround and helped expose issues with its new garbage collector.

PDM Rocks! | stuartm.nz

PDM is winning over Django developers with an easy switch from uv, smarter dependency controls, and a polished project workflow.

DjangoCon Europe 2026 Review

A first-time speaker's perspective on DjangoCon Europe 2026 in Athens, packed with standout talks, community energy, and inspiration for the year ahead.

Djangocon Europe: Django templates on the frontend? - Christophe Henry

A DjangoCon Europe talk explores using transpiled Django templates in the browser to power dynamic forms without constant server requests.


Events

Django on the Med is back!

Django on the Med returns this September with three free days of focused Django sprints in beautiful Pescara, with registration now open.

PyBeach 2026

PyBeach returns on October 24, 2026 in Santa Monica, with tickets on sale now and its call for speakers open through June 7.


Django Job Board

Junior Software Developer (Apprentice) at UCS Assist πŸ†•

Technical Lead at UCS Assist πŸ†•

Web Developer at Crossway πŸ†•

PyPI Sustainability Engineer at Python Software Foundation πŸ†•


Projects

wemake-services/django-modern-rest

Modern REST framework for Django with types and async support!

24 Apr 2026 3:00pm GMT

Planet Python

Real Python: The Real Python Podcast – Episode #292: Becoming a Better Python Developer Through Learning Rust

How can learning Rust help make you a better Python Developer? How do techniques required by a compiled language translate to improving your Python code? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.



24 Apr 2026 12:00pm GMT

Real Python: Quiz: AI Coding Agents Guide: A Map of the Four Workflow Types

In this quiz, you'll test your understanding of AI Coding Agents Guide: A Map of the Four Workflow Types.

By working through this quiz, you'll revisit the four common workflow types for AI coding agents: IDE, terminal, pull request, and cloud. You'll also get a chance to review how to match the right workflow to the task in front of you.



24 Apr 2026 12:00pm GMT

The Python Coding Stack: Doubling Down on Python in The Age of AI

If you've been wondering where I've been, yes, it's been quieter than usual around here. No dramatic reason. Just life, work, the usual stuff.

But one thing kept catching my attention: all the noise outside. Everyone's talking about AI writing code, agents shipping products, the death of programming. And every so often, someone asks me - usually with that slightly guilty look of someone who thinks they're about to insult my livelihood - "but do you really still need to learn Python? In this age?"

So I've been thinking about it. Properly. And I wanted to share where I've landed, because I think the answer matters and it's different from what some hot takes suggest.

Do I still need to learn to code?

The way we write computer programs is changing. I don't have a crystal ball for what programming looks like in five or ten years.

But here's the thing: neither does anyone else.

Here's what I know from watching others and experimenting with AI tools in my own work: right now, the people getting the most out of AI are the ones who already know how to code.

Where we stand today:

We're in an era where some coding knowledge takes you much further than it could have a few years ago. That's not an argument against learning to code. It's an argument for it.

"But what about those vibe coding people? They seem to be shipping things." Some are. I'll be honest about that.

The projects I've seen from pure vibe coders tend to be smaller, tend to follow well-trodden patterns, and often hit a ceiling when something goes slightly wrong or slightly off-piste. Which is fine for a side project. Using this approach, I just created a useful dashboard that helps me organise my day the way I want.

But it tells you something: the AI does the heavy lifting on the known stuff. The moment something needs genuine thinking, you need the human who knows what's going on beneath the surface.

AI-Assisted Human Coding and Human-Assisted AI Coding

Most serious work right now is a partnership.

Sometimes it's AI-assisted human coding. The human drives, AI assists.

Sometimes it's human-assisted AI programming. The AI writes most of the code, but the human knows what to ask for, how to steer it toward good design, how to evaluate whether the output actually makes sense.

Even when the coding looks like it's done by AI, the person prompting and reviewing it is generally an experienced programmer. They've learned enough Python to know what's reasonable, what's a red flag, and when the AI is confidently wrong.

I've been experimenting with agentic AI over the past few weeks. I've tackled side projects I'd never have had the time for before. I didn't have the time to start them, let alone finish them. Some of that output will show up in other places - stay tuned! But not here. The Python Coding Stack is the place for my writing.

The Programming Mindset When Talking to AI Agents

Here's one thing I noticed. Talking to AI agents isn't like talking to humans. But it isn't like talking to computers (a.k.a. programming) either.

You need both qualities at the same time.

You need the clarity and communication skills you'd use with a person - explaining context, setting direction, knowing what matters, using clear language.

And you need the precision you'd use when programming - no ambiguity, clear intent, structure.

A good programmer who's also a good communicator is the best human to work with AI agents.

That's not coincidental. The same thought habits that make you an effective programmer also make you an effective prompter and reviewer when AI is involved.

I'll share some of the prompts I'm using in a future post and analyse them to discuss why I wrote what I wrote. Learning to code well gives you an unfair advantage in this new world.

There's Never Been a Better Time to Learn Python

Here's what I've convinced myself of after all this. There's never been a better time to learn Python.

A few years ago, you needed to reach an intermediate-to-advanced level before you could do something genuinely useful with Python. The bar was high. Now, with AI assistance, the bar is lower. Less Python knowledge takes you further than ever.

What used to need expertise can now be explored with curiosity and a bit of intermediate-ish-level Python.

That's not replacing deeper learning. It's making the entry point more accessible. And once you're in, you can go as deep as you want.

So yes, I'll keep coding in Python. Sometimes with a bit of help from AI. Sometimes with a lot of help from AI.

And yes, I'll keep writing about Python here, as I've always done.

The Fun Factor

Here's the thing we don't talk about enough when discussing programming. It's fun. It's challenging. It's rewarding. It's fulfilling. It's stimulating. It keeps my brain active.

I code because I enjoy it. I'll keep writing about Python because I enjoy that, too, and because I find value in sharing here - for myself and, hopefully, for you too.

Normal service resumes here. More Python posts coming. And maybe, just maybe, some of the AI things I'm learning will make their way in, too.

Psst, did you know you can become a premium member to be a part of The Club? It would mean so much to me!

Subscribe now




Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members' forum, and more.

Subscribe now

You can also support this publication by making a one-off contribution of any amount you wish.

Support The Python Coding Stack


For more Python resources, you can also visit Real Python, where you may even stumble on one of my own articles or courses!

Also, are you interested in technical writing? Would you like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

24 Apr 2026 8:53am GMT

23 Apr 2026

Django community aggregator: Community blog posts

DjangoCon 2026 Review

This week I got back from my third DjangoCon Europe. This year it was Athens, and once again it was an amazing experience, made slightly stressful this time by giving a last-minute talk: I had proposed it and been rejected, but a late cancellation meant I got the opportunity to give it after all! So I spent most of the conference preparing the slides and demo and practising. I also managed to squeeze in a lightning talk on Thursday for the online community working group, to advertise it and call out that we can and should do better online as a community - there's more to do and we need help!

The main talk I did was essentially advertising Django prodserver, the package I've created, but framed as an API design talk: Django lacks a deployment API story, and I was focusing on running Django projects. We lack a production story, and the argument about runserver is that the command doesn't really communicate what it's actually doing. I think it generated a bit of interest - lots of people appreciated it, I think, or enjoyed it.

I thoroughly enjoyed the other talks as well. There were some excellent database talks from Tim Bell at Kraken, Jake Howard from Torchbox, and more folks at Kraken, Charlie and Sam, talking about subatomic. A really excellent set of talks overall this year; I could go through each one, but at each talk I attended I either learnt something new or found a new package to experiment with later this year. Finally, the keynotes are worth a mention: I caught most of Daniele's, on philosophy and documentation, and Carlson's was excellent to kick us off, on types and his new package django-mantle. I was very sad to miss the sprints this year, but I had to get back for family commitments at the weekend, which made for a fun trip home!

Finally, the socials were great! I have a slight bias for Tuesday's pre-conference Django social; the afterparty on Thursday was smooth, with a visit to Django Gelato afterwards - would 100% recommend for any visit to Athens. The venue was excellent and allowed multiple opportunities for networking with lots of folks at different times. I'm already excited for 2027, where I think I might volunteer and maybe propose a talk again, but that entirely depends on what happens in the next 6 months! Or perhaps next year I may give the US conference a go. In the meantime I'm hoping to attend Django on the Med (and/or perhaps a Django off the Med!)

23 Apr 2026 5:00am GMT

20 Apr 2026

Django community aggregator: Community blog posts

Django: fixing a memory β€œleak” from Python 3.14’s incremental garbage collection

Back in February, I encountered an out-of-memory error while migrating a client project to Python 3.14. The issue occurred when running Django's database migration command (migrate) on a limited-resource server, and seemed to be caused by the new incremental garbage collection algorithm in Python 3.14.

At the time, I wrote a workaround and started on this blog post, but other tasks took priority and I never got around to finishing it. But four days ago, Hugo van Kemenade, the Python 3.14 release manager, announced that the new garbage collection algorithm will be reverted in Python 3.14.5, and the next Python 3.15 alpha release, due to reports of increased memory usage.

Here's the story of my workaround, as extra evidence that reverting incremental garbage collection is a good call.

Python 3.14's incremental garbage collection

Python (well, CPython) has a garbage collector that runs regularly to clean up unreferenced objects. Most objects are cleaned up immediately when their reference count drops to zero, but some objects can be part of reference cycles, where some set of objects reference each other and thus never reach a reference count of zero. The garbage collector sweeps through all objects to find and clean up these cycles.
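
To make that concrete, here's a minimal sketch of my own (not from the post) showing that reference counting alone never frees a cycle, while a collector sweep does:

import gc
import weakref


class Node:
    pass


a, b = Node(), Node()
a.partner, b.partner = b, a  # a <-> b: a reference cycle
alive = weakref.ref(a)  # observe 'a' without keeping it alive

del a, b  # refcounts never reach zero, so nothing is freed yet...
print(alive() is not None)  # True: the cycle is still in memory
gc.collect()  # ...until the collector sweeps for cycles
print(alive() is None)  # True: now it's gone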

Python 3.14 changed garbage collection to operate incrementally. Previously, a garbage collection run would sweep through all objects in one go, but this could lead to "stop the world" stalls where your program's real work could pause for seconds while the garbage collector did its job. The incremental garbage collection algorithm instead does a fraction of the work at a time, spreading out the cost of garbage collection.

Here's the full release note (historical source):

Incremental garbage collection

The cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps.

There are now only two generations: young and old. When gc.collect() is not called directly, the GC is invoked a little less frequently. When invoked, it collects the young generation and an increment of the old generation, instead of collecting one or more generations.

The behavior of gc.collect() changes slightly:

  • gc.collect(1): Performs an increment of garbage collection, rather than collecting generation 1.
  • Other calls to gc.collect() are unchanged.

(Contributed by Mark Shannon in 108362.)

The problem

I'd been helping one of my clients upgrade to Python 3.14 for a few months, chipping away at compatibility work like upgrading dependencies and fixing deprecations. Tests were finally all passing and everything was working on the local development server. The next stop was to launch a temporary deployment using Python 3.14 via Heroku's review apps feature.

At the basic tier, Heroku review apps use fairly resource-constrained servers, including just 512MB of RAM, with the ability to temporarily burst up to nearly 1GB (200%). Paying for larger servers is an option, but unfortunately the next step up is pretty expensive.

When I launched a review app for my Python 3.14 branch, I found its release phase failed while running migrate. Inspecting the logs, I found the migrations started fine:

$ heroku logs --app example-python-314-wsgk3w --num 1000 | less
...
app[release.6634]: System check identified no issues (26 silenced).
app[release.6634]: Operations to perform:
app[release.6634]: Apply all migrations: admin, auth, contenttypes, ...
app[release.6634]: Running migrations:

…but partway through, these messages started appearing:

heroku[release.6634]: Process running mem=527M(101.5%)
heroku[release.6634]: Error R14 (Memory quota exceeded)

…ramping up until the 200% mark:

heroku[release.9599]: Process running mem=977M(190.3%)
heroku[release.9599]: Error R14 (Memory quota exceeded)

…and finally the termination of the release process:

heroku[release.9599]: Process running mem=1033M(201.7%)
heroku[release.9599]: Error R15 (Memory quota vastly exceeded)
heroku[release.9599]: Stopping process with SIGKILL

These messages came from Heroku's process management layer, which terminated the memory-hungry release process with SIGKILL after the hard threshold of 1GB memory usage was breached. Repeat attempts hit the same issue.

I was confused: migrations should not consume much memory. While they create a lot of temporary objects (Django model classes and fields) in order to calculate the SQL to send to the database, such objects are all short-lived and should be garbage-collected fairly swiftly. Additionally, migrations worked fine on the local and CI environments, and they'd never had memory issues on previous Python versions.

It looked like there was a memory leak, and it was time to dig in.

Initial investigation

I first profiled memory usage of migrate locally using Memray, the memory profiler that I covered in my previous post, using:

$ memray run manage.py migrate

The profiles showed that memory usage had slightly increased on Python 3.14 compared to 3.13, but revealed no memory leak (a pattern of continual growth). Still, I made some optimizations to defer some imports, saving about 30% of startup memory usage, and tried again, to no avail.
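
As an aside, the import-deferral pattern looks something like this sketch, with a stand-in function and modules, since the post doesn't say which imports were deferred:

def render_report(rows):
    # Deferred imports: these modules are loaded only on the code path
    # that needs them, instead of at Django startup for every command.
    import csv
    import io

    buffer = io.StringIO()
    csv.writer(buffer).writerows(rows)
    return buffer.getvalue()


print(render_report([["a", 1], ["b", 2]]))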

I then had the idea to profile on a Heroku dyno directly. After hacking the release process to not run migrations, I built a review app and SSH'd into its web server:

$ heroku ps:exec -a example-python-314-rspwtc --dyno web.1 bash
Establishing credentials... done
Connecting to web.1 on β¬’ example-python-314-rspwtc...
~ $

Initially, I tried using Memray's live mode to profile the migrations as they ran:

$ memray run --live manage.py migrate

While this tool looks great for some situations, it didn't really work here, especially since it seized up after Heroku terminated the server.

I then tried running the default memray run command:

$ memray run manage.py migrate
Writing profile results into memray-manage.py.724.bin

…then, on my local computer, I repeatedly ran this command to copy down the results file:

$ trash memray-manage.py.724.bin && heroku ps:copy -a example-python-314-rspwtc --dyno web.1 memray-manage.py.724.bin

I was a bit worried here that the Memray binary file might be corrupted due to copying it while memray run was generating it. But with a final truncated copy left over after the server crashed, I asked Memray to generate a flamegraph for it:

$ memray flamegraph memray-manage.py.724.bin

…and it worked! Kudos to the Memray team for making their output format usable even when incomplete.

This more detailed flamegraph revealed more than 50% of the memory usage was allocated in ModelState.render(), which creates temporary model classes:

class ModelState:
    ...

    def render(self, apps):
        """Create a Model object from our current state into the given apps."""
        ...
        return type(self.name, bases, body)

This information hinted that these temporary model classes were hanging around beyond their expected short lifetime, leading to the memory leak. For example, every model class could end up in a list intended for debugging, accidentally extending the lifetime of these temporary classes.
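
As an illustration, in a sketch of my own rather than the project's code, a plain list pins such classes in memory indefinitely, while a WeakSet, as used in the snippet below, merely observes them:

import gc
import weakref

debug_log = []  # strong references: accidentally extends lifetimes
seen = weakref.WeakSet()  # weak references: safe for observation

for i in range(3):
    cls = type(f"TempModel{i}", (), {})
    debug_log.append(cls)
    seen.add(cls)

del cls
gc.collect()
print(len(debug_log), len(seen))  # 3 3: the list keeps all three alive

debug_log.clear()
gc.collect()
print(len(seen))  # 0: nothing else was keeping them alive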

I decided to dig a bit deeper using machete-mode debugging, with the below snippet that captures the temporary model classes and logs details about them. I wrote this within the Django settings file, where it was guaranteed to run at Django startup time, before the migrate management command.

import atexit
import gc
import tracemalloc
import weakref
from itertools import islice

from django.db.migrations.state import ModelState

tracemalloc.start(2)

orig_render = ModelState.render

rendered_classes = weakref.WeakSet()


def wrapped_render(*args, **kwargs):
    cls = orig_render(*args, **kwargs)
    rendered_classes.add(cls)
    return cls


ModelState.render = wrapped_render


@atexit.register
def show_referrers():
    print(f"🎯 {len(rendered_classes)} classes referred to.\n")

    for cls in islice(rendered_classes, 2):
        print(f"🎁🎁🎁 {cls!r} 🎁🎁🎁")
        for i, referrer in enumerate(gc.get_referrers(cls), start=1):
            print(f"🍌 Referrer #{i}: {referrer!r}")
            if tb := tracemalloc.get_object_traceback(referrer):
                print("\n".join(tb.format(most_recent_first=True)))
            print()
        print()
        print()

Note:

  1. tracemalloc.start() starts Python's built-in memory allocation tracking.
  2. The ModelState.render() method was monkeypatched with a wrapper that stores every temporary model class in a WeakSet.
  3. The @atexit.register-decorated function runs at the end of the program, and logs two things.
  4. The first piece of logging is the number of temporary model classes still alive at the end of the program, which should be close to zero. (Some may stick around from the final migration state.)
  5. The second piece of logging iterates over the first two live temporary model classes and logs their name and their referring objects, discovered via gc.get_referrers(). For each referring object, it also logs the traceback of where that object was allocated, using tracemalloc.get_object_traceback() (which is why tracemalloc.start() was needed at the beginning).
  6. The emojis are a bit of fun to make the log messages easier to skim through. I have no idea why I picked 🎁 and 🍌!!

The output from this hook was voluminous, even with the limit to the first two live classes. For example, here's the output for a temporary ContentType model class:

🎁🎁🎁 <class '__fake__.ContentType'> 🎁🎁🎁
🍌 Referrer #1: <generator object WeakSet.__iter__ at 0x1234ef300>
  File "/.../example/core/apps.py", line 45
    for cls in islice(rendered_classes, 2):

...

🍌 Referrer #11: {'name': 'model', ..., 'model': <class '__fake__.ContentType'>}
  File "/.../.venv/lib/python3.14/site-packages/django/utils/functional.py", line 47
    res = instance.__dict__[self.name] = self.func(instance)
  File "/.../.venv/lib/python3.14/site-packages/django/db/models/fields/__init__.py", line 1210
    self.validators.append(validators.MaxLengthValidator(self.max_length))

I checked the live referrers for a few classes, and they all seemed to be expected. However, it did reveal just how many cycles exist between ORM objects. For example, model classes refer to their field objects, which in turn refer back to their model classes, thanks to Django's Field.contribute_to_class() creating this reference:

def contribute_to_class(self, cls, name, private_only=False):
    ...
    self.model = cls
    ...
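
You can see this back-reference directly in a Django shell - a hedged illustration, run inside manage.py shell of any project with the contenttypes app installed, using ContentType just as a familiar built-in model:

>>> from django.contrib.contenttypes.models import ContentType
>>> field = ContentType._meta.get_field("model")
>>> field.model is ContentType  # field refers back to its model class
True
>>> field in ContentType._meta.get_fields()  # model class refers to its fields
True

Together, the two references form exactly the kind of cycle that only the cyclic garbage collector can clean up.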

Anyway, from comparing the output between Python 3.13 and 3.14, I could see that no new references were being created on Python 3.14. It seemed likely that the incremental garbage collection algorithm was the culprit.

The workaround

Given the investigation, I wanted to work around the issue by forcing a full garbage collection sweep with gc.collect() after each migration file ran. I came up with the below code, saved as management/commands/migrate.py in one of the project's Django apps. It extends the default migrate command to run gc.collect() after each successful migration (where "apply" is forwards and "unapply" is backwards).

import gc

from django.core.management.commands.migrate import Command as BaseCommand


class Command(BaseCommand):
    """Extended 'migrate' command."""

    def migration_progress_callback(self, action, migration=None, fake=False):
        """
        Extend Django's migration progress reporting to force garbage
        collection after each migration. This is a workaround to keep memory
        usage low, especially because we have a low limit on Heroku. It seems
        the incremental garbage collector introduced in Python 3.14 cannot
        keep up with the migration process's tendency to create many cyclical
        objects, so our best fallback is to force collection of everything
        after each migration is applied or unapplied.

        https://adamj.eu/tech/2026/04/20/django-python-3.14-incremental-gc/
        """
        super().migration_progress_callback(action, migration=migration, fake=fake)
        if action in ("apply_success", "unapply_success"):
            gc.collect()

It felt a bit hacky, but it did the trick! The review app launched successfully, showing a flat memory profile as before.

We then continued to deploy to staging and production without any issues, and the team have been happily using Python 3.14 for over a month now.

Fin

Well, that's where the tale ends right now. After the incremental garbage collection algorithm is reverted in Python 3.14.5, I guess I'll be able to remove this workaround.

While it would be nice to have incremental garbage collection work well, it's clear that the current implementation has some issues. I think the core team is making the right call reverting it, but hopefully there will be energy to improve the feature for the future.

May your garbage be collected efficiently and without fuss,

-Adam

20 Apr 2026 4:00am GMT

04 Apr 2026

Planet Twisted

Donovan Preston: Using osascript with terminal agents on macOS

Here is a useful trick that is unreasonably effective for simple computer-use goals with modern terminal agents. On macOS, there has been a terminal osascript command since the original release of Mac OS X. All you have to do is suggest your agent use it, and it can perform any application-control action available in any AppleScript dictionary for any Mac app. No MCP setup or tools required at all. Agents are much more adept at using terminal commands, especially ones that haven't changed in 30 years. Having a computer-control interface that hasn't changed in 30 years and has extensive examples in the Internet corpus means modern models understand how to use these tools basically effortlessly.

macOS locks down these permissions pretty heavily nowadays, though, so you will have to grant the application-control permission to your terminal. But once you have done that, the range of possibilities for commanding applications using natural language is quite extensive. Also, for both Safari and Chrome on the Mac, you are going to want to turn on the JavaScript-over-AppleScript permission. This basically allows Claude or another agent to debug your web applications live for you as you are using them. In Chrome, go to the View menu, Developer submenu, and choose "Allow JavaScript from Apple Events". In Safari, it's under the Safari menu, Settings, Developer, "Allow JavaScript from Apple Events".

Then you can say something like "Hey Claude, would you please use osascript to navigate the front Chrome tab to Hacker News". Once you suggest using osascript in a session, the agent will figure out pretty quickly what it can do with it. Of course you can ask it to do casual things like open your Mail app or whatever, then figure out what else works, like "please click around my web app" or "check the JavaScript console for errors".

Another very important tip for using modern agents: try to practice using speech-to-text. I think speaking might be something like five times faster than typing. It takes a lot of time to get used to, especially after a lifetime of programming by typing, but it's a very interesting and different experience, and once you have a lot of practice it starts to feel effortless.
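
For instance, the Chrome navigation request earlier in the post might look like this from Python - a sketch based on Chrome's standard scripting dictionary, assuming you have granted your terminal the Automation permission:

import subprocess

# Ask Chrome to navigate its frontmost tab, via AppleScript.
script = (
    'tell application "Google Chrome" '
    'to set URL of active tab of front window to "https://news.ycombinator.com"'
)
subprocess.run(["osascript", "-e", script], check=True)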

04 Apr 2026 1:31pm GMT

16 Mar 2026

Planet Twisted

Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control

I have been using macOS Voice Control for about three years. At first it was a way to reduce pain from excessive computer use. It has been a real struggle: decades of computer-use habits with typing and the mouse are hard to overcome!

Text selection and manipulation commands work quite well in macOS-native apps, such as apps written in Swift, or Safari on an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo", where foo is a word in the text box, doesn't work at all, or there are off-by-one errors when moving the cursor or extending the selection.

I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text selection method! This is really going to improve my speed.

In the long run, I believe voice control of computers is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing and the number of times it misses or misunderstands commands still really hold it back. I've been learning the macOS Voice Control command set for years now and I still reach for the keyboard and mouse way too often.

16 Mar 2026 11:04am GMT

04 Mar 2026

Planet Twisted

Glyph Lefkowitz: What Is Code Review For?

Humans Are Bad At Perceiving

Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.

We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.

Each of these has implications for the fundamental limitations of code review as an engineering practice.

Never Send A Human To Do A Machine's Job

When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate - and, thanks to our old friend "alert fatigue" above, ideally also to remedy - that type of error. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but they don't get tired, distracted, or overloaded.

Don't blame reviewers for missing these things.

Code review should not be how you catch bugs.

What Is Code Review For, Then?

Code review is for three things.

First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
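
As a sketch of what that can look like - my example, not the post's, with a made-up banned pattern and source layout - even a tiny test can retire a whole class of review comments:

import pathlib

BANNED = "datetime.now("  # the bug class reviewers kept catching


def test_no_naive_datetimes():
    # Fail CI whenever the banned call appears, so no human reviewer
    # has to stay vigilant for this class of error any more.
    offenders = [
        str(path)
        for path in pathlib.Path("src").rglob("*.py")
        if BANNED in path.read_text()
    ]
    assert not offenders, f"Use timezone-aware datetimes instead: {offenders}"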

Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.

You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.

Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".

Oops, Surprise, This Post Is Actually About LLMs Again

Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. Thus, an important corollary of understanding that code review is a social activity is that LLMs are not social actors, so you cannot rely on code review to inspect their output.

My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.

When you relate to a human colleague, you will expect that:

  1. you can make decisions about what to focus on based on their level of experience and areas of expertise to know what problems to focus on; from a late-career colleague you might be looking for bad habits held over from legacy programming languages; from an earlier-career colleague you might be focused more on logical test-coverage gaps,
  2. and, they will learn from repeated interactions so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it,

With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.

You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.

The LLM also can't really learn. An intuitive response to this problem is to simply continue adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem, "context rot", is somewhat fundamental to the nature of the technology.

Thus, code generators must be treated more adversarially than you would treat a human code review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that evaluates the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window the way a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.

To Sum Up

Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.

If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure that its agentic loop will fail on its own, immediately, the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.

But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!

04 Mar 2026 5:24am GMT

22 Jan 2026

Planet Plone - Where Developers And Integrators Write

Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, and Docker, and, as a game changer, uv, making the installation of Python packages much faster.

With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to set up the server and deploy to it. Our sysadmins already had some other scripts, so we needed to integrate that.

First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.

Maik Derstappen showed me copier, yet another templating tool. Our idea: create a cookieplone project, and then use copier to modify it.

What about the deployment? We are on GitLab. We host our own runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This activates a pipeline to check, test, and build. When it is merged, we bump the version using release-it.

Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.

For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast; we are testing the current pipelines and process to see if they work properly. In the future we can think about automating deployment. For now, we just ssh to the server and perform some commands there with docker.

Future improvements:

  • Start the docker containers and curl/wget the /ok endpoint.
  • lock files for the backend, with pip/uv.

22 Jan 2026 9:43am GMT

Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.

There are several challenges when doing Plone migrations:

  • Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
  • Complex data structures. For example a Folder with a Link as default page, with pointed to some other content which meanwhile had been moved.
  • Migrating Classic UI to Volto
  • Also, you might be migrating from a completely different CMS to Plone.

How do we do migrations in Plone in general?

  • In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
  • Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.

Let's look at export/import, which has three parts:

  • Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
  • Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
  • Load: Transmogrifier, collective.exportimport, plone.exportimport.

Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.

collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.

Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.

Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.

collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.

22 Jan 2026 9:43am GMT

Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.

I am team lead of the Plone Admin team, and work at kitconcept.

The current state: see the keynotes, lots happening on the frontend. Good.

The current state of our IT: very troubling and daunting.

This is not a 'blame game'. But focussing on resources and people should be a first priority this conference. We are a real volunteer organisation; nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.

The Admin team is 4 senior Plonistas as all-round admins, 2 release managers, and 2 CI/CD experts, including 3 former board members - everyone overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.

We are a volunteer organisation and don't have a big company behind us that can throw money at the problems. Strength and weakness. Across all of society it is a problem that volunteer numbers are decreasing.

Root causes:

  • We failed to scale down our IT landscape and usage in time.
  • We have no clear role descriptions or team descriptions, and we can't ask for a minimum effort per week or month.
  • The trend is more communication channels, platforms to join and promote yourself, apps to use.

Overview of what we have to keep running as the Admin team:

  • Support for the main development process: GitHub, CI/CD, Jenkins main and runners, dist.plone.org.
  • Main communication and documentation: plone.org, docs.plone.org, training.plone.org, conference and country sites, Matomo.
  • Community office automation: Google Docs, Workspace, Quaive, Signal, Slack.
  • Broader: Discourse and Discord

The first two are really needed; with the second we already have some problems.

Some services are self-hosted, but there are also a lot of SaaS services/platforms. In all, it is quite a bit.

The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.

There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.

On GitHub we have a sponsored OSS plan, so we have extra features for free, but it is not enough by far. User management is hard: it's hard to get people out, and you can't contact your members directly - e-mail addresses have been removed, for privacy. Features get added on GitHub with no complete changelog.

Challenge on GitHub: we have public repositories, but we also have our deployments in there. The only really secure option would be private repositories; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Auditing is available for only 6 months. A simple question like: who has been active in the last 2 years? No, can't do.

Some actionable items on GitHub:

  • We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
  • Clean up users; use the Contributors and Developers teams.
  • Active members: check who has contributed the last years.
  • There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
  • More fine grained teams to control repository access.
  • Use of GitHub Discussions for some central communication of changes.
  • Use project management better.
  • The elephant in the room, which we have had practice with this year and ongoing: the Collective organisation. This was free for all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock down some things there.
  • Keep deployments and the secrets all out of GitHub, so no secrets can be stolen.

Google Workspace:

  • We are dependent on this.
  • No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
  • Spam and moderation issues
  • We could move to Google Docs for all kinds of things, and use Google Workspace drives for all things. But the Drive UI is a mess, so docs can end up in your personal account without you realizing it.

User management:

  • We need separate standalone user management, but implementation is not clear.
  • We cannot contact our members one on one.

Oh yes, Plone websites:

  • upgrade plone.org
  • self preservation: I know what needs to be done, and can do it, but have no time, focusing on the previous points instead.

22 Jan 2026 9:43am GMT