20 Apr 2026

Django community aggregator: Community blog posts

Django: fixing a memory “leak” from Python 3.14’s incremental garbage collection

Back in February, I encountered an out-of-memory error while migrating a client project to Python 3.14. The issue occurred when running Django's database migration command (migrate) on a limited-resource server, and seemed to be caused by the new incremental garbage collection algorithm in Python 3.14.

At the time, I wrote a workaround and started on this blog post, but other tasks took priority and I never got around to finishing it. But four days ago, Hugo van Kemenade, the Python 3.14 release manager, announced that the new garbage collection algorithm will be reverted in Python 3.14.5 and the next Python 3.15 alpha release, due to reports of increased memory usage.

Here's the story of my workaround, as extra evidence that reverting incremental garbage collection is a good call.

Python 3.14's incremental garbage collection

Python (well, CPython) has a garbage collector that runs regularly to clean up unreferenced objects. Most objects are cleaned up immediately when their reference count drops to zero, but some objects can be part of reference cycles, where some set of objects reference each other and thus never reach a reference count of zero. The garbage collector sweeps through all objects to find and clean up these cycles.

Python 3.14 changed garbage collection to operate incrementally. Previously, a garbage collection run would sweep through all objects in one go, but this could lead to "stop the world" stalls where your program's real work could pause for seconds while the garbage collector did its job. The incremental garbage collection algorithm instead does a fraction of the work at a time, spreading out the cost of garbage collection.
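To see why a cycle collector is needed at all, here's a minimal sketch (plain CPython, no Django involved) of a reference cycle that reference counting alone can never free:

```python
import gc


class Node:
    pass


def make_cycle():
    a, b = Node(), Node()
    a.partner = b  # a holds b
    b.partner = a  # b holds a: a reference cycle
    # when this function returns, both refcounts stay at 1,
    # so reference counting alone will never free the pair


gc.disable()  # keep the demo deterministic
gc.collect()  # start from a clean slate
make_cycle()
freed = gc.collect()  # the cycle collector finds the unreachable pair
gc.enable()
# freed is at least 2: the two Nodes (plus their instance __dict__s)
```

This is exactly the kind of garbage that the (now reverted) incremental algorithm spread out over many small collection increments.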

Here's the full release note (historical source):

Incremental garbage collection

The cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps.

There are now only two generations: young and old. When gc.collect() is not called directly, the GC is invoked a little less frequently. When invoked, it collects the young generation and an increment of the old generation, instead of collecting one or more generations.

The behavior of gc.collect() changes slightly:

  • gc.collect(1): Performs an increment of garbage collection, rather than collecting generation 1.
  • Other calls to gc.collect() are unchanged.

(Contributed by Mark Shannon in 108362.)

The problem

I'd been helping one of my clients upgrade to Python 3.14 for a few months, chipping away at compatibility work like upgrading dependencies and fixing deprecations. Tests were finally all passing and everything was working on the local development server. The next step was to launch a temporary deployment using Python 3.14 via Heroku's review apps feature.

At the basic tier, Heroku review apps use fairly resource-constrained servers, including just 512MB of RAM, with the ability to temporarily burst up to nearly 1GB (200%). Paying for larger servers is an option, but unfortunately the next step up is pretty expensive.

When I launched a review app for my Python 3.14 branch, I found its release phase failed while running migrate. Inspecting the logs, I found the migrations started fine:

$ heroku logs --app example-python-314-wsgk3w --num 1000 | less
...
app[release.6634]: System check identified no issues (26 silenced).
app[release.6634]: Operations to perform:
app[release.6634]: Apply all migrations: admin, auth, contenttypes, ...
app[release.6634]: Running migrations:

…but partway through, these messages started appearing:

heroku[release.6634]: Process running mem=527M(101.5%)
heroku[release.6634]: Error R14 (Memory quota exceeded)

…ramping up until the 200% mark:

heroku[release.9599]: Process running mem=977M(190.3%)
heroku[release.9599]: Error R14 (Memory quota exceeded)

…and finally the termination of the release process:

heroku[release.9599]: Process running mem=1033M(201.7%)
heroku[release.9599]: Error R15 (Memory quota vastly exceeded)
heroku[release.9599]: Stopping process with SIGKILL

These messages came from Heroku's process management layer, which terminated the memory-hungry release process with SIGKILL after the hard threshold of 1GB memory usage was breached. Repeat attempts hit the same issue.

I was confused: migrations should not consume much memory. While they create a lot of temporary objects (Django model classes and fields) in order to calculate the SQL to send to the database, such objects are all short-lived and should be garbage-collected fairly swiftly. Additionally, migrations worked fine on the local and CI environments, and they'd never had memory issues on previous Python versions.

It looked like there was a memory leak, and it was time to dig in.

Initial investigation

I first profiled memory usage of migrate locally using Memray, the memory profiler that I covered in my previous post, using:

$ memray run manage.py migrate

The profiles revealed that memory usage had increased slightly on Python 3.14 compared to 3.13, but showed no memory leak (no pattern of continual growth). Still, I made some optimizations to defer some imports, saving about 30% of startup memory usage, and tried again, to no avail.

I then had the idea to profile on a Heroku dyno directly. After hacking the release process to not run migrations, I built a review app and SSH'd into its web server:

$ heroku ps:exec -a example-python-314-rspwtc --dyno web.1 bash
Establishing credentials... done
Connecting to web.1 on ⬢ example-python-314-rspwtc...
~ $

Initially, I tried using Memray's live mode to profile the migrations as they ran:

$ memray run --live manage.py migrate

While live mode looks great for some situations, it didn't work well here, especially since it seized up after Heroku terminated the server.

I then tried running the default memray run command:

$ memray run manage.py migrate
Writing profile results into memray-manage.py.724.bin

…then, on my local computer, I repeatedly ran this command to copy down the results file:

$ trash memray-manage.py.724.bin && heroku ps:copy -a example-python-314-rspwtc --dyno web.1 memray-manage.py.724.bin

I was a bit worried here that the Memray binary file might be corrupted due to copying it while memray run was generating it. But with a final truncated copy left over after the server crashed, I asked Memray to generate a flamegraph for it:

$ memray flamegraph memray-manage.py.724.bin

…and it worked! Kudos to the Memray team for making their output format usable even when incomplete.

This more detailed flamegraph revealed that more than 50% of the memory usage was allocated in ModelState.render(), which creates temporary model classes:

class ModelState:
    ...

    def render(self, apps):
        """Create a Model object from our current state into the given apps."""
        ...
        return type(self.name, bases, body)

This information hinted that these temporary model classes were hanging around beyond their expected short lifetime, leading to the memory leak. For example, every model class could have ended up in a list intended for debugging, accidentally extending the lifetime of these temporary classes.
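The distinction matters: a strong container like that hypothetical debugging list keeps objects alive, while a weakref.WeakSet tracks them without extending their lifetime. A minimal illustration (CPython's immediate refcounting makes this deterministic):

```python
import weakref


class Temp:
    pass


# A strong container (like a debugging list) extends object lifetime:
debug_list = []
obj = Temp()
debug_list.append(obj)
del obj
assert len(debug_list) == 1  # still alive, kept by the list

# A WeakSet observes objects without keeping them alive:
tracked = weakref.WeakSet()
lone = Temp()
tracked.add(lone)
assert len(tracked) == 1
del lone
assert len(tracked) == 0  # freed as soon as the last strong reference went
```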

I decided to dig a bit deeper using machete-mode debugging, with the below snippet that captures the temporary model classes and logs details about them. I wrote this within the Django settings file, where it was guaranteed to run at Django startup time, before the migrate management command.

import atexit
import gc
import tracemalloc
import weakref
from itertools import islice

from django.db.migrations.state import ModelState

tracemalloc.start(2)

orig_render = ModelState.render

rendered_classes = weakref.WeakSet()


def wrapped_render(*args, **kwargs):
    cls = orig_render(*args, **kwargs)
    rendered_classes.add(cls)
    return cls


ModelState.render = wrapped_render


@atexit.register
def show_referrers():
    print(f"🎯 {len(rendered_classes)} classes referred to.\n")

    for cls in islice(rendered_classes, 2):
        print(f"🎁🎁🎁 {cls!r} 🎁🎁🎁")
        for i, referrer in enumerate(gc.get_referrers(cls), start=1):
            print(f"🍌 Referrer #{i}: {referrer!r}")
            if tb := tracemalloc.get_object_traceback(referrer):
                print("\n".join(tb.format(most_recent_first=True)))
            print()
        print()
        print()

Note:

  1. tracemalloc.start() starts Python's built-in memory allocation tracking.
  2. The ModelState.render() method was monkeypatched with a wrapper that stores every temporary model class in a WeakSet.
  3. The @atexit.register-decorated function runs at the end of the program, and logs two things.
  4. The first piece of logging is the number of temporary model classes still alive at the end of the program, which should be close to zero. (Some may stick around from the final migration state.)
  5. The second piece of logging iterates over the first two live temporary model classes and logs their name and their referring objects, discovered via gc.get_referrers(). For each referring object, it also logs the traceback of where that object was allocated, using tracemalloc.get_object_traceback() (which is why tracemalloc.start() was needed at the beginning).
  6. The emojis are a bit of fun to make the log messages easier to skim through. I have no idea why I picked 🎁 and 🍌!!

The output from this hook was voluminous, even with the limit to the first two live classes. For example, here's the output for a temporary ContentType model class:

🎁🎁🎁 <class '__fake__.ContentType'> 🎁🎁🎁
🍌 Referrer #1: <generator object WeakSet.__iter__ at 0x1234ef300>
  File "/.../example/core/apps.py", line 45
    for cls in islice(rendered_classes, 2):

...

🍌 Referrer #11: {'name': 'model', ..., 'model': <class '__fake__.ContentType'>}
  File "/.../.venv/lib/python3.14/site-packages/django/utils/functional.py", line 47
    res = instance.__dict__[self.name] = self.func(instance)
  File "/.../.venv/lib/python3.14/site-packages/django/db/models/fields/__init__.py", line 1210
    self.validators.append(validators.MaxLengthValidator(self.max_length))

I checked the live referrers for a few classes, and they all seemed to be expected. However, it did reveal just how many cycles exist between ORM objects. For example, model classes refer to their field objects, which in turn refer back to their model classes, thanks to Django's Field.contribute_to_class() creating this reference:

def contribute_to_class(self, cls, name, private_only=False):
    ...
    self.model = cls
    ...
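A hypothetical miniature of that cycle shows why such classes linger until the cycle collector runs (Model and Field here are bare stand-ins, not Django classes):

```python
import gc
import weakref


def make_model():
    class Field:
        pass

    field = Field()
    # mimic contribute_to_class(): class -> field and field -> class
    Model = type("Model", (), {"field": field})
    field.model = Model
    return weakref.ref(Model)


gc.disable()  # keep the demo deterministic
ref = make_model()
assert ref() is not None  # the cycle keeps the class alive...
gc.collect()
assert ref() is None  # ...until a full collection frees it
gc.enable()
```

Every temporary model class created during a migration run sits in cycles like this, waiting for the collector.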

Anyway, from comparing the output between Python 3.13 and 3.14, I could see that no new references were being created on Python 3.14. It seemed likely that the incremental garbage collection algorithm was the culprit.

The workaround

Given the investigation, I wanted to work around the issue by forcing a full garbage collection sweep with gc.collect() after each migration file ran. I came up with the below code, saved as management/commands/migrate.py in one of the project's Django apps. It extends the default migrate command to run gc.collect() after each successful migration (where "apply" is forwards and "unapply" is backwards).

import gc

from django.core.management.commands.migrate import Command as BaseCommand


class Command(BaseCommand):
    """Extended 'migrate' command."""

    def migration_progress_callback(self, action, migration=None, fake=False):
        """
        Extend Django's migration progress reporting to force garbage
        collection after each migration. This is a workaround to keep memory
        usage low, especially because we have a low limit on Heroku. It seems
        the incremental garbage collector introduced in Python 3.14 cannot
        keep up with the migration process's tendency to create many cyclical
        objects, so our best fallback is to force collection of everything
        after each migration is applied or unapplied.

        https://adamj.eu/tech/2026/04/20/django-python-3.14-incremental-gc/
        """
        super().migration_progress_callback(action, migration=migration, fake=fake)
        if action in ("apply_success", "unapply_success"):
            gc.collect()

It felt a bit hacky, but it did the trick! The review app launched successfully, showing a flat memory profile like on previous Python versions.

We then continued to deploy to staging and production without any issues, and the team have been happily using Python 3.14 for over a month now.

Fin

Well, that's where the tale ends right now. After the incremental garbage collection algorithm is reverted in Python 3.14.5, I guess I'll be able to remove this workaround.

While it would be nice to have incremental garbage collection work well, it's clear that the current implementation has some issues. I think the core team is making the right call reverting it, but hopefully there will be energy to improve the feature for the future.

May your garbage be collected efficiently and without fuss,

-Adam

20 Apr 2026 4:00am GMT

17 Apr 2026


Django News - 30% Off PyCharm Pro – 100% for Django - Apr 17th 2026

Introduction

Django News Newsletter is moving!

Just a quick heads up. We're planning to move our newsletter to a new platform next week.

If things look a little different when it shows up, it's still us.

Django Newsletter

News

PyCharm & Django annual fundraiser

JetBrains and the Django Software Foundation team up again to offer 30% off PyCharm while matching donations to fund Django's core development and community programs.

djangoproject.com

New Technical Governance - request for community feedback

Django proposes a simpler, more flexible technical governance model and is inviting community feedback ahead of a planned July 2026 rollout.

djangoproject.com

Could you host DjangoCon Europe 2027? Call for organizers

DjangoCon Europe 2026 is happening right now in Athens, Greece, but plans for 2027 have already begun. This post lays out the resources available to future organizers, including where to go for questions and support.

djangoproject.com

Reverting the incremental GC in Python 3.14 and 3.15 - Core Development

Python is rolling back its new incremental garbage collector in 3.14 and 3.15 after real-world memory issues, reverting to the proven generational model while rethinking a future reintroduction.

python.org

PEP 772: Packaging Council governance process (Round 3) - Packaging / Coordination

PEP 772 has officially been approved, creating a new Python Packaging Council to guide the future of packaging standards, tools, and ecosystem governance.

python.org

Django Software Foundation

Django Has Adopted Contributor Covenant 3

The 3.0 edition of the new Code of Conduct is here! This milestone represents the completion of a careful, community-driven process that began earlier this year.

djangoproject.com

DSF Board monthly meeting, April 9, 2026

The Django Software Foundation approved a modernized Code of Conduct, new working group charters, and key community initiatives, signaling a fresh push toward clearer governance and sustained project growth.

django.github.io

Python Software Foundation

PyCon US 2026: Why we're asking you to think about your hotel reservation

For many years, PyCon US has relied on hotel booking commissions to help pay for conference space. If you are attending this year, please use an official hotel: you'll be close to the venue and help support the conference.

pyfound.blogspot.com

Python Software Foundation News: Reflecting on Five Years as the PSF's First CPython Developer in Residence

Łukasz Langa looks back on five years of highlights, including the transition from bugs.python.org to GitHub issues, the replacement of the mostly manual CLA process with an automated system, the introduction of free threading to Python, and the replacement of the interpreter's interactive shell. While addressing thousands of bugs, he has also watched the roster of full-time paid developers in residence at the Python Software Foundation grow from one person to five.

pyfound.blogspot.com

Updates to Django

Today, "Updates to Django" is presented by Johanan Oppong Amoateng from Djangonaut Space! 🚀

Last week we had 12 pull requests merged into Django by 10 different contributors - including a first-time contributor! Congratulations to Jonathan Wu for having their first commits merged into Django - welcome on board!

This week's Django highlights: 🦄

Django Newsletter

Django Fellow Reports

Fellow Report - Natalia

A good chunk of this week focused on improving contributor workflows and reducing review overhead by introducing automated quality checks for PRs 🤖. This builds on prior experimentation (thanks @frankwiles) and seeks to provide early, actionable feedback for PR authors while helping maintainers focus on substantive review. We also had a flood of overly verbose, low-quality reports from the same person, which I closed swiftly, making use of the new guidelines we recently published in the security policy.

djangoproject.com

Fellow Report - Jacob

The last report before DjangoCon Europe. Lots of tickets triaged, reviewed, authored, discussed, and the usual kaleidoscope of miscellaneous tasks.

djangoproject.com

Django Fellow Report - Sarah

Django Fellow Sarah Boyce returns from maternity leave with part-time updates, tackling triage, reviews, security work, and GSoC prep while navigating connectivity challenges from Turkey.

djangoproject.com

Sponsored Link 1

You know @login_required. Now meet @app.reasoner(). AgentField turns Python functions into production AI agents, structured output, async execution, agent discovery. Every decorator becomes a REST endpoint. Open source, Apache 2.0. Python, Go & TypeScript SDKs.

agentfield.ai

Articles

Enforce Business Logic in the Database with Django

A practical guide to enforcing business logic at the database layer in Django using transactions, select_for_update locks, and CheckConstraint / UniqueConstraint to prevent race conditions and invalid data rather than relying on application-level validation.

lincolnloop.com

Let's talk about LLMs

James Bennett consolidates his thoughts on AI/LLMs in this wide-ranging piece, ending with a call to invest in software fundamentals instead of racing to adopt the latest AI craze.

b-list.org

Django Table, Filter and Export With Htmx

A reusable pattern for combining django-tables2, django-filter, and HTMX into a single generic view and template. Very cool stuff.

fundor333.com

Decoupling Your Business Logic from the Django ORM

Carlton Gibson's latest The Stack Report is a detailed dive into business logic and how to handle it in Django. This is a perennial topic, but he comes at it with decades of experience and wisdom.

buttondown.com

djust 0.4.0 - The Developer Experience Release

djust 0.4.0 is about developer experience - making everyday tasks faster, safer, and more intuitive. 30+ new features, critical bug fixes, and a security hardening pass that eliminated every known vulnerability.

djust.org

Why aren't we uv yet?

A decent chunk of new Python repos already use uv. Coding agents still overwhelmingly recommend pip and requirements.txt, while many users prefer uv.

aleyan.com

Events

Are You Attending PyCon, or Orbiting It?

PSF Board Member Georgi Ker makes a personal case for booking hotels via the official PyCon US website before April 24th.

georgiker.com

Design Articles

Under the hood of MDN's new frontend

From 2-min dev server starts to 2s. They rewrote MDN's entire frontend, ditching the React SPA for Lit web components, server components, and Rspack. The result: less JS shipped, scoped CSS, and a build pipeline that just works.

mozilla.org

Videos

Debunking Django Myths - Sarah Boyce at PyTV

Django Fellow Sarah Boyce gave a talk recently at PyTV titled, "Django Has a Marketing Problem: Debunking the Myths That Won't Die." It is a fantastic overview of what Django does well and what it can improve.

youtu.be

Incremental Typing in Django - Carlton Gibson

Former Django Fellow and current Django Chat podcast host Carlton Gibson, recently gave a talk titled, "Static Islands, Dynamic Sea: Some Thoughts on Incremental Typing." In it he talks about why Python's dynamic nature is a feature, not a bug, and demonstrates Mantle - a library of utilities for typing around Django's liquid core.

youtu.be

Sponsored Link 2

Annual PyCharm Promo - 30% off, all money goes to Django

The annual PyCharm + Django promotion is live until May 1st. This is the single biggest fundraiser for Django and has raised over $350,000 since 2016.

jetbrains.com

Podcasts

Django Tasks - Jake Howard

Episode 200(!) features Jake Howard, a Senior Systems Engineer at Torchbox and the author of DEP 14, django.tasks, the highlight feature in Django 6.0. We discuss his work on the Django security team, work with Wagtail, AI dabblings, and more.

djangochat.com

Django Job Board

Python Developer at Open Data Services

Remote UK role building Python data systems for social-impact projects, offering ~£48k plus profit share in a collaborative worker co-op.

djangojobboard.com

Projects

yassi/dj-signals-panel

Display registered Django signals and receivers, showing what fires and where.

github.com

dvf/opinionated-django

An opinionated Django project with Repository pattern, Pydantic DTOs, svcs DI, and Stripe-style ULID IDs.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

17 Apr 2026 3:00pm GMT

Djangocon EU: zero-migration encryption - Vjeran Grozdanic

(One of my summaries of the 2026 Djangocon EU in Athens).

Full title: zero-migration encryption: building drop-in encrypted field in Django.

He works at Sentry. Huge site with a Django backend and thousands of requests per second.

He had to add a new table to store 3rd party API credentials. Oh: should this be encrypted? Yes. But: each team has its own way to encrypt data. And there were at least 10 encryption keys here and there (as environment variables). And tens of places where encryption/decryption happens.

So: better to build a generic solution. Or use an existing generic solution. And yes, there are multiple libraries. EncryptedCharField looked nice. But the problem was all the existing data in the various places. Sentry is not a site that you can shut down for a while, so you have to do it with zero downtime. This means you can never change an existing column type.

A solution could be to add a new encrypted field next to the existing one. Then backfill it, make sure no new data is written to the old field, and finally remove the old field. But that's quite a job with all the different locations that would have to be changed.

A Field class in Django has get_prep_value() and from_db_value(). Those are called before storing data in the database and after reading it back. You could create a new CharField-like field that encrypts values in get_prep_value() and decrypts them in from_db_value().

You'd have to be able to recognise the old un-encrypted values. A solution: prefix encrypted values with enc:. Also key rotation can be handled this way, by including that in the prefix (enc:key2:).
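A rough sketch of that prefix scheme, in plain Python with base64 standing in for real encryption (a real field would use an authenticated cipher, keyed per key id); these hypothetical helpers mirror what a custom field's get_prep_value() and from_db_value() would do:

```python
import base64

PREFIX = "enc:"


def to_db(value: str, key_id: str = "key1") -> str:
    """What get_prep_value() would do before an INSERT/UPDATE."""
    token = base64.b64encode(value.encode()).decode()
    return f"{PREFIX}{key_id}:{token}"


def from_db(stored: str) -> str:
    """What from_db_value() would do after a SELECT: handle both
    encrypted rows and legacy plaintext rows."""
    if not stored.startswith(PREFIX):
        return stored  # old un-encrypted value, pass through unchanged
    _, key_id, token = stored.split(":", 2)
    # key_id says which key to decrypt with, enabling key rotation
    return base64.b64decode(token).decode()


assert from_db(to_db("s3cret")) == "s3cret"
assert from_db("legacy plaintext") == "legacy plaintext"
assert to_db("x", key_id="key2").startswith("enc:key2:")
```

Because un-prefixed values pass straight through, the existing column never needs a type change: zero-downtime, zero-migration.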

But there's also a JSONB field. They solved that by encrypting the JSON payload and writing a wrapper JSON object to the database, containing the encrypted payload plus the encryption key info.

https://reinout.vanrees.org/images/2026/kat2.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.

17 Apr 2026 4:00am GMT

Djangocon EU: supply chain attacks on Python projects - Mateusz Bełczowski

(One of my summaries of the 2026 Djangocon EU in Athens).

Full title: what's in your dependencies? Supply chain attacks on Python projects.

How supply chain attacks work: attackers don't attack your code directly, they target something you trust. A typical Django project has lots of dependencies: direct dependencies and "transitive dependencies", the dependencies of our dependencies. If you depend on requests, requests itself will pull in certifi and urllib3.

Possible package attacks:

  • Inject malicious code directly into the repo.
  • Create malicious package. Typosquatting (abusing typos), slopsquatting (abusing typos made by LLMs). "Brandjacking": quickly after deepseek became popular, a deepseekai package was published that stole credentials.
  • Compromise existing package. Credential stealing, CI/CD exploits.

What attackers typically do with access is to steal credentials. Environment variables, cloud keys (AWS_xyz), pypi tokens, ssh private keys, database URLs, saved passwords.

Example: num2words was hacked in July 2025. Phishing led to theft of maintainer credentials via a fake login page at pypj.org instead of pypi.org. The attackers then uploaded malicious releases with the captured credentials. The credentials weren't rotated, so a second attack happened a few days later. This malware targeted .pypirc files, leading to more compromises.

How can we defend against these kinds of attacks? It depends on the kind of attack. When publishing via GitHub Actions, use "trusted publishing": in that case there are no credentials to steal.

Another example: LiteLLM was compromised via trivy, a security scanner that itself was compromised... It in turn collected environment variables, secrets and ssh keys, bundled it all in a tarball and posted it to some legitimate-looking domain.

Some myths:

  • "Lockfiles protect us". No, they only prevent accidental upgrades, not when adding a package for the first time.
  • "Just don't install suspicious packages". Lots is installed via transitive dependencies.
  • "We run everything in Docker so we're safe". It limits the blast radius, but credentials and environment variables are still at risk.
  • "We can fully prevent attacks".

Some tips:

  • Use dependency cooldowns. uv has "exclude-newer", pip has "uploaded-prior-to". Don't be the first to install a fresh release, as most malicious packages are discovered within hours or days.
  • Pin versions and verify hashes.

Pypi is getting better:

  • Trusted publishing.
  • Project quarantine.
  • Attestations: cryptographic tools to verify the source.
  • Typosquatting protection.

AI has risks:

  • Slopsquatting. Hallucinated package names that get exploited.
  • Prompt injection via github issues.
  • Agents often "just" pip-install things directly.

Note: AI can also be used to detect malware! A small project started after the LiteLLM compromise managed to detect a dangerous different compromise almost the moment it was published. Nice!

https://reinout.vanrees.org/images/2026/kat8.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel. It was a cat on the hunt: relevant to the topic of this talk :-)

17 Apr 2026 4:00am GMT

Djangocon EU: lightning talks day 3

(One of my summaries of the 2026 Djangocon EU in Athens).

Announcement - Carlton Gibson

They've been working on improving the technical governance of Django. They'd like to get feedback. There's a blog post about it.

Oh, and look at the "30% off PyCharm" button on the django website, that raises quite a lot of funds for Django. PyCharm's sponsoring is a very sizeable financial part of Django, thanks!

Even more table partitioning with Django, Postgres and UUIDs - Tim Bell

(See his earlier talk on partitioning).

A UUID is 128 bits, usually displayed as a hex string. In version 7, it starts with the unix timestamp, followed by random fields. In version 8, you have more flexibility: you can customize it to put a specific value (an ID of a related field, in their case) in the first field.

Partitioning per UUID (they used it as their ID) then effectively also partitions on the related field.
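The version-7 layout (RFC 9562) is: 48 bits of unix-millisecond timestamp, then version and variant bits, then randomness. This hypothetical helper builds one by hand and recovers the timestamp prefix that makes time-based partition pruning possible:

```python
import os
import time
import uuid


def uuid7() -> uuid.UUID:
    """Hand-rolled UUIDv7: 48-bit unix-ms timestamp, 4-bit version,
    12 random bits, 2-bit variant, 62 random bits."""
    ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF       # 12 bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 bits
    value = (ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)


def timestamp_ms(u: uuid.UUID) -> int:
    """The first 48 bits are the creation time in milliseconds."""
    return u.int >> 80


u = uuid7()
assert u.version == 7
assert abs(timestamp_ms(u) - time.time_ns() // 1_000_000) < 1000
```

A v8 variant, as described in the talk, would replace that timestamp field with a related-object ID, so partitioning on the UUID partitions on the relation too.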

Speeding up Django startup times with lazy imports - Anze Pecar

Imports in Python can be slow. Luckily, python has something build-in to check it, the "importtime" flag:

python -X importtime manage.py check

He worked around the slow packages he found by importing them inside the functions where he used them. It worked, but it was ugly.
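The "ugly but effective" pattern is just a function-level import, sketched here with a stdlib module standing in for a heavy dependency:

```python
import sys


def summarize(values):
    # deferred: 'statistics' is only imported on the first call,
    # not when this module is imported at startup
    import statistics

    return statistics.mean(values)


assert summarize([1, 2, 3]) == 2
assert "statistics" in sys.modules  # loaded only once it was needed
```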

Look at hooks like post_worker_init in gunicorn: you can use them to pre-load the offending modules.

You can also wait for Python 3.15. PEP 810: explicit lazy imports!

PyLadies Seoul: rebooting a community for women in tech scenes - Hwayoung Cha

At PyCon Korea 2023 there were only three women in attendance. So: time to re-start PyLadies Seoul! And with success. One of the new attendees is now a CTO of a company (and also a PyLadies volunteer herself).

They'll also start a Django workshop soon.

Join your local PyLadies chapter!

What I learned while learning to solve a Rubik's cube - Venelin Stoykov

He learned to solve a Rubik's cube in about two weeks.

We can learn new things more easily by association with things we already know. We need to practice a lot. Repeat, repeat: that way we tell our brain that we need to remember it.

He recommends the book "Thinking, Fast and Slow".

AI is like the fast thinking. Fast is also a bit sloppy and often a bit wrong.

If we really want to understand something, it takes time and work.

Why volunteering and contributing to communities is important - Alex Gómez

Get involved! Volunteer! Do some work! Volunteers are necessary.

Volunteering is a lot of work, but it is worth it.

Djangofmt, a Django template formatter written in rust - Thibaut Decombe

Djangofmt is a fast, HTML-aware Django template formatter, written in Rust.

https://github.com/UnknownPlatypus/djangofmt

You can run it as a pre-commit hook.

https://reinout.vanrees.org/images/2026/kat9.jpeg

Unrelated photo explanation: a cat I encountered in Athens in the morning near the hotel.

17 Apr 2026 4:00am GMT

Djangocon EU: improving runserver with django-prodserver - Andrew Miller

(One of my summaries of the 2026 Djangocon EU in Athens).

Original title: improving one of Django's most used APIs - and it's not the one you're thinking of. (I'm using a more descriptive title for my blog entry.)

APIs are everywhere. Django rest framework, but also the models layer. And: manage.py runserver, he considers that an API. Everybody runs it. So: can we improve it?

"Runserver" doesn't sound the same as "devserver" or "rundevserver". It doesn't advertise that it is only intended for development. A name change could help there. But... Django really likes to be stable. It is probably too entrenched to change.

Production is missing a cohesive API. You normally run something like gunicorn myapp.wsgi -b 0.0.0.0:8000 --workers 4...

He tried to get improvements in. Since 5.2, there's a warning when you start runserver: "it is only intended for development". But you might miss it when you get lots of log messages. Other people complained about the extra two lines.

He started django-prodserver. pip install django-prodserver[gunicorn]. You can then run manage.py prodserver (and manage.py devserver, he snuck that in).

You have to add a bit of configuration to your settings file:

PRODUCTION_PROCESSES = {
    "web": {
        "BACKEND": "django_prodserver.backends.gunicorn.GunicornServer",
        "ARGS": {"bind": "0.0.0.0:8000"},  # add number of workers and so on
    },
}

Run it with manage.py prodserver web. You can add more processes, for instance for a background worker process.

It is the first version. He wants feedback, especially on the naming. manage.py prodserver web or just manage.py web? And manage.py worker? manage.py serve --prod?

Django isn't just a framework, it is a set of APIs. We can prototype new APIs in packages.

https://reinout.vanrees.org/images/2026/kat4.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.

17 Apr 2026 4:00am GMT

Djangocon EU: body of knowledge - Daniele Procida

(One of my summaries of the 2026 Djangocon EU in Athens).

Athens! The thinking industry started here. Athens is often the origin if you follow ideas to the source.

Here also Socrates was found guilty (by 280 votes to 221) of "corrupting the youth" on trumped-up charges. Though... he had made it his job to be a complete nuisance: exposing everyone's hypocrisy and asking difficult questions. After the guilty vote he got to give a speech in reaction. After that, the vote on the actual punishment was 360-141 in favour of the death penalty. The speech must have been particularly irritating.

On to a different subject. He watched the recent launch of the NASA rocket that went to the moon. A marvel of technology. Yet it was measured in body parts: 322 feet tall. And the distance to the moon was given in miles. Why not the scientific meter and kilometer?

Plato already mentioned it. "Now take the acquisition of knowledge; is the body a hindrance or not, if one takes it into partnership to share an investigation". And "when the soul tries to investigate anything with the help of the body, it is obviously led astray".

0.098 km = 98 m = 98000 mm, a child can understand it. Pure rationality. But ask a Metric Martyr in the UK how many feet are in a mile and most of them won't know.

The world seems to be divided in two camps:

  • Thinking, rationality, abstraction, unboundedness.
  • Bodies, materiality, tangibility, being rooted.

Wouldn't Plato have loved a computer? Pure rationality, following its programmed logic without fail?

What about those body-part-units? They're not that weird actually. They're rational Roman measurements:

  • A mile is 1000 Roman paces.
  • 1 passus = 5 pedes (feet).
  • 1/12 pes (foot) = 1 uncia (hence: inch).

(Note: according to the Greeks, Romans are only good for stealing Greek ideas, building roads and killing people.)

On to another aspect. Why is Django's documentation so good? Well, it has been prioritized from the start. It is complete, accurate, consistent, rational and well-structured: all Platonic values.

But Daniele also thinks the documentation is so good because it fits the human body.

The size has to be right. The limitations of our intelligence are the limits of our embodied intelligence. We can only grasp so much, mentally. A list can be too long. A page can be too long. If information is cut in too-small parts, you also can get into problems as you have to context-switch between pages too much. We tire mentally also because we tire physically.

The same applies to our body. Our hands and fingers can grasp objects. But it has to be of a certain size. Too big and we can't grasp it. Too small and our fingers can't pick it up.

We experience documentation in time and space. We move with it. How long have you been reading the Django documentation? "Where are you in the text?" We orient ourselves in text as if in a space or a building. We rely on the humanised rationality of structure. Sometimes you're in a building and it is clear where you have to go and in other buildings you feel lost.

Django's documentation is so good because of the quality of experience that it gives you. It is almost an embodied being that you can experience in space and time. Does it fit you? Do you notice it? The embodied nature of the work and intelligence that the Django community poured into the documentation?

Early Macintosh manuals had to explain new concepts and really tried to explain them in a human way. Scrolling being explained with help of an old book scroll, for instance. A floppy disk for storage as a floor plan of a building with a corridor and rooms.

The Aldine Press (started in 1494 in Venice by Aldus Manutius) had a vision to print the old classics in a more accessible way. Books in the middle ages used to be big. And sometimes chained to the desk. Not really accessible. By printing them in smaller, lighter, more accessible formats, Aldus wanted to make our "body of knowledge" more fitting to the human body.

You can see the bodily aspects of knowledge in our language:

  • Seizing/taking: grasp, comprehend, apprehend, perceive.
  • Measuring: ponder, weigh up, fathom.
  • Body movement: jumping to conclusions, intuitive leap, stumble/trip
  • Spatiality: understand, position

Mental space. When he asked Russell Keith-Magee a question at a Django sprint, Russell would close his eyes and turn his gaze inwards for a while. He would look at the Django codebase in his head and navigate it. Just like you yourself would navigate a city?

Being a programmer isn't so different from being a human with a body in time and space. Look at questions you might have as a beginning programmer:

  • Which file or directory or window to be in.
  • Where to expect the output.
  • When to expect it.
  • Where to enter a command.
  • When to do something.
  • In what order to do things.

And then look at an experienced programmer. They seem to know where they are. They know their way around. They can move smoothly.

Closing comment: there are some uncanny features in software nowadays. As humans, we are used to having limits. But nowadays we have infinite scrolling, doomscrolling. And edgeless, endless, virtual cloud resources. And LLM indeterminism. Those are not inherently bad, but they are different from what we're used to. Is this still computing fit for the embodied mind?

https://reinout.vanrees.org/images/2026/kat1.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.

17 Apr 2026 4:00am GMT

Djangocon EU: auto-prefetching with model field fetch modes in Django 6.1 - Jacob Walls

(One of my summaries of the 2026 Djangocon EU in Athens).

There's an example to experiment with here: https://dryorm.xterm.info/fetch-modes-simple

Timeline: it will be included in Django 6.1 in August.

The reason is the N+1 queries problem:

books = Book.objects.all()
for book in books:
    print(book.author.name)
    # This does a fresh query for author every time.

You can solve it with select_related(relation_names) or prefetch_related(relation_names). The first does an inner join. The second does two queries.

But: you might miss a relation. You might specify too many relations, getting data you don't need. Or you might not know about the relation as the code is in a totally different part of the code.

Fetch mode is intended to solve it. You can append .fetch_mode(models.FETCH_xyz) to your query:

  • models.FETCH_ONE: the current behaviour, which will be the default.
  • models.FETCH_PEERS: Fetch a deferred field for all instances that came from the same queryset. More or less prefetch_related in an automatic, lazy manner.
  • models.FETCH_RAISE: useful during development, it will raise FieldFetchBlocked, thus telling you that you have a potential performance problem and might need FETCH_PEERS.

This is what happens:

books = Book.objects.all().fetch_mode(models.FETCH_PEERS)
for book in books:
    # We're iterating over the query, so the query executes and grabs all books.
    print(book.author.name)
    # We accessed a relation, so at this point the prefetch_related-like
    # mechanism is fired off and all authors linked to by the books are
    # grabbed in one single query.

You can write your own fetch modes, for instance if you only want a warning instead of raising an error.

https://reinout.vanrees.org/images/2026/kat3.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.

17 Apr 2026 4:00am GMT

Djangocon EU: How Django is helping to build the biggest X-ray observatory to date - Loes Crama

(One of my summaries of the 2026 Djangocon EU in Athens).

She works at Cosine, they develop measurement systems and space instrumentation. They work for the space industry (ESA, NASA, etc).

They're now working on "high-energy optics", the NewAthena x-ray observatory, the biggest one to date. NewAthena: NEW Advanced Telescope for High-ENergy Astrophysics. Planned launch is in 2037 on Ariane 6.4.

Talking about rocket science is cool, but where's the software? Within the company, software is an internal service, supporting scientists. Handling data in many ways: visualization, analysis, processing, management. Django plays a big role in all of this.

When you build something with Django in a scientific context, you really need to understand the data. Workflows must be flexible. R&D and production often don't need to be strictly separated. Multiple datastores for various purposes (like an extra MongoDB, for instance) is often handy.

Their application consists of:

  • SXRO (silicon x-ray optics) database.
  • A MySQL database.
  • Django.

The goal is to track all the components that go into the observatory. Status and quality. Configuration and geometry. Component relationships. Inspections. History of all the components.

The default Django admin is their primary method of using the application. Often, it is said that the admin is not not not intended for end users. But they're using it anyway. It is an internal tool for technical people. Most of them have a PhD: they can handle such an interface. They've been using it for years.

There are some third party packages:

  • django-simple-history: easy history.
  • djangoql: advanced queries for the search bar.
  • django-admin-rangefilter, django-admin-list-filter-dropdown, django-admin-numeric-filter: little tools to tweak the filters on the right hand side.

There are some separate forms, mostly for actions performed in the lab or cleanroom. For instance a form where you can use an iPad to mark defects in one of the components by simply drawing on a picture of the component.

There's also a REST API. Other software and data tools can use it to integrate with Django:

  • Observability tools (prometheus/grafana).
  • JupyterHub.
  • Stacking robots.

They use: djangorestframework, drf-spectacular (API docs), django-filter (filtering via GET parameters).

Django is their software backbone. A general-purpose framework that's well suited for a scientific context.

https://reinout.vanrees.org/images/2026/kat6.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.

17 Apr 2026 4:00am GMT

Djangocon EU: Django templates on the frontend? - Christophe Henry

(One of my summaries of the 2026 Djangocon EU in Athens).

It all started with formsets: you generate a new form based on other forms. You can use it to create pretty fancy forms. But your designer can get quite creative. And you might have variable forms that have to react to user input.

A common solution is to use htmx, but that means server requests all the time. And some users have really bad connections. Regular requests aren't handy in that scenario.

He looked at django-rusty-templates: Django's template engine implemented in Rust. It had a template parser that he could re-use. With OXC (javascript oxidation compiler) he converted that to javascript.

That way, he could offload much of the django form creation handling to the frontend, including reacting to user input and showing alerts.

The work-in-progress project is called django-template-transpiler: https://github.com/christophehenry/django-template-transpiler . Don't use it for production.

https://reinout.vanrees.org/images/2026/kat7.jpeg

Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.

17 Apr 2026 4:00am GMT

16 Apr 2026

feedDjango community aggregator: Community blog posts

Djangocon EU: when SaaS is not allowed: shipping Django as a desktop app - Jochen Wersdörfer

(One of my summaries of the 2026 Djangocon EU in Athens).

He works on "steel-IQ", an open source modelling platform for steel decarbonisation. They knew that they couldn't run it as a web app because of strict security requirements at the end users. So they thought about distributing it as a Python library or as jupyter notebooks. But the users would probably mess it up, so an installable UI was needed.

Could they do it with Django? The first working version was easier than expected. They used Electron to get an installable app that shows a web interface: you start Django inside the process, wait until it responds, and then show the web interface as usual.

So: electron + django + sqlite.
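The "start Django, wait until it responds" step can be sketched in plain Python (the starter project's Electron side would do the equivalent in main.js; `wait_for_server` is an invented helper name, not part of the project):

```python
import socket
import time

def wait_for_server(host: str, port: int, timeout: float = 10.0) -> bool:
    """Poll until something accepts TCP connections on (host, port)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return True  # Django is up: safe to open the BrowserWindow
        except OSError:
            time.sleep(0.1)  # not up yet: retry shortly
    return False
```

Polling a short connect in a loop is cruder than watching Django's stdout for the "ready" line, but it works regardless of how the server logs.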

The actual Steel-IQ app is full of steel terminology. Not everyone has a blast furnace in the back yard, so he created a sample project that's simpler: https://github.com/ephes/desktop-django-starter

The components:

  • Electron: main.js nodejs program.
  • Django server.
  • BrowserWindow: a chromium renderer, this is what the user sees.
  • Django workers: for the background simulation work.
  • Shared data layer: sqlite + filesystem. The sqlite database is also used by the DatabaseBackend of django-tasks. This means you can run background tasks without needing rabbitmq processes or so. Handy!

Some security measures:

  • Django listens on 127.0.0.1 on a random port. So it doesn't connect to any external network interfaces.
  • Django still validates requests (csrf, ALLOWED_HOSTS).
  • The Electron page stays unprivileged: no Node integration, no filesystem access.
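The random-port trick works by binding to port 0 and letting the OS pick; a small stdlib sketch (the helper name is invented, not from the talk):

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port on the loopback interface."""
    with socket.socket() as sock:
        sock.bind(("127.0.0.1", 0))  # port 0 = "pick any free port"
        return sock.getsockname()[1]
```

There is a tiny race (another process could grab the port between closing the probe socket and Django binding it), which is usually acceptable for a local desktop app.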

Packaging was a bit tricky. There's quite a lot to bundle: Electron + Chromium, a standalone Python, Python dependencies via uv, Django apps + assets. They build the package in CI. Writable data lives outside the app bundle. A DMG for macOS, an installer for Windows, a tgz for Linux.

He demoed it. Worked fine. Even with a live "check for upgrades" that installed a new version.

When you think about making a desktop app from your Django website, many things, like templates, models and static files, stay the same. Authentication changes, of course. You need desktop-specific settings. Ensure writable paths for logs, media files, etc.

Electron is not the only toolkit you can use. Tauri is an alternative that looks nicer (he discovered it too late). For a simpler Python-first desktop wrapper, look at pywebview/positron. If mobile is required, a mobile web app or even a native toolkit is best. If you need native widgets, look at Qt/PySide, Kivy, BeeWare/Toga.

Note: there was a similar talk in 2015 in Cardiff: https://reinout.vanrees.org/weblog/2015/06/02/09-django-desktop.html

https://reinout.vanrees.org/images/2026/moezel7.jpeg

Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. Sunset over the Mosel valley, seen from the "Mont Royal" fortifications built by the French king Louis XIV.

16 Apr 2026 4:00am GMT

Djangocon EU: role-based access control in Django - how we forked Guardian - Gergő Simonyi

(One of my summaries of the 2026 Djangocon EU in Athens).

He works for authentik (an "open-core, self-hosted identity provider").

Note: in the talk he'll mix "access control" and "authorization".

In Django, every model gets some basic permissions for CRUD, named after the app and the model. You can ask a user object if it has a certain permission. That method, behind the scenes, asks all authentication backends via backend.has_perm(self, perm, obj). The default backend checks whether the user has the permission or belongs to a group that has it. So that's quite a query. The backends are queried in turn: if a backend doesn't know whether a user has a permission, it can just return None and the next backend will be checked. If a backend knows the user has no access, it raises PermissionDenied.

The backends have methods like .has_perm(self, perm, obj), but "obj" isn't normally passed: it is None by default. You can implement it if you want object permissions. Django-guardian is an implementation of object permissions for Django, provided via an extra authentication backend: ObjectPermissionBackend.

user.has_perm("change_book") asks if the user has the permission to change all books. user.has_perm("change_book", obj=my_book) asks for a specific book.

Generic permission mechanisms deal with user, group, permission. Object permissions add userobjectpermission and groupobjectpermission. Well, that's still reasonably OK.

But then Enterprise comes along with even more wishes:

  • Just-in-time privileged access.
  • Delegating permissions. Someone has a permission through my permissions. If I lose them, they lose them.
  • Custom permissions.
  • Group hierarchy.
  • Permission inheritance through group hierarchy.

Especially the group hierarchy was a problem. One time, an enterprise they worked with had a group that, via a couple of intermediate groups, was a member of itself... And: django-guardian used Django's group concept, which they couldn't adjust.

So they started modifying django-guardian to use a custom Group model. They also made some other changes by emphasising a new Role concept and using Role to tie users/groups to permissions:

  • User can be in a group.
  • Groups can be nested.
  • A user (directly) or group can have a role.
  • A RoleObjectPermission references a role and a permission.
  • Lastly they added RoleModelPermission, replacing Django's model permission mechanism.

The core SQL query is only 52 lines, so that's not bad. Even with 10k roles and 10k users, the query was below 1ms.

The fork is here: https://github.com/goauthentik/authentik/tree/main/packages/ak-guardian

His slides: https://github.com/gergosimonyi/djangocon-eu-2026/

https://reinout.vanrees.org/images/2026/moezel4.jpeg

Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. The "Oberburg" castle ruin in Manderscheid.

16 Apr 2026 4:00am GMT

Djangocon EU: lightning talks (day 2)

(One of my summaries of the 2026 Djangocon EU in Athens).

Developing Django's community - Andy Miller

Andy likes these conferences. But getting here and attending costs at least €1000. So conferences are limited to those that can afford it.

There's a new online community working group that wants to improve the online ways of gathering, as those are available to everyone.

Perhaps new virtual social events. Better feeds of what's happening in the community. So: improve the community for everyone.

More info: https://github.com/django/online-community-working-group

To JWT or not to JWT - Benedikt

JSON web tokens are not a one-size-fits-all solution.

A JWT is a base64-encoded string with three parts: header, payload, signature. It is marketed as stateless, but revocation always adds state.

Why would you want to use it? Well, third-party identity providers often give you one. And: you can save database queries by embedding info in the token. And you can use it for offline mode in mobile or desktop apps. But there are drawbacks.

Some reasons for using something else: JWTs are immutable, so their data remains valid until expiration even when it changes server-side. Stateless revocation is impossible. Logout-from-all-devices requires tracking state, defeating the purpose.

  • JWT if your provider uses them.
  • Regular Session/cookie auth for web apps is often better.
  • Opaque tokens for mobile/desktop.
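To make the "three base64 parts" concrete, here is a stdlib-only sketch that decodes (but deliberately does not verify) a token:

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments use URL-safe base64 with the '=' padding stripped;
    # restore the padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_jwt(token: str) -> tuple[dict, dict]:
    """Decode (but do NOT verify!) the header and payload of a JWT."""
    header_b64, payload_b64, _signature_b64 = token.split(".")
    return (
        json.loads(b64url_decode(header_b64)),
        json.loads(b64url_decode(payload_b64)),
    )
```

Anyone can decode a token like this; only checking the signature against the key makes the contents trustworthy, which is exactly why sensitive data doesn't belong in the payload.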

Django on the Med / Django Italia - Paolo Melchiorre

"Django on the Med" is a sprint. Not a sprint after a conference, but just a sprint. After a conference you often want to get home or you're tired, so what we get done at a conference sprint is often a bit limited.

What they got done in September at "Django on the Med" is amazing. This year it is in Pescara, Italy. 23-25 September.

Somewhat related: 27 May there'll be a free "Django off the Med" online workshop at the PyCon Italia conference.

Two ways I used GeneratedField during a rewrite - Anthony Ricaud

GeneratedField: google for Paolo, he's done lots of talks on it (for instance this one).

He used GeneratedField in a migration scenario:

archived = models.GeneratedField(
    output_field=models.BooleanField(),
    db_persist=True,
    expression=models.Q(soft_delete=True) | models.Q(....),
)

The second scenario involved generating a unique "city id" based on two other fields.
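The talk didn't show code for this second scenario, but a persisted GeneratedField (db_persist=True) compiles down to a STORED generated column in the database. A plain-sqlite3 sketch with invented column names:

```python
import sqlite3

# SQLite (3.31+) supports generated columns; STORED corresponds to
# Django's db_persist=True, VIRTUAL to db_persist=False.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE city (
        country_code TEXT NOT NULL,
        postal_code TEXT NOT NULL,
        city_id TEXT GENERATED ALWAYS
            AS (country_code || '-' || postal_code) STORED
    )
""")
conn.execute("INSERT INTO city (country_code, postal_code) VALUES ('NL', '1234')")
row = conn.execute("SELECT city_id FROM city").fetchone()
# row[0] is now 'NL-1234': computed by the database, not by Python.
```

The point of doing this in the database is that the "city id" can never drift out of sync with the two source fields, no matter which code path writes the row.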

django-mediastorage - Alissa Gerhard

A media file is data that is stored as a file and accessible to users. FileFields store files attached to models.

Local filesystem is the default and sufficient for most projects. You can use X-Accel-Redirect to get the proxy (like nginx) to actually serve the file, instead of Django. But every proxy has its own solution.

django-mediastorage can handle it for you for several different proxies.

There's a new FileField subclass, ProtectedFileField, to handle authentication requirements.

There's also integration for Django REST framework.

There's still a lot to do, but they're using it in production themselves.

Django VPS deployments made simple - Jan Raasch

Let's talk about Django's deployment story.

He demoed deploying a simple Django app to a small virtual server. With python manage.py deploy --several-options.

He used django-simple-deploy and a custom plugin for django-simple-deploy that used 'kamal' to do the actual deploying.

https://reinout.vanrees.org/images/2026/moezel9.jpeg

Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. Some restored remnants of the Virneburg castle.

16 Apr 2026 4:00am GMT

Djangocon EU: is it time for a Django admin rewrite? If so, how? - Emma Delescolle

(One of my summaries of the 2026 Djangocon EU in Athens).

One of Django's "batteries included" batteries is the admin interface. It is great. With a few lines you get full CRUD, plus filtering, searching and pagination. But it isn't perfect:

  • Want drag/drop ordering?
  • Nested inlines?
  • An extra button next to "edit" and "delete"?
  • Form layout with columns?
  • change_list_view and change_form_view are hard to extend.
  • Better translation support?

(There are plug-ins for most of the individual issues.)

The admin actually predates most of the rest of Django. It has stagnated even though Django got many new features. The admin is a separate framework within Django: lots of patterns from regular Django views don't apply, so you're missing muscle memory when working on the admin.

The community knows there's a problem. 30% of new proposed fixes are for the admin. There's discussion about a new UI. And there's issue #70 that proposes a change. The community has also been trying:

  • django-admin2. Abandoned at the moment.
  • Grappelli/Jet. Mostly better skins and extensions on top of the regular admin.
  • Wagtail. It is actually a CMS, but people are using its admin interface for Django itself.
  • drf-schema-adapter. Her own attempt via Django REST framework and a frontend app.

What do we want?

  • Extendable.
  • Composable.
  • Django native.
  • Pluggable.

So: what if your admin was just regular Django code? Build it on Django's generic views: one way to do CRUD, not two. Everything is composable with views, permissions and actions. Plugins will be first-class, not an afterthought. And: you get familiar patterns; your regular Django muscle memory applies.

  • Django's class based views as the basis. ListView, CreateView, UpdateView, DeleteView.
  • UI layer for breadcrumbs, column headers, dashboard, sidebar.
  • Permissions and actions layer.
  • Plugin system. She used djp/pluggy.
  • A bit of glue: view factories (idea taken from django-admin2). The factory composes the admin page for you.

The project is here: django admin deux.

The current admin.py syntax still works, apart from a change in the import statements. But it has extras like a layout attribute on admin objects that you can use to adjust the page.

The admin generates its own documentation. Especially when you use many plugins, the generated documentation is great for figuring out what is happening and which part of the mechanism is responsible for what.

You can try it without risk, admin-deux and the regular admin can run side-by-side. Within nine months the core of the admin and the most important plugins can be finished and made completely robust. And within two years it could be in widespread use and integrated in the community. (There's a need for funding).

https://reinout.vanrees.org/images/2026/moezel5.jpeg

Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. Windsborn crater lake near Bettenfeld. One of the very few volcano craters filled with water north of the Alps. Beautiful in its autumn colors.

16 Apr 2026 4:00am GMT

Djangocon EU: advanced ORM kung-fu - Mathias Wedeken

(One of my summaries of the 2026 Djangocon EU in Athens).

Full title: advanced ORM kung-fu for on-demand filtering, sorting, and summing 40 million financial transactions.

He's working on a property management platform (so: buildings, rent payments, extra costs, maintenance, grouping per housing complex, etc). 300,000 users. 472,000 financial accounts. 40 million transactions.

Users expect real-time filtering, sorting and summing. But originally his website often had "please wait for 10 seconds" spinners. This should be improved.

A colleague told him that the database ought to be able to handle everything just fine. So the goal was to fully leverage postgresql through Django's ORM without writing raw SQL.

Some things he discovered:

  • Annotations are composable. Almost functional programming.
  • Func(template=...) to drop custom SQL expressions into the ORM. You can use them to group items in a custom way, for instance.
  • In JSONB, you can use __0 to grab the first element of a list. Use that before using raw SQL.
  • Materialized views (note: use "managed=False") means you have pre-computed data available, usable just like a model. You do have to handle those views (create, refresh) yourself, though.
  • Coalesce, which he mentioned in passing, returns the first non-null value of its arguments; handy for defaulting a null aggregate to zero. The slides had a bit too much content to see exactly how he used it.
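For the record, Django's Coalesce wraps SQL's COALESCE, which returns the first non-NULL argument; the classic use is defaulting a NULL aggregate to zero. A plain-sqlite3 sketch with invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (account INTEGER, amount INTEGER)")
conn.execute("INSERT INTO tx VALUES (1, 100)")

# SUM() over zero matching rows yields NULL, not 0...
empty = conn.execute("SELECT SUM(amount) FROM tx WHERE account = 2").fetchone()[0]
# ...so COALESCE substitutes a sane default.
safe = conn.execute(
    "SELECT COALESCE(SUM(amount), 0) FROM tx WHERE account = 2"
).fetchone()[0]
# empty is None, safe is 0
```

In the ORM the equivalent would be an annotation like Coalesce(Sum("amount"), Value(0)).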

https://reinout.vanrees.org/images/2026/moezel8.jpeg

Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. View on Virneburg from the town's similarly-named castle.

16 Apr 2026 4:00am GMT

Djangocon EU: Django task workers in subinterpreters - Melhin Ahammad

(One of my summaries of the 2026 Djangocon EU in Athens).

Full title: Django task workers in subinterpreters: single-server Django applications without process overhead.

He was inspired to tinker with subinterpreters by Anthony Shaw's PyCon talk.

You normally use a wsgi runner like gunicorn to run your Django app. There's also daphne, uvicorn and hypercorn: those also support asgi. If you have tasks, you might use Celery, RQ or Django-Q (or django-tasks). Instead of a single gunicorn, you then have a broker, a worker and a web server: all deployed separately.

He's a fan of the fediverse and wanted to run his own server, but he just wanted one program, not three.

You need python 3.14 to actually try some of the stuff he talks about. And you need to have some longer-running tasks in Django.

  • Python 3.14 allows you to run multiple Python interpreters in one process. You get separate imports, builtins and namespaces per interpreter: isolation. There's even IPC (inter-process communication) via memory, where objects are copied. Subinterpreters handle the GIL by each having their own.
  • Django: preferably 6.0 as you can then use the new Django task framework.

Combining it, you'd use a subinterpreter for the web runner and one or more for the tasks. He hacked together a subclass of one of django-tasks' classes to run work off the queue in a subinterpreter.
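As a minimal illustration of the interpreter API (a sketch assuming Python 3.14's concurrent.interpreters module from PEP 734; the actual worker subclass is more involved):

```python
# Needs Python 3.14+ (PEP 734); on older versions this sketch does nothing.
try:
    from concurrent import interpreters
except ImportError:
    interpreters = None

if interpreters is not None:
    interp = interpreters.create()  # a fresh interpreter with its own GIL
    interp.exec("import math")      # imports are per-interpreter
    # Arguments and results are copied between interpreters, not shared.
    result = interp.call(len, "hello")  # 5
```

A task worker would do this in a loop: pull a task off the shared queue, call it in the subinterpreter, store the copied-back result.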

The demo was the as-you-can-clearly-see version where you have 3 seconds to read 40 lines of logs, so I couldn't really tell what was happening. It seemed to work :-)

Important note: there are some Python libraries you might use that cannot deal with subinterpreters yet, for instance numpy and pydantic. So check that before you start experimenting. If you use pure Python + psycopg? Give it a try now. Watch the ecosystem. Subinterpreters unlock a new form of parallelism for deferred workflows.

Here are some more links: https://github.com/melhin/parimitham/blob/main/slides/links.md

https://reinout.vanrees.org/images/2026/moezel6.jpeg

Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. The Mosel river near Bernkastel-Kues, seen from castle Landshut.

16 Apr 2026 4:00am GMT