20 Apr 2026
Django community aggregator: Community blog posts
Django: fixing a memory “leak” from Python 3.14’s incremental garbage collection
Back in February, I encountered an out-of-memory error while migrating a client project to Python 3.14. The issue occurred when running Django's database migration command (migrate) on a limited-resource server, and seemed to be caused by the new incremental garbage collection algorithm in Python 3.14.
At the time, I wrote a workaround and started on this blog post, but other tasks took priority and I never got around to finishing it. But four days ago, Hugo van Kemenade, the Python 3.14 release manager, announced that the new garbage collection algorithm will be reverted in Python 3.14.5, and the next Python 3.15 alpha release, due to reports of increased memory usage.
Here's the story of my workaround, as extra evidence that reverting incremental garbage collection is a good call.
Python 3.14's incremental garbage collection
Python (well, CPython) has a garbage collector that runs regularly to clean up unreferenced objects. Most objects are cleaned up immediately when their reference count drops to zero, but some objects can be part of reference cycles, where some set of objects reference each other and thus never reach a reference count of zero. The garbage collector sweeps through all objects to find and clean up these cycles.
Python 3.14 changed garbage collection to operate incrementally. Previously, a garbage collection run would sweep through all objects in one go, but this could lead to "stop the world" stalls where your program's real work could pause for seconds while the garbage collector did its job. The incremental garbage collection algorithm instead does a fraction of the work at a time, spreading out the cost of garbage collection.
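To see why a cycle collector is needed at all, here's a minimal sketch (the class and names are my own illustration, not from the release notes) of a reference cycle that reference counting alone can never free:

```python
import gc


class Node:
    """A tiny object that can participate in a reference cycle."""

    def __init__(self):
        self.other = None


# Build a cycle: a references b, and b references a.
a, b = Node(), Node()
a.other, b.other = b, a

# Drop our names. Each object's reference count stays above zero
# because of the cycle, so refcounting alone cannot reclaim them.
del a, b

# A collection sweep finds and reclaims the cycle.
unreachable = gc.collect()
print(unreachable >= 2)  # True: at least the two Nodes were unreachable
```

It is exactly this sweep that Python 3.14 split into increments, trading shorter pauses for (as it turned out) less predictable reclamation.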
Here's the full release note (historical source):
Incremental garbage collection
The cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps.
There are now only two generations: young and old. When gc.collect() is not called directly, the GC is invoked a little less frequently. When invoked, it collects the young generation and an increment of the old generation, instead of collecting one or more generations.

The behavior of gc.collect() changes slightly:

- gc.collect(1): performs an increment of garbage collection, rather than collecting generation 1.
- Other calls to gc.collect() are unchanged.

(Contributed by Mark Shannon in 108362.)
The problem
I'd been helping one of my clients upgrade to Python 3.14 for a few months, chipping away at compatibility work like upgrading dependencies and fixing deprecations. Tests were finally all passing and everything was working on the local development server. The next step was to launch a temporary deployment using Python 3.14 via Heroku's review apps feature.
At the basic tier, Heroku review apps use fairly resource-constrained servers, including just 512MB of RAM, with the ability to temporarily burst up to nearly 1GB (200%). Paying for larger servers is an option, but unfortunately the next step up is pretty expensive.
When I launched a review app for my Python 3.14 branch, I found its release phase failed while running migrate. Inspecting the logs, I found the migrations started fine:
$ heroku logs --app example-python-314-wsgk3w --num 1000 | less
...
app[release.6634]: System check identified no issues (26 silenced).
app[release.6634]: Operations to perform:
app[release.6634]: Apply all migrations: admin, auth, contenttypes, ...
app[release.6634]: Running migrations:
…but partway through, these messages started appearing:
heroku[release.6634]: Process running mem=527M(101.5%)
heroku[release.6634]: Error R14 (Memory quota exceeded)
…ramping up until the 200% mark:
heroku[release.9599]: Process running mem=977M(190.3%)
heroku[release.9599]: Error R14 (Memory quota exceeded)
…and finally the termination of the release process:
heroku[release.9599]: Process running mem=1033M(201.7%)
heroku[release.9599]: Error R15 (Memory quota vastly exceeded)
heroku[release.9599]: Stopping process with SIGKILL
These messages came from Heroku's process management layer, which terminated the memory-hungry release process with SIGKILL after the hard threshold of 1GB memory usage was breached. Repeat attempts hit the same issue.
I was confused: migrations should not consume much memory. While they create a lot of temporary objects (Django model classes and fields) in order to calculate the SQL to send to the database, such objects are all short-lived and should be garbage-collected fairly swiftly. Additionally, migrations worked fine on the local and CI environments, and they'd never had memory issues on previous Python versions.
It looked like there was a memory leak, and it was time to dig in.
Initial investigation
I first profiled memory usage of migrate locally with Memray, the memory profiler that I covered in my previous post:
$ memray run manage.py migrate
The profiles revealed that memory usage had increased slightly on Python 3.14 compared to 3.13, but showed no memory leak (a pattern of continual growth). Still, I made some optimizations to defer some imports, saving about 30% of startup memory usage, and tried again, to no avail.
I then had the idea to profile on a Heroku dyno directly. After hacking the release process to not run migrations, I built a review app and SSH'd into its web server:
$ heroku ps:exec -a example-python-314-rspwtc --dyno web.1 bash
Establishing credentials... done
Connecting to web.1 on ⬢ example-python-314-rspwtc...
~ $
Initially, I tried using Memray's live mode to profile the migrations as they ran:
$ memray run --live manage.py migrate
While this tool looks great for some situations, it didn't really work here, especially since it seized up after Heroku terminated the server.
I then tried running the default memray run command:
$ memray run manage.py migrate
Writing profile results into memray-manage.py.724.bin
…then, on my local computer, I repeatedly ran this command to copy down the results file:
$ trash memray-manage.py.724.bin && heroku ps:copy -a example-python-314-rspwtc --dyno web.1 memray-manage.py.724.bin
I was a bit worried here that the Memray binary file might be corrupted due to copying it while memray run was generating it. But with a final truncated copy left over after the server crashed, I asked Memray to generate a flamegraph for it:
$ memray flamegraph memray-manage.py.724.bin
…and it worked! Kudos to the Memray team for making their output format usable even when incomplete.
This more detailed flamegraph revealed more than 50% of the memory usage was allocated in ModelState.render(), which creates temporary model classes:
class ModelState:
    ...

    def render(self, apps):
        """Create a Model object from our current state into the given apps."""
        ...
        return type(self.name, bases, body)
This information hinted that these temporary model classes were hanging around beyond their expected short lifetime, leading to the apparent memory leak. For example, every model class could have ended up in a list intended for debugging, accidentally extending the lifetime of these temporary classes.
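This kind of accidental lifetime extension is easy to reproduce. Here's a small sketch (my own illustration, not the project's code) showing how one stray strong reference keeps a throwaway class alive, and why a full sweep is needed even once that reference is gone:

```python
import gc
import weakref


def make_temp_class():
    # Stands in for code like ModelState.render() creating a throwaway class.
    return type("TempModel", (), {})


debug_log = []               # an innocent-looking debugging list
tracker = weakref.WeakSet()  # observes liveness without keeping objects alive

cls = make_temp_class()
debug_log.append(cls)        # accidental strong reference
tracker.add(cls)
del cls

gc.collect()
print(len(tracker))  # 1: the list keeps the class alive

debug_log.clear()
gc.collect()         # classes sit in reference cycles (e.g. via their MRO),
print(len(tracker))  # 0: so a collection sweep is what reclaims them
```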
I decided to dig a bit deeper using machete-mode debugging, with the below snippet that captures the temporary model classes and logs details about them. I wrote this within the Django settings file, where it was guaranteed to run at Django startup time, before the migrate management command.
import atexit
import gc
import tracemalloc
import weakref
from itertools import islice

from django.db.migrations.state import ModelState

tracemalloc.start(2)

orig_render = ModelState.render
rendered_classes = weakref.WeakSet()


def wrapped_render(*args, **kwargs):
    cls = orig_render(*args, **kwargs)
    rendered_classes.add(cls)
    return cls


ModelState.render = wrapped_render


@atexit.register
def show_referrers():
    print(f"🎯 {len(rendered_classes)} classes referred to.\n")
    for cls in islice(rendered_classes, 2):
        print(f"🎁🎁🎁 {cls!r} 🎁🎁🎁")
        for i, referrer in enumerate(gc.get_referrers(cls), start=1):
            print(f"🍌 Referrer #{i}: {referrer!r}")
            if tb := tracemalloc.get_object_traceback(referrer):
                print("\n".join(tb.format(most_recent_first=True)))
            print()
        print()
    print()
Note:

- tracemalloc.start() starts Python's built-in memory allocation tracking.
- The ModelState.render() method was monkeypatched with a wrapper that stores every temporary model class in a WeakSet.
- The @atexit.register-decorated function runs at the end of the program, and logs two things.
- The first piece of logging is the number of temporary model classes still alive at the end of the program, which should be close to zero. (Some may stick around from the final migration state.)
- The second piece of logging iterates over the first two live temporary model classes and logs their name and their referring objects, discovered via gc.get_referrers(). For each referring object, it also logs the traceback of where that object was allocated, using tracemalloc.get_object_traceback() (which is why tracemalloc.start() was needed at the beginning).
- The emojis are a bit of fun to make the log messages easier to skim through. I have no idea why I picked 🎁 and 🍌!
The output from this hook was voluminous, even with the limit to the first two live classes. For example, here's the output for a temporary ContentType model class:
🎁🎁🎁 <class '__fake__.ContentType'> 🎁🎁🎁
🍌 Referrer #1: <generator object WeakSet.__iter__ at 0x1234ef300>
File "/.../example/core/apps.py", line 45
for cls in islice(rendered_classes, 2):
...
🍌 Referrer #11: {'name': 'model', ..., 'model': <class '__fake__.ContentType'>}
File "/.../.venv/lib/python3.14/site-packages/django/utils/functional.py", line 47
res = instance.__dict__[self.name] = self.func(instance)
File "/.../.venv/lib/python3.14/site-packages/django/db/models/fields/__init__.py", line 1210
self.validators.append(validators.MaxLengthValidator(self.max_length))
I checked the live referrers for a few classes, and they all seemed to be expected. However, it did reveal just how many cycles exist between ORM objects. For example, model classes refer to their field objects, which in turn refer back to their model classes, thanks to Django's Field.contribute_to_class() creating this reference:
def contribute_to_class(self, cls, name, private_only=False):
    ...
    self.model = cls
    ...
Anyway, from comparing the output between Python 3.13 and 3.14, I could see that no new references were being created on Python 3.14. It seemed likely that the incremental garbage collection algorithm was the culprit.
The workaround
Given the investigation, I wanted to work around the issue by forcing a full garbage collection sweep with gc.collect() after each migration file ran. I came up with the below code, saved as management/commands/migrate.py in one of the project's Django apps. It extends the default migrate command to run gc.collect() after each successful migration (where "apply" is forwards and "unapply" is backwards).
import gc

from django.core.management.commands.migrate import Command as BaseCommand


class Command(BaseCommand):
    """Extended 'migrate' command."""

    def migration_progress_callback(self, action, migration=None, fake=False):
        """
        Extend Django's migration progress reporting to force garbage
        collection after each migration. This is a workaround to keep memory
        usage low, especially because we have a low limit on Heroku. It seems
        the incremental garbage collector introduced in Python 3.14 cannot
        keep up with the migration process's tendency to create many cyclical
        objects, so our best fallback is to force collection of everything
        after each migration is applied or unapplied.

        https://adamj.eu/tech/2026/04/20/django-python-3.14-incremental-gc/
        """
        super().migration_progress_callback(action, migration=migration, fake=fake)
        if action in ("apply_success", "unapply_success"):
            gc.collect()
It felt a bit hacky, but it did the trick! The review app launched successfully, with a flat memory profile, as on previous Python versions.
We then continued to deploy to staging and production without any issues, and the team have been happily using Python 3.14 for over a month now.
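The pattern generalizes beyond Django: after each unit of work that churns out cyclic garbage, force a full sweep so memory stays flat between units. A minimal sketch of the idea (not the project's actual code):

```python
import gc


def do_step():
    # Stands in for applying one migration: creates cyclic garbage
    # (classes reference themselves via their MRO, for example).
    temp_classes = [type(f"Temp{i}", (), {}) for i in range(100)]
    return len(temp_classes)


for step in range(5):
    do_step()
    freed = gc.collect()  # full collection after each unit of work
    assert freed > 0      # the cyclic temporaries were reclaimed each time
print("ok")
```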
Fin
Well, that's where the tale ends right now. After the incremental garbage collection algorithm is reverted in Python 3.14.5, I guess I'll be able to remove this workaround.
While it would be nice to have incremental garbage collection work well, it's clear that the current implementation has some issues. I think the core team is making the right call reverting it, but hopefully there will be energy to improve the feature for the future.
May your garbage be collected efficiently and without fuss,
-Adam
20 Apr 2026 4:00am GMT
17 Apr 2026
Django News - 30% Off PyCharm Pro – 100% for Django - Apr 17th 2026
Introduction
Django News Newsletter is moving!
Just a quick heads up. We're planning to move our newsletter to a new platform next week.
If things look a little different when it shows up, it's still us.
Django Newsletter
News
PyCharm & Django annual fundraiser
JetBrains and the Django Software Foundation team up again to offer 30% off PyCharm while matching donations to fund Django's core development and community programs.
New Technical Governance - request for community feedback
Django proposes a simpler, more flexible technical governance model and is inviting community feedback ahead of a planned July 2026 rollout.
Could you host DjangoCon Europe 2027? Call for organizers
DjangoCon Europe 2026 is happening right now in Athens, Greece, but planning for 2027 has already begun. This post lays out resources, support contacts, and answers to common questions for prospective organizers.
Reverting the incremental GC in Python 3.14 and 3.15 - Core Development
Python is rolling back its new incremental garbage collector in 3.14 and 3.15 after real-world memory issues, reverting to the proven generational model while rethinking a future reintroduction.
PEP 772: Packaging Council governance process (Round 3) - Packaging / Coordination
PEP 772 has officially been approved, creating a new Python Packaging Council to guide the future of packaging standards, tools, and ecosystem governance.
Django Software Foundation
Django Has Adopted Contributor Covenant 3
The 3.0 edition of the new Code of Conduct is here! This milestone represents the completion of a careful, community-driven process that began earlier this year.
DSF Board monthly meeting, April 9, 2026
The Django Software Foundation approved a modernized Code of Conduct, new working group charters, and key community initiatives, signaling a fresh push toward clearer governance and sustained project growth.
Python Software Foundation
PyCon US 2026: Why we're asking you to think about your hotel reservation
For many years, PyCon US has relied on hotel booking commissions to help pay for conference space. If you are attending this year, please book an official hotel: you'll be close to the venue and you'll support the conference.
Python Software Foundation News: Reflecting on Five Years as the PSF's First CPython Developer in Residence
Łukasz Langa looks back on five years and highlights including the transition to GitHub issues from bugs.python.org, the replacement of the mostly manual CLA process with an automated system, the introduction of free threading to Python, and the replacement of the interactive shell in the interpreter. Also while addressing thousands of bugs, he's witnessed the full-time paid developer in residence roster at the Python Software Foundation grow from one person to five.
Updates to Django
Today, "Updates to Django" is presented by Johanan Oppong Amoateng from Djangonaut Space! 🚀
Last week we had 12 pull requests merged into Django by 10 different contributors - including a first-time contributor! Congratulations to Jonathan Wu for having their first commits merged into Django - welcome on board!
This week's Django highlights: 🦄
- Added a user_perm_str helper function that can be used when checking user permissions with has_perm(). (#37021)
- The task decorator was updated to accept **kwargs and forward them to task_class, allowing additional parameters to be passed to custom Task subclasses. (#36816)
Django Newsletter
Django Fellow Reports
Fellow Report - Natalia
A good chunk of this week focused on improving contributor workflows and reducing review overhead by introducing automated quality checks for PRs 🤖. This builds on prior experimentation (thanks @frankwiles) and aims to provide early, actionable feedback for PR authors while helping maintainers focus on substantive review. We also had a flood of overly verbose, low-quality reports from the same person, which I closed promptly, making use of the new guidelines we recently published in the security policy.
Fellow Report - Jacob
The last report before DjangoCon Europe. Lots of tickets triaged, reviewed, authored, discussed, and the usual kaleidoscope of miscellaneous tasks.
Django Fellow Report - Sarah
Django Fellow Sarah Boyce returns from maternity leave with part-time updates, tackling triage, reviews, security work, and GSoC prep while navigating connectivity challenges from Turkey.
Sponsored Link 1
You know @login_required. Now meet @app.reasoner(). AgentField turns Python functions into production AI agents, structured output, async execution, agent discovery. Every decorator becomes a REST endpoint. Open source, Apache 2.0. Python, Go & TypeScript SDKs.
Articles
Enforce Business Logic in the Database with Django
A practical guide to enforcing business logic at the database layer in Django using transactions, select_for_update locks, and CheckConstraint / UniqueConstraint to prevent race conditions and invalid data rather than relying on application-level validation.
Let's talk about LLMs
James Bennett consolidates his thoughts on AI/LLMs in this wide-ranging piece, ending with a call to invest in software fundamentals instead of racing to adopt the latest AI craze.
Django Table, Filter and Export With Htmx
A reusable pattern for combining django-tables2, django-filter, and HTMX into a single generic view and template. Very cool stuff.
Decoupling Your Business Logic from the Django ORM
Carlton Gibson's latest The Stack Report is a detailed dive into business logic and how to handle it in Django. This is a perennial topic, but he comes at it with decades of experience and wisdom.
djust 0.4.0 - The Developer Experience Release
djust 0.4.0 is about developer experience - making everyday tasks faster, safer, and more intuitive. 30+ new features, critical bug fixes, and a security hardening pass that eliminated every known vulnerability.
Why aren't we uv yet?
A decent chunk of new Python repos already use uv, and many users prefer it, yet coding agents still overwhelmingly recommend pip and requirements.txt.
Events
Are You Attending PyCon, or Orbiting It?
PSF Board Member Georgi Ker makes a personal case for booking hotels via the official PyCon US website before April 24th.
Design Articles
Under the hood of MDN's new frontend
From 2-min dev server starts to 2s. They rewrote MDN's entire frontend, ditching the React SPA for Lit web components, server components, and Rspack. The result: less JS shipped, scoped CSS, and a build pipeline that just works.
Videos
Debunking Django Myths - Sarah Boyce at PyTV
Django Fellow Sarah Boyce gave a talk recently at PyTV titled, "Django Has a Marketing Problem: Debunking the Myths That Won't Die." It is a fantastic overview of what Django does well and what it can improve.
Incremental Typing in Django - Carlton Gibson
Former Django Fellow and current Django Chat podcast host Carlton Gibson, recently gave a talk titled, "Static Islands, Dynamic Sea: Some Thoughts on Incremental Typing." In it he talks about why Python's dynamic nature is a feature, not a bug, and demonstrates Mantle - a library of utilities for typing around Django's liquid core.
Sponsored Link 2
Annual PyCharm Promo - 30% off, all money goes to Django
The annual PyCharm + Django promotion is live until May 1st. This is the single biggest fundraiser for Django and has raised over $350,000 since 2016.
Podcasts
Django Tasks - Jake Howard
Episode 200(!) features Jake Howard, a Senior Systems Engineer at Torchbox and the author of DEP 14, django.tasks, the highlight feature in Django 6.0. We discuss his work on the Django security team, work with Wagtail, AI dabblings, and more.
Django Job Board
Python Developer at Open Data Services
Remote UK role building Python data systems for social-impact projects, offering ~£48k plus profit share in a collaborative worker co-op.
Projects
yassi/dj-signals-panel
Display registered Django signals and receivers, showing what fires and where.
dvf/opinionated-django
An opinionated Django project with Repository pattern, Pydantic DTOs, svcs DI, and Stripe-style ULID IDs
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
17 Apr 2026 3:00pm GMT
Djangocon EU: auto-prefetching with model field fetch modes in Django 6.1 - Jacob Walls
(One of my summaries of the 2026 Djangocon EU in Athens).
There's an example to experiment with here: https://dryorm.xterm.info/fetch-modes-simple
Timeline: it will be included in Django 6.1 in August.
The motivation is the classic N+1 queries problem:
books = Book.objects.all()
for book in books:
    print(book.author.name)
    # This does a fresh query for author every time.
You can solve it with select_related(relation_names) or prefetch_related(relation_names). The first does an inner join. The second does two queries.
But: you might miss a relation. You might specify too many relations, getting data you don't need. Or you might not know about the relation at all, because the code that accesses it lives in a totally different part of the codebase.
Fetch mode is intended to solve it. You can append .fetch_mode(models.FETCH_xyz) to your query:
- models.FETCH_ONE: the current behaviour, which will be the default.
- models.FETCH_PEERS: Fetch a deferred field for all instances that came from the same queryset. More or less prefetch_related in an automatic, lazy manner.
- models.FETCH_RAISE: useful for development, it will raise FieldFetchBlocked, telling you that you have a performance problem and might need FETCH_PEERS.
This is what happens:
books = Book.objects.all().fetch_mode(models.FETCH_PEERS)
for book in books:
    # We're iterating over the query, so the query executes and grabs all books.
    print(book.author.name)
    # We accessed a relation, so at this point the prefetch_related-like
    # mechanism is fired off and all authors linked to by the books are
    # grabbed in one single query.
You can write your own fetch modes, for instance if you only want a warning instead of raising an error.
Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.
17 Apr 2026 4:00am GMT

