23 Jan 2026
Django community aggregator: Community blog posts
Django News - Djangonaut Space Session 6 Applications Open! - Jan 23rd 2026
News
uvx.sh by Astral
Astral, makers of uv, have a new "install Python tools with a single command" website.
Python Software Foundation
Announcing Python Software Foundation Fellow Members for Q4 2025!
The PSF announces new PSF Fellows for Q4 2025, recognizing community leaders who contribute projects, education, events, and mentorship worldwide.
Departing the Python Software Foundation (Staff)
Ee Durbin is stepping down as PSF Director of Infrastructure, transitioning PyPI and infrastructure responsibilities to staff while providing 20% support for six months.
Djangonaut Space News
Announcing Djangonaut Space Session 6 Applications Open!
Djangonaut Space Session 6 opens applications for an eight-week mentorship program to contribute to Django core, accessibility, third-party projects, and new BeeWare documentation.
New Admins and Advisors for Djangonaut Space
Djangonaut Space appoints Lilian Tran and Raffaella Suardini as admins and Priya Pahwa as advisor, strengthening Django community leadership and contributor support.
Wagtail CMS News
llms.txt - preparing Wagtail docs for AI tools
Wagtail publishes developer and user documentation in llms.txt to provide authoritative, AI-friendly source files for LLMs, improving accessibility and evaluation for smaller models.
Updates to Django
Today, "Updates to Django" is presented by Pradhvan from Djangonaut Space! 🚀
Last week we had 16 pull requests merged into Django by 11 different contributors - including 3 first-time contributors! Congratulations to Kundan Yadav, Parth Paradkar, and Rudraksha Dwivedi for having their first commits merged into Django - welcome on board! 🥳
This week's Django highlights: 🦄
- ModelIterable now checks if foreign key fields are deferred before attempting optimization, avoiding N+1 queries when using .only() on related managers. (#35442)
- The XML deserializer now raises errors for invalid nested elements instead of silently processing them, preventing potential performance issues from malformed fixtures. (#36769)
- Error messages now clearly indicate when annotated fields are excluded by earlier .values() calls in chained queries. (#36352)
- Improved performance in construct_change_message() by avoiding unnecessary translation_override() calculation when logging additions. (#36801)
Articles
Unconventional PostgreSQL Optimizations
Use PostgreSQL check constraints, function-based or virtual generated columns, and hash-based exclusion constraints to reduce scans, shrink indexes, and enforce uniqueness efficiently.
Django 6.0 Tasks: a framework without a worker
Django 6.0 adds a native tasks abstraction but only supports one-off tasks without scheduling, retries, persistence, or a worker backend, limiting real-world utility.
I Created a Game Engine for Django?
Multiplayer Snake implemented in Django using Django LiveView: 270 lines of Python, server-side game state, WebSocket-driven HTML updates, and no custom JavaScript.
Django Icon packs with template partials
Reusable SVG icon pack using Django template partials ({% partialdef %}) and dynamic includes to render configurable icons with classes, avoiding custom template tags.
Building Critical Infrastructure with htmx: Network Automation for the Paris 2024 Olympics
HTMX combined with Django, Celery, and procedural server-side views enabled rapid, maintainable network automation tools for Paris 2024, improving developer productivity and AI-assisted code generation.
Don't Let Old Migrations Haunt Your Codebase
Convert old data migrations that have already run into noop RunPython migrations to preserve the migration graph while preventing test slowdowns and legacy breakage.
Django Time-Based Lookups: A Performance Trap
Your "simple" __date filter might be turning a millisecond query into a 30-second table scan-here's the subtle Django ORM trap and the one-line fix that restores index-level performance.
Podcasts
Django Brew
DjangoCon US 2025 recap covering conference highlights, community discussions on a REST story, SQLite in production, background tasks, and frontend tools like HTMX.
Django Job Board
Two new senior roles just hit the Django Job Board, one focused on building Django apps at SKYCATCHFIRE and another centered on Python work with data-heavy systems at Dun & Bradstreet.
Senior Django Developer at SKYCATCHFIRE 🆕
Senior Python Developer at Cial Dun & Bradstreet
Projects
quertenmont/django-msgspec-field
Django JSONField with msgspec structs as a Schema.
radiac/django-nanopages
Generate Django pages from Markdown, HTML, and Django template files.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
23 Jan 2026 5:00pm GMT
22 Jan 2026
Django community aggregator: Community blog posts
Python Leiden meetup: PostgreSQL + Python in 2026 -- Aleksandr Dinu
(One of my summaries of the Python Leiden meetup in Leiden, NL).
He's going to revisit common gotchas of Python ORM usage, plus some PostgreSQL-specific tricks.
ORMs (object-relational mappers) define tables, columns, etc. using Python concepts: classes, attributes, and methods. In your software, you work with objects instead of rows. They can also help with database schema management (migrations and so on). It looks like this:
class Question(models.Model):
    question = models.CharField(...)
    answer = models.CharField(...)
You often have Python "context managers" for database sessions.
ORMs are handy, but you must be aware of what you're fetching:
# Bad: grabs all objects and then takes the length using Python:
questions_count = len(Question.objects.all())
# Good: let the database do it,
# the code does the equivalent of "SELECT COUNT(*)":
questions_count = Question.objects.all().count()
Relational databases allow 1:M and N:M relations. You use them with JOIN in SQL. If you use an ORM, make sure you let the database follow the relations. If you first grab the first set of objects and then grab the second kind of objects with Python, your code will be much slower.
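A minimal sketch of the difference (the category foreign key is a made-up addition to the Question model above, not something from the talk):

# Slow: one query for the questions, then one extra query per question.
for question in Question.objects.all():
    print(question.category.name)

# Fast: a single query with a JOIN; for N:M relations, prefetch_related()
# does the same job in two queries total.
for question in Question.objects.select_related("category"):
    print(question.category.name)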
"Migrations" generated by your ORM to move from one version of your schema to the next are real handy. But not all SQL concepts can be expressed in an ORM. Custom types, stored procedures. You have to handle them yourselves. You can get undesired behaviour as specific database versions can take a long time rebuilding after a change.
Migrations are nice, but they can lead to other problems from a database maintainer's point of view, like the performance suddenly dropping. And optimising is hard as often you don't know which server is connecting how much and also you don't know what is queried. Some solutions for postgresql:
- log_line_prefix = '%a %u %d' to show who is connecting to which database.
- log_min_duration_statement = 1000 logs every query taking more than 1000ms.
- log_lock_waits = on for feedback on blocking operations (like migrations).
- Handy: feedback on the number of queries being done, as simple programming errors can translate into lots of small queries instead of one faster, bigger one (see the sketch below).
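A quick way to get that query-count feedback from Django itself (a minimal sketch, not from the talk; connection.queries is only filled when DEBUG=True):

from django.db import connection, reset_queries

reset_queries()
answered = list(Question.objects.exclude(answer=""))  # any code under test
print(len(connection.queries), "queries executed")
for query in connection.queries:
    print(query["time"], query["sql"])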
If you've found a slow query, run it with EXPLAIN (ANALYZE, BUFFERS) in front of it. BUFFERS tells you how many pages of 8kB the server uses for your query (and whether those were memory or disk pages). This is so useful that they made it the default in PostgreSQL 18.
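In a Django project you can do this without leaving Python (a minimal sketch; the polls_question table name just follows from the Question example above):

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM polls_question")
    for (line,) in cursor.fetchall():
        print(line)

Django's QuerySet.explain() can produce similar output without writing raw SQL.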
Some tools:
- RegreSQL: performance regression testing. You feed it a list of queries that you worry about. It will store how those queries are executed and compare it with the new version of your code and warn you when one of those queries suddenly takes a lot more time.
- Squawk: tells you (in CI, like github actions) which migrations are backward-incompatible or that might take a long time.
- You can look at one of the database branching tools, aimed at getting safe access to production data for testing, like running your migration against a "branch"/copy of production. Several tricks are used, like filesystem layers. "pg_branch" and "pgcow" are examples. Several DB-as-a-service products also provide it (Databricks Lakebase, Neon, Heroku, Postgres.ai).
22 Jan 2026 5:00am GMT
Python Leiden meetup: PR vs ROC curves, which to use - Sultan K. Imangaliyev
(One of my summaries of the Python Leiden meetup in Leiden, NL).
Precision-recall (PR) versus Receiver Operating Characteristics (ROC) curves: which one to use if data is imbalanced?
Imbalanced data: for instance when you're investigating rare diseases. "Rare" means few people have them. So if you have data, most of the data will be of healthy people, there's a huge imbalance in the data.
Sensitivity versus specificity: sensitivity means you find most of the sick people (few false negatives), specificity means you rarely flag healthy people as sick (few false positives). Sensitivity/specificity looks a bit like precision/recall.
- Sensitivity: true positive rate, TP / (TP + FN).
- Specificity: true negative rate, TN / (TN + FP), i.e. one minus the false positive rate.
If you classify, you can classify immediately into healthy/sick, but you can also use a probabilistic classifier which returns a chance (percentage) that someone can be classified as sick. You can then tweak which threshold you want to use: how sensitive and/or specific do you want to be?
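A tiny made-up illustration of that threshold trade-off:

import numpy as np

scores = np.array([0.02, 0.10, 0.35, 0.80])  # predicted probability of "sick"
flag_at_50 = scores >= 0.5   # stricter threshold: more specific, less sensitive
flag_at_20 = scores >= 0.2   # looser threshold: more sensitive, less specific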
PR and ROC curves (curve = graph showing the sensitivity/specificity relation on two axes) are two ways of measuring/visualising that relation. He showed some data: if the data is imbalanced, PR is much better at evaluating your model. He compared balanced and imbalanced data with ROC and there was hardly a change in the curve.
He used scikit-learn for his data evaluations and demos.
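As a rough sketch of the kind of comparison he showed (not his actual demo; the dataset and model are made up), scikit-learn makes it easy to compute both the ROC summary and the PR summary on imbalanced data:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Roughly 1% positives to mimic a rare-disease setting.
X, y = make_classification(n_samples=20000, weights=[0.99], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

# ROC AUC tends to look optimistic on imbalanced data;
# average precision (the PR-curve summary) is usually more telling.
print("ROC AUC:          ", roc_auc_score(y_test, scores))
print("Average precision:", average_precision_score(y_test, scores))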
22 Jan 2026 5:00am GMT