21 Oct 2025

Django community aggregator: Community blog posts

Django on the Med - Paolo Melchiorre

🔗 Links

📦 Projects

📚 Books

🎥 YouTube

Sponsor

This episode was brought to you by HackSoft, your development partner beyond code. From custom software development to consulting, team augmentation, or opening an office in Bulgaria, they're ready to take your Django project to the next level!

21 Oct 2025 5:00pm GMT

17 Oct 2025

Django community aggregator: Community blog posts

Moving a Domain to Another Registrar

The Situation

The domain for my first SaaS project 1st-things-1st.com was registered with GoDaddy. Even though the whole project was already running under my company's name, I never really bothered to move the domain to my company's account at Namecheap.

Last week I noticed that the domain was about to expire, and I thought, alright, time to finally do it.

I had never transferred a domain before, so I wasn't sure how it would go or whether I could pull it off without any downtime. Here's how it went.

The Story

Namecheap has this feature called "Transfer to Us." You just follow a few simple steps: request a transfer for your domain, enter a one-time Auth code (also called an EPP code, for Extensible Provisioning Protocol) from the other registrar to confirm you're the owner, and pay for another year.

On GoDaddy's side, there was a whole confirmation process to make sure I really wanted to transfer the domain and understood it couldn't be undone. Once that was done, I got the code I needed for Namecheap.

Then came the waiting part. It took almost a week to get the confirmation that the transfer was successful, which was expected according to their help docs. As soon as I got the notification, I jumped into my Namecheap settings to check if everything looked right.

Unfortunately, the domain records weren't migrated automatically. Only the ownership was transferred. The nameserver settings in Namecheap were still pointing to GoDaddy's servers.

And since GoDaddy no longer listed the domain or its DNS records, I had to recover those values myself.

I needed to extract the existing DNS records for the domain and each of its subdomains.

I did it from the command line like this:

$ dig 1st-things-1st.com ANY > 1st-things-1st.txt
$ dig www.1st-things-1st.com ANY >> 1st-things-1st.txt
$ dig my.1st-things-1st.com ANY >> 1st-things-1st.txt
$ dig our.1st-things-1st.com ANY >> 1st-things-1st.txt
$ dig apps.1st-things-1st.com ANY >> 1st-things-1st.txt
$ dig analytics.1st-things-1st.com ANY >> 1st-things-1st.txt

Alternatively, you can look up WHOIS information on sites like who.is.
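
If you prefer to script this, a rough Python equivalent using the third-party dnspython package (my own sketch; the post itself only uses dig and who.is) could look like this:

import dns.resolver

HOSTS = ["1st-things-1st.com", "www.1st-things-1st.com", "my.1st-things-1st.com"]
RECORD_TYPES = ["A", "AAAA", "CNAME", "MX", "TXT"]

with open("1st-things-1st.txt", "w") as f:
    for host in HOSTS:
        for record_type in RECORD_TYPES:
            try:
                answers = dns.resolver.resolve(host, record_type)
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                continue
            for rdata in answers:
                # One line per record, e.g. "www.1st-things-1st.com CNAME ..."
                f.write(f"{host} {record_type} {rdata}\n")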

Once I had all the records, I switched to Namecheap's nameservers (Namecheap BasicDNS) and added everything manually.

The website was down for about 30 to 60 minutes, depending on where in the world the visitor was. Some downtime was unavoidable, but overall, the transfer went through successfully.

The Learnings

The whole thing could have been smoother if I had known what to expect. Here's how I'd do it next time:

  1. Save all domain records to a local file before starting.
  2. Get the Auth/EPP code from the old registrar.
  3. Request the transfer at the new registrar.
  4. Wait for the transfer to complete.
  5. Pick the best time for downtime and let users know in advance.
  6. Switch to the new nameservers and add all records manually.

17 Oct 2025 5:00pm GMT

Fixing the `Query` import error while upgrading Wagtail from 5 to 7

While upgrading an old project from Wagtail 5 to 7, I encountered this error:

ImportError: cannot import name 'Query' from 'wagtail.search.models'

After some searching, I found out what's wrong. It turns out Wagtail moved the Query model from wagtail.search.models to wagtail.contrib.search_promotions in version 5, but until version 6, you …
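
Based on the error above, the usual shape of the fix is along these lines (a sketch; check the Wagtail 5.0 release notes for your exact setup):

# Before (Wagtail 4.x and earlier):
# from wagtail.search.models import Query

# After (Wagtail 5+): Query lives in the search promotions contrib app, which
# also needs "wagtail.contrib.search_promotions" in INSTALLED_APPS.
from wagtail.contrib.search_promotions.models import Query

def record_search(search_query: str) -> None:
    # Same usage as before; only the import location changed.
    Query.get(search_query).add_hit()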

Read now

17 Oct 2025 3:54pm GMT

Django News - 2025 Malcolm Tredinnick Memorial Prize awarded to Tim Schilling - Oct 17th 2025

News

2025 Malcolm Tredinnick Memorial Prize awarded to Tim Schilling

The Malcolm Tredinnick Memorial Prize for 2025 was awarded to Tim Schilling. Check out Tim's post about winning it too.

djangoproject.com

2026 DSF Board Nominations

DSF board nominations are now open, inviting candidates to help shape Django governance, marketing, and global event outreach through strategic leadership.

djangoproject.com

Python 3.15.0 alpha 1

Python 3.15.0 alpha 1 introduces experimental features including a dedicated profiling package, default UTF-8 encoding, a new PyBytesWriter C API, and improved error messages.

blogspot.com

Python 3.13.9 is now available!

Python 3.13.9 quickly fixes a regression in inspect.getsourcelines when decorators are followed by comments or empty lines, enhancing introspection reliability for modern development.

blogspot.com

Announcing PSF Community Service Award Recipients!

PSF Community Service Awards recognize exemplary Python community contributions with awardees like Katie McLaughlin whose diverse efforts include advancing Django and open source outreach.

blogspot.com

Updates to Django

Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! πŸš€

Last week we had 9 pull requests merged into Django by 7 different contributors - including 1 first-time contributor! Congratulations to Tim Kamanin for having their first commits merged into Django - welcome on board!

In Django 5.2, compatibility was added for oracledb 3.4.0 (ticket #36646).

News from Django Forum:

Let's build an automatic Django ORM feature matrix - How about creating an updated table showing which databases support which Django ORM feature? Find out more in the forum post.

Django Newsletter

Wagtail CMS

Wagtail Space 2025: A Stellar Journey

Wagtail Space 2025 showcased innovative CMS customization including practical AI integrations, advanced search features, and community-driven contributions that enhance Wagtail and Django development.

wagtail.org

Sponsored Link 1

Expert Insights. Better Django.

Unlock your project's full potential with our Django consulting services. From tricky bugs to big-picture architecture - we've got the answers. Learn more!

hacksoft.io

Articles

Introducing django-http-compression

A new django-http-compression package extends Django's response compression to support Gzip, Brotli and Zstandard, enabling more efficient bandwidth usage and performance.

adamj.eu

Django bulk_update memory issue

Django bulk_update suffers from unexpected memory bloat during large migrations, and a custom batching approach prevents SIGTERM crashes by limiting in-memory update statements.

pecar.me
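
The general batching idea looks roughly like this (my sketch, not the article's code): slice the queryset yourself so only one batch of UPDATE statements is built in memory at a time.

def batched_bulk_update(queryset, fields, batch_size=1000):
    batch = []
    for obj in queryset.iterator(chunk_size=batch_size):
        # ... mutate the relevant fields on obj here ...
        batch.append(obj)
        if len(batch) >= batch_size:
            queryset.model.objects.bulk_update(batch, fields)
            batch.clear()
    if batch:
        queryset.model.objects.bulk_update(batch, fields)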

Python 3.14 is here. How fast is it?

Python 3.14 improves CPython performance with gains over older versions and promising free-threading benefits for multi-threaded CPU intensive workloads, while benchmarks show PyPy's superior speed.

miguelgrinberg.com

Releasing Python 3.14.0 Β· Hugo van Kemenade

Python 3.14 release automation details buildbot tests, CI builds, deferred blockers, and installer creation steps to ensure reliable Python deployments benefiting Django applications.

hugovk.dev

Adding imports to the Django shell

Overriding the Django shell command to pre-import custom modules streamlines interactive development by reintroducing familiar functionality from django-extensions in Django 5.2.

jmduke.com
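
The pattern described is roughly this (a sketch assuming Django 5.2's shell auto-import hook; see the article for the full version):

# myapp/management/commands/shell.py
from django.core.management.commands.shell import Command as ShellCommand

class Command(ShellCommand):
    def get_auto_imports(self):
        # Keep Django's defaults (all model classes) and pre-import extras.
        return super().get_auto_imports() + [
            "django.conf.settings",
            "datetime",
        ]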

Events

PyCascades 2026 closes Monday, October 27th, 2025 AoE

PyCascades 2026 calls for Python talk proposals with comprehensive submission guidelines, diverse speaker options, and detailed scheduling for an in-person Vancouver event.

The CFP closes on Monday, October 27th, 2025 AoE.

pretalx.com

My Django On The Med 2025 🏖️

The Django On The Med 2025 sprint successfully accelerated ORM improvements and PR discussions while fostering community collaboration through productive coding sessions and networking.

paulox.net

My DjangoCon US 2025

DjangoCon US 2025 highlighted innovative Django enhancements with deep dives into AI, deployment automation, governance, and community-driven features advancing the framework's evolution.

paulox.net

Videos

PyBeach videos are up!

PyBeach's videos are up, exploring various advanced Python topics including tooling, packaging, patterns, and collaborative development insights.

youtube.com

Podcasts

Episode 26.1: CPython Sprint Week in Cambridge UK, Part 1 by core.py

Episode 26.1 kicks off coverage of the CPython Sprint Week in Cambridge, UK, featuring interviews with over a dozen members of the core team sharing updates and insights from the event.

spotify.com

Django News Jobs

This week brings three new Django and Python job listings, with opportunities ranging from early-stage startups to established companies. New roles include a Founding Backend Engineer in San Francisco, a Senior Python Developer focused on health tech, and a Senior Engineer working across Python and Solidity.

Founding Backend Engineer (On-site San Francisco) - Python • AWS • LLM/RAG at Purrfect Hire 🆕

Senior Python Developer at Basalt Health 🆕

Senior Software Engineer (Python and Solidity) at LiquidFi 🆕

Django/Python Full-stack Engineer at JoinTriple.com

Senior Python/Django Engineer at Search Atlas

Django Newsletter

Projects

knyghty/django-snakeoil

Simple and quick meta descriptions and titles for your django objects and URLs. Supports OpenGraph and Twitter Cards.

github.com

CuriousLearner/django-postgres-anonymizer

Django integration for PostgreSQL Anonymizer extension.

github.com

adamchainz/django-http-compression

Django middleware for compressing HTTP responses with Zstandard, Brotli, or Gzip.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

17 Oct 2025 3:00pm GMT

16 Oct 2025

Django community aggregator: Community blog posts

Pycon NL: workshop: measuring and elevating quality in engineering practice - Daniele Procida

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

Daniele works as director of engineering at Canonical (the company behind Ubuntu). What he wants to talk about today is how to define, measure and elevate engineering quality at scale. That's his job. He needs to influence/change that in an organization with a thousand technical people in dozens of teams with 100+ projects. They ideally must converge on the standards of quality he has defined, and there's only one of him. Engineering people are opinionated people :-)

Your personal charm and charisma wear thin after a while: there needs to be a different way. So: how can you get 1000+ people to do what you want, the way you want, ideally somewhat willingly? You cannot make people do it. You'll have to be really enthusiastic about it.

He suggests three things:

  • Principle. Description of quality as objective conditions, allowing it to be defined and measured.
  • Tool. A simple dashboard, that reinforces your vision of quality and reflects it back to your teams. Daniele focuses on documentation, and showed a dashboard/spreadsheet that showed the documentation status/progress of various projects. You can do the same for "security" for instance.
  • Method. A way of drawing your teams into your vision, so that they actively want to participate.

It being a workshop, we worked through a few examples. Someone mentioned "improved test coverage in our software".

  • Describe your aim(s). What do you want. What is the background documentation. What is your reason.
  • You need objectives on various levels. "Started", "first results", "mature". And you can have those levels for each of your aims/categories. Start small and start specific.
    • Started. "The team understands the significance of automated testing". "We have coverage information about tests".
    • First results. "There is a significant increase in test coverage". Note: "significant" means you have something to talk about. You can be reasonable on the one hand, but you can also call out low numbers. Human-sized words with value, like "significant", help internalize it. More than a number like "25%" would ever do. You don't want to check off the box "25%", you want to be able to claim that your team now has significant test coverage!
    • Mature. Let's keep it simple with "100% test coverage".
  • Measure the level each project is at right now. Show it in a dashboard. He used a Google spreadsheet previously; now it is a Django website. He'll make it a world-public website soon, so it is visible for everybody. This helps draw teams into it.

Why does this work with human beings?

  • Peer pressure. People see their peers doing the right thing. People want to be seen doing the right thing.
  • Objectification. The contract and the results are described objectively. The conditions and evidence stand outside you: it is not personal anymore, so it is not a threat.

Humans are funny creatures. As soon as they believe in something, it will carry them over many bumps in the road.

People love to see their work recognized. So if you maintain a spreadsheet with all the projects' results and progress, you won't have to ask them for an update: they will bug you if the spreadsheet hasn't been updated in a while. They really want to see the work they've put in!

You can get a positive feedback loop. If the work you need to do is clear, if the value is clear and if there is recognition, you'll want to do it almost automatically. And if you do it, you mention it in presentations and discussions with others. Then the others are automatically more motivated to work on it, too.

Giving kids a sticker when they do something successfully really helps. It also works for hard-core programmers and team managers!

https://reinout.vanrees.org/images/2025/austria-vacation-7.jpeg

Unrelated photo from our 2025 holiday in Austria: just over the border in Germany, Passau has a nice cathedral.

16 Oct 2025 4:00am GMT

Pycon NL: typing your python code like a ninja - Thiago Bellini Ribeiro

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

By now, the basics of python type hints are well known:

def something(x: int) -> float:
    ...

def get_person(name: str, age: int|None) -> Person:
    ...

Note: I've tried typing (...) fast enough, but my examples will probably have errors in them, so check the typing documentation! His slides are here so do check those :-)

Sometimes you can have multiple types for some input. Often the output also changes then. You can accept both input types and allow both output types, but with @overload you can be more specific:

from typing import overload

@overload
def something(x: str) -> str:
    ...

@overload
def something(x: int) -> int:
    ...

def something(x: str | int) -> str | int:
    # The actual implementation handles both cases.
    ...

You can do the same with a generic:

from typing import TypeVar

T = TypeVar("T")

def something(x: T) -> T:
    ...

# New syntax (Python 3.12+)
def something[T](x: T) -> T:
    ...

# Same, but restricted to two types
def something[T: str|int](x: T) -> T:
    ...

Generic classes can be handy for, for instance, django:

class ModelManager[T: Model]:
    def __init__(self, model_class: type[T]) -> None:
        ...

    def get(self, pk: int) -> T:
        ...

Type narrowing. Sometimes you accept a broad range of items, but if a check function returns True, it means the input is of a specific type:

from typing import Any, TypeGuard

def is_user(obj: Any) -> TypeGuard[User]:
    ...

def something(obj: Any):
    if is_user(obj):
        # From here on, typing knows obj is a User
        ...

Generic **kwargs are a challenge, but there's support for it:

from typing import TypedDict, Required, Unpack

class SomethingArgs(TypedDict, total=False):
    username: Required[str]
    age: int

def something(**kwargs: Unpack[SomethingArgs]):
    ...

If you return "self" from some class method, you run into problems with subclasses, as normally the method says it returns the parent class. You can use from typing import Self` and return the type ``Self instead.

Nice talk, I learned quite a few new tricks!

https://reinout.vanrees.org/images/2025/austria-vacation-2.jpeg

Unrelated photo from our 2025 holiday in Austria: church of Neufelden seen on the top of the hill.

16 Oct 2025 4:00am GMT

Pycon NL: tooling with purpose - Aris Nivortis

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

Full title: tooling with purpose: making smart choices as you build.

Aris uses python and data to answer research questions about everything under the ground (he's a geophysicist).

As a programmer you have to make lots of choices. Python environment, core project tooling, project-specific tooling, etc.

First: python environment management: pyenv/venv/pip, poetry, uv. And conda/pixi for the scientific python world. A show of hands showed uv to be real popular.

Now core project tooling. Which project structure? Do you use a template/cookiecutter for it? Subdirectories? A testing framework? Pytest is the default, start with that. (He mentioned "doctests" becoming very popular: that surprised me, as they were popular before 2010 and started to be considered old and deprecated after 2010. I'll need to investigate a bit more).

Linting and type checking? Start with ruff for formatting/checking. Mypy is the standard type checker, but pyright/vscode and pyre are options. And the new ty is alpha, but looks promising.

Also, part of the core tooling: do you document your code? At least a README.

For domain specific tooling there are so many choices. It is easy to get lost. What to use for data storage? Web/API? Visualization tools. Scientific libraries.

Choose wisely! With great power comes great responsibility, but with great power also comes the burden of decision-making. Try to standardize. Enforce policies. Try to keep it simple.

Be aware of over-engineering. Over-engineering often comes with good intentions. And... sometimes complexity is the right path. As an example, look at database choices. You might wonder whether to use SQL or a NoSQL database and whether you need to shard your database. But often a simple sqlite database file is fast enough!

Configuration management: start with a simple os.getenv() and grab settings from environment variables. Only start using .toml files when that no longer fits your use case.
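
A minimal sketch of that starting point:

import os

# Read configuration from environment variables with sensible defaults;
# reach for .toml (or similar) config files only when this stops being enough.
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
DATA_DIR = os.getenv("DATA_DIR", "./data")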

Web/api: start simple. You probably don't need authentication from the start if it is just a quick prototype. Get something useful working, first. Once it works, you can start working on deployment or a nicer frontend.

Async code is often said to be faster. But debugging is time-consuming and hard. Error handling is different. It only really pays off when you have many, many concurrent operations. Profile your code before you start switching to async. It won't speed up CPU-bound code.

Logging: just start with the built-in logging module. Basic logging is better than no logging. Don't start on the Perfect Fancy Logging Setup until you have the basics running.

Testing is good and recommended, but don't go overboard. Don't "mock" everything to get 100% coverage. Those kinds of tests break often. And often the tests test the mock instead of your actual code. Aim for roughly the same amount of test code as actual code.

Some closing comments:

  • Sometimes simple choices are better.
  • Don't let decision-making slow you down. Start making prototypes.
  • One-size-fits-all solutions don't exist. Evaluate for your use case.
  • If you are an experienced developer, help your colleagues. They have to make lots of choices.
  • Early-career developer? Luckily a lot of choices are already made for you due to company policy or because the project you're working on already made most choices for you :-)
https://reinout.vanrees.org/images/2025/austria-vacation-4.jpeg

Unrelated photo from our 2025 holiday in Austria: Neufelden station. From a 1991 train trip. I remembered the valley as being beautiful. As we now do our family holidays by train, I knew where to go as soon as Austria was chosen as destination.

16 Oct 2025 4:00am GMT

Pycon NL: programming, past and future - Steven Pemberton

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

(Note: I've heard a keynote by Steven at pygrunn 2016.)

Steven is in the python documentary; he co-designed the abc programming language that was the predecessor to python. ABC was a research project that was designed for the programmer's needs. He also was the first user of the open internet in Europe in November 1988, as the CWI research institute had the first 64kbps connection in Europe. Co-designer of html, css, xhtml, rdf, etc.

1988, that's 37 years ago. Only about 30 years before that, the first municipality (Norwich, UK) got a computer: 21 huge crates. It ran continuously for 10 years. A modern Raspberry Pi would take 5 minutes to do the same work!

Those early computers were expensive: an hour of programming time was a year's salary for a programmer. So, early programming languages were designed to optimize for the computer. Nowadays, it is the other way around: computers are almost free and programmers are expensive. This hasn't really had an effect on the way we program.

He's been working on declarative programming languages. One of these declarative systems is xforms, an xml-based system for defining applications. It is a w3c standard, but you rarely see it mentioned. But quite a few companies and government organisations use it, like the Dutch weather service (KNMI).

The NHS (UK nationwide health service) had a "Lorenzo" system for UK patient records that cost billions of pounds, took 10 years to build and basically failed. Several hospitals (and now hospitals in Ukraine!) use an xforms-system written in three years by a single programmer. Runs, if needed, on a Raspberry pi.

He thinks declarative programming allows programmers to be at least ten times more productive. He thinks, eventually everyone will program declaratively: fewer errors, more time, more productivity. (And there's a small conference in Amsterdam in November).

https://reinout.vanrees.org/images/2025/austria-vacation-1.jpeg

Unrelated photo from our 2025 holiday in Austria: in Vienna/Wien I visited the military museum. This is the car in which archduke Franz Ferdinand was shot in Sarajevo in 1914.

16 Oct 2025 4:00am GMT

Pycon NL: keynote: how not to get fooled by your data while AI engineering - Sofie van Landeghem

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

(Sofie helps maintain FastAPI, Typer and spaCy; this talk is all about AI).

Sofie started with an example of a chatbot getting confused about the actual winner of an F1 race after disqualification of the winner. So you need to have a domain expert on board who can double-check the data and the results.

Let's say you want your chatbot output to link to Wikipedia for important terms. That's actually a hard task, as it has to do normalization of terms, differentiating between Hamilton-the-driver, Hamilton-the-town, Hamilton-the-founding-father and more.

There's a measure for quality of output that's called an "F-score". She used some AI model to find the correct page and got a 79.2% F-score. How good or bad is it?
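
(Note: the F-score here is presumably the F1 score, the harmonic mean of precision and recall. A quick sketch, not from the talk:)

def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; 0.792 corresponds to the 79.2% above.
    return 2 * precision * recall / (precision + recall)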

For this, you can try to determine a reasonable baseline. "Guessing already means 50%" is what you might think. No, there are 7 million Wikipedia pages, so random guessing gives a 0% F-score. Let's pick all the pages which actually mention the word "Hamilton". If we then look at more words like "Alexander Hamilton" or "Lewis Hamilton", we can reason that a basic non-AI approach should get at least 78%, so the AI model's 79.2% isn't impressive.

The highest reachable quality depends on the data itself and what people expect. "Hamilton won at Spa", do you expect Spa to point at the town or at the circuit? The room voted 60/40, so even the best answer itself can't be 100% correct :-)

A tip: if you get a bad result, investigate the training data to see if you can spot some structural problem (which you can then fix). Especially if you have your own annotated data. In her example, some of the annotators annotated circuit names including the "GP" or "grand prix" name ("Monaco GP") and others just the town name ("Spa").

Some more tips:

  • Ensure your label scheme is consistent.
  • Draft clear annotation guidelines.
  • Measure inter-annotator agreement (IAA). So measure how much your annotators agree on terms. An article on F1 and politics: how many annotate it as politics and how many as F1?
  • Consider reframing your task/guidelines if the IAA is low.
  • Model uncertainty in your annotation workflow.
  • Identify structural data errors.
  • Apply to truly unseen data to measure your model's performance.
  • Make sure you climb the right hill.
https://reinout.vanrees.org/images/2025/austria-vacation-8.jpeg

Unrelated photo from our 2025 holiday in Austria: just over the border in Germany, we stayed two days in Passau. View from the 'Oberhaus' castle on three rivers combining, with visibly different colors. From the left, the small, dark 'Ilz'. The big, drab-colored one in the middle is the 'Donau' (so 'schöne blaue Donau' should be taken with a grain of salt). From the right, also big, the much lighter 'Inn' (lots of granite sediment from the Alps, here).

16 Oct 2025 4:00am GMT

Pycon NL: kedro, lessons from maintaining an open source framework - Merel Theisen

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

Full title: leading kedro: lessons from maintaining an open source python framework.

Merel is the tech lead of the python open source framework kedro.

What is open source? Ok, the source code is publicly available for anyone to use, modify and share. But it is also a concept of sharing. Developing together. "Peer production". It also means sharing of technical information and documentation. In the 1990s the actual term "open source" was coined. Also, an important milestone: Github was launched in 2008, greatly easing open source development.

Kedro is a python toolbox that applies software engineering principles to data science code, making it easier to go from prototype to production. Started in 2017, it was open sourced in 2019. (Note: Kedro has now been donated to the Linux foundation). This made it much easier to collaborate with others outside the original company (Quantumblack).

Open source also means maintenance challenges. It is not just code. Code is the simple part. How to attract contributors? How to get good quality contributions? What to accept/reject? How to balance quick wins with the long term vision of the project? How to make contributors come back?

What lessons did they learn?

  • Importance of contributor guidance. They themselves had high standards with good programming practices. How much can you ask from new contributors? They noticed they needed to improve their documentation a lot. And they had to improve their tooling. If you want well-formatted code, you need easy tools to do the formatting, for instance. And you need to actually document your formatting guidelines :-)
  • Response time is important. Response time for issues, pull requests and support. If you don't get a timely answer, you'll lose interest as a contributor. Also: tickets need to be polished and made clearer so that new contributors can help fix them.
  • Sharing pain points is a contribution, too. More contributors and users automatically mean more feature requests. But you don't want your project to become a Frankenstein monster... A configuration file, for instance, can quickly become too cluttered because of all the options. Sometimes you need to evolve the architecture to deal with common problems. Users will tell you what they want, but perhaps it can be solved differently.
  • The importance of finding contribution models that fit. Perhaps a plugin mechanism for new functionality? Perhaps a section of the code marked "community" without the regular project's guarantees about maintenance and longevity?
  • Be patient and kind. "Open source" means "people". Code is the easy part, people add complexity. Maintainers can be defensive and contributors can be demanding.
https://reinout.vanrees.org/images/2025/austria-vacation-6.jpeg

Unrelated photo from our 2025 holiday in Austria: Neufelden has a dam+reservoir, the water travels downstream by underground pipe to the hydropower plant. At this point the pipe comes to the surface and crosses the river on a concrete construction. Nearby, the highest road bridge in this region also crosses.

16 Oct 2025 4:00am GMT

Pycon NL: from flask to fastapi - William Lacerda

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

Full title: from flask to fastapi: why and how we made the switch.

He works at "polarsteps", a travel app. Especially a travel app that will be used in areas with really bad internet connectivity. So performance is top of mind.

They used flask for a long time. Flask 2 added async, but it was still WSGI-bound. They really needed the async scaling possibility for their 4 million monthly users. Type hinting was also a big wish item for improved reliability.

They switched to fastapi:

  • True async support: it is ASGI-native.
  • Typing and validation with pydantic. Pydantic validates requests and responses. Type hints help a lot.
  • Native auto-generated docs (openapi). Built-in swagger helps for the frontend team.

This meant they gave up some things that Flask provided:

  • Flask has a mature ecosystem. So they left a big community + handy heap of stackoverflow answers + lots of ready-made plugins behind.
  • Integrated command-line dev tools. Flask is handy there.
  • Simplicity, especially for new devs.

They did a gradual migration. So they needed to build a custom fastapi middleware that could support both worlds. And some api versioning to keep the two code bases apart. It took a lot of time to port everything over.

The middleware was key. It was completely async in fastapi, and every request came through it. If needed, a request would be routed to Flask via WSGI; if possible, it would go to the new fastapi part of the code.
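
A rough sketch of that general pattern (my example, not their actual middleware): mount the legacy Flask app inside the FastAPI app so that unported paths fall through to WSGI.

from fastapi import FastAPI
from fastapi.middleware.wsgi import WSGIMiddleware
from flask import Flask

flask_app = Flask(__name__)  # the legacy WSGI code base
app = FastAPI()              # the new ASGI code base

@app.get("/api/v2/ping")
async def ping():
    return {"ok": True}

# Anything FastAPI doesn't handle falls through to Flask via WSGI.
app.mount("/", WSGIMiddleware(flask_app))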

For the migration, they made a dashboard of all the endpoints and the traffic volume. They migrated high-traffic APIs first, for early infra validation. They paid attention to improvements by checking whether queries got faster, with lots of monitoring of both performance and errors.

Some lessons learned:

  • Async adds complexity, but pays off at scale. They started the process with 4 million users, now they're at 20 million.
  • Pydantic typing catches errors early.
  • Versioned middleware made incremental delivery safe.
  • Data-driven prioritization (=the dashboard) beats a big-bang rewrite.
  • AI helps, but hallucinates too much on complex APIs.
https://reinout.vanrees.org/images/2025/austria-vacation-3.jpeg

Unrelated photo from our 2025 holiday in Austria: the beautiful 'große Mühl' river valley.

16 Oct 2025 4:00am GMT

Pycon NL: don't panic, a developer's guide to security - Sebastiaan Zeeff

(One of my summaries of the Pycon NL one-day conference in Utrecht, NL).

He showed a drawing of Cornelis "wooden leg" Jol, a 17th-century pirate from Sebastiaan's hometown. Why is he a pirate? He dresses like one, has a wooden leg, murders people like a pirate and even has a parrot, so he's probably a pirate. For python programmers used to duck typing, this is familiar.

In the 17th century, the Netherlands was economically wealthy and had a big sea-faring empire. But they wanted a way to expand their might without paying for it. So... privatization to the rescue. You give pirates a vrijbrief, a government letter saying they've got some kind of "permission" from the Dutch government to rob and pillage and kill everybody, as long as it wasn't Dutch people and ships: a privateer. So it looks like a pirate and behaves like a pirate, but it isn't technically a real pirate.

Now on to today. There are a lot of cyber threats, often state-sponsored. You might have a false sense of security in working for a relatively small company instead of for a juicy government target. But... privateers are back! Lots of hacking companies operate with cover from their governments, as long as they hack other countries. And hacking small companies can also be profitable.

"I care about security". Do you really? What do real security people think? They think developers don't really pay much attention to it. Eye-roll at best, disinterest at worst. Basically, "it is somebody else's problem".

What you need is a security culture: buy-in at every level. You can draw an analogy with the safety culture at physically dangerous companies, like petrochemical plants. So you, as a developer, should argue for security with your boss. You are a developer, so you have a duty to speak up, just like a generic employee at a chemical plant has the duty to speak up when seeing something risky.

You don't have to become a security expert (on top of everything else), but you do have to pay attention. Here are some pointers:

  • "Shift left". A term meaning you have to do it earlier rather than later. Don't try to secure your app just before shipping, but take it into account from the beginning. Defense in depth.
  • "Swiss cheese model". You have multiple layers in your setup. Every layer only needs one hole for the total to be penetrated.
  • Learn secure design principles. "Deny by default", "fail securely", "avoid security by obscurity", "minimize your attack surface", etc. Deny by default is a problem in the python world. We're beginner-friendly, so often everything is open...
  • Adopt mature security practices. Ignore ISO 27001, that's too hard to understand. Look at OWASP instead. OWASP DevSecOps maturity model ("pin your artifacts", for instance).
  • Know common vulnerabilities. Look at the popular "top 10" lists. Today, SQL injection still makes victims...
https://reinout.vanrees.org/images/2025/austria-vacation-5.jpeg

Unrelated photo from our 2025 holiday in Austria: center of Neufelden, nicely restored and beautifully painted.

16 Oct 2025 4:00am GMT

15 Oct 2025

Django community aggregator: Community blog posts

Developing and building with AI

Earlier this year, how to use agentic AI/LLMs 'clicked' in my head, mainly when I tried out Zed's agentic mode and it could take a codebase as context and do simple tasks for me to review. This was great for adding admin classes to my existing project or creating __str__ methods for my models. But I often found myself going in circles when building out a larger feature. Over the summer Brian Casel launched Agent OS and along with it came another term: spec-driven development. It took me some time to get my head around the concept, but I have really gotten into the flow over the last month.

Using Agent OS has allowed me to build out features with remarkable speed. Features that probably would have taken weeks were compressed into days or even hours. The key with this process is getting the AI to have layers of context (standards, product, specs), and it starts way before code: most of my time is spent reviewing markdown files. Then it's prompting the AI of choice to execute a task one at a time (or a few in a row), with me reviewing the output and making manual adjustments or creating a custom prompt if needed. The key to consistency here is to ensure any new decisions are recorded back into the appropriate layer (standards, product or specs). Agent OS does this by creating the specs and tasks in the first place and then adding verification documentation as it executes tasks.

For me though, it's understanding the meta framework that Agent OS provides, which could be applied to other industries interacting with AI. As I have mentioned, there are 3 layers of context built into Agent OS: standards, product and specs. Really these are 3 layers of specificity of context. Standards are generally global pieces of information that are relevant to a user; in Agent OS this is how you code and your tech stack. But say in photography, this could be your camera, the editing tools you use, your preferred styles. Then we have a product. When programming this is a project, but again in photography this would likely be a particular client or type of shoot you offer. Finally we get to the specs: specs in coding are the documentation of how to build a feature; relating this to photography, it is processing a singular photoshoot for a client. This type of layering I can see being applicable across other knowledge-based industries. For example, one of my clients is in sports nutrition; they are exploring AI and I can see this layered framework being applicable to them in how they add AI features to more reliably do the same thing each time for a coach.

I'm excited to try Agent OS 2.0 this week, which has sub-agents when used with Claude Code. The use of sub-agents again reinforces the meta framework, as I hand off different specialised tasks to allow me to focus on the larger product being built. One final important note is that we need to start focusing on documentation, particularly our standards, since they form the basis for an LLM to write code similar to what we would have written in the first place.

15 Oct 2025 5:00am GMT

12 Oct 2025

Django community aggregator: Community blog posts

My Django On The Med 2025 🏖️

A summary of my experience at Django On The Med 2025 told through the posts I published on Mastodon during the conference.

12 Oct 2025 3:00am GMT

10 Oct 2025

Django community aggregator: Community blog posts

Django News - 🥧 Python 3.14 is released! - Oct 10th 2025

News

Python 3.14.0 (final) is here!

Python 3.14.0 release offers new free-threaded support, deferred annotations, template string literals, multiple interpreters, and performance optimizations beneficial to Django backends.

blogspot.com

Python Insider: Python 3.13.8 is now available

Python 3.13.8 releases approximately 200 bug fixes, build improvements, and documentation updates for enhanced stability and performance, benefiting Django projects and upgrades.

blogspot.com

Python 3.x security release

This week we saw security releases for every active Python version: Python 3.9.24, Python 3.10.19, Python 3.11.14, and Python 3.12.12.

Django Newsletter

Updates to Django

Today 'Updates to Django' is presented by Pradhvan from Djangonaut Space! 🚀

Last week we had 13 pull requests merged into Django by 7 different contributors - including a first-time contributor! Congratulations to Chaitanya Keyal for having their first commits merged into Django - welcome on board! 🎉

This week's Django highlights 🌟

That's all for this week in Django development! 🐍

Django Newsletter

Wagtail CMS

Bring your UX feature requests to Wagtail Space community day

Wagtail's UI team encourages proposals and votes on UX enhancements focused on the CMS admin interface during the upcoming Wagtail Space community day event.

wagtail.org

Articles

Django Forever

After fourteen years of evolution, Django remains a stable, ergonomic framework with excellent API design and comprehensive documentation, sustaining long-term open source commitment.

jmduke.com

Django & REST & APIs - Software Crafts

Proposes a unified Django API design integrating URL routing, class-based view layers and flexible serialization that leverages ORM definitions and supports CRUD operations.

softwarecrafts.co.uk

Run Django tests using PostgreSQL in GitHub Actions

Configure GitHub Actions to run Django unit tests on a PostgreSQL service using environment variables with python-dotenv and dj-database-url for accurate production replication.

loopwerk.io
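
The settings side of that setup typically looks something like this (my assumption of the general shape, not the article's exact code):

# settings.py
import os

import dj_database_url
from dotenv import load_dotenv

load_dotenv()  # picks up a local .env file; CI sets real environment variables

DATABASES = {
    "default": dj_database_url.parse(
        os.environ.get(
            "DATABASE_URL",
            "postgres://postgres:postgres@localhost:5432/test_db",
        )
    )
}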

Disabling Signup in Django allauth

Disable Django allauth user registration by implementing a custom AccountAdapter that returns False in is_open_for_signup to completely restrict signup functionality.

mariatta.ca
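
A minimal sketch of that approach (check the django-allauth docs for your version):

# adapters.py
from allauth.account.adapter import DefaultAccountAdapter

class NoSignupAccountAdapter(DefaultAccountAdapter):
    def is_open_for_signup(self, request):
        return False

# settings.py
ACCOUNT_ADAPTER = "myproject.adapters.NoSignupAccountAdapter"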

Django: one ORM to rule all databases 💍

Django ORM matrix compares official database backends and highlights supported and limited ORM features across PostgreSQL, SQLite, MariaDB, MySQL, and Oracle.

paulox.net

DjangoCon US 2025 Recap

Kati Michel's annual DjangoCon US Recap is here!

github.io

DjangoCon US 2025: A Celebration of Community, Code and 20 Years of Django

DjangoCon US 2025 celebrated Django's 20-year milestone with sessions on GeneratedField, db_comment, and PostgreSQL enhancements, strengthening community collaboration.

caktusgroup.com

Django News Jobs

Senior Python Developer at Basalt Health 🆕

Senior Software Engineer (Python and Solidity) at LiquidFi 🆕

Django/Python Full-stack Engineer at JoinTriple.com

Senior Python/Django Engineer at Search Atlas

Django Newsletter

Projects

FarhanAliRaza/django-rapid

Contribute to FarhanAliRaza/django-rapid development by creating an account on GitHub.

github.com

joshuadavidthomas/djtagspecs

Structured metadata for Django-style template tags.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

10 Oct 2025 3:00pm GMT

Django: Introducing django-http-compression

HTTP supports response compression, which can significantly reduce the size of responses, thereby decreasing bandwidth usage and load times for users. It's a cheap and valuable technique for improving website performance.

Lighthouse, Google's web performance auditing tool, recommends enabling compression where it is not enabled, presenting estimated bandwidth savings. For example, on one client site, it estimated a 64KiB (65%) saving on a dashboard page:

Lighthouse Document request latency audit, showing "No compression applied"

For Django projects, many deployment situations will let you enable response compression at the web server or CDN level. But there are still cases where you may not have that option or it's inconvenient, such as with some PaaS providers. In such situations, you can use Django's built-in GZipMiddleware to use Gzip compression. Pop it in MIDDLEWARE, above any middleware that modifies the response content:

MIDDLEWARE = [
    ...,
    "django.middleware.gzip.GZipMiddleware",
    ...,
]

…and hey presto, instant site-wide compression! Browsers and HTTP clients have supported Gzip for decades, so practically all visitors will benefit.

Django's Gzip support dates back to 2005, before its 1.0 release. Since then, two newer compression algorithms have been developed and achieved wide support: Brotli and Zstandard. Both offer better compression ratios than Gzip, with Zstandard even matching Gzip on speed.

Python 3.14, released this Tuesday, includes Zstandard support in the standard library (release note). To help Django projects take advantage of this, I have created django-http-compression, a drop-in replacement for GZipMiddleware that supports all three algorithms (where available). Brotli support requires the brotli package, and Zstandard support requires Python 3.14+.
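
For the curious, the new standard-library module can be used directly on Python 3.14+ (a quick illustration of my own, not part of the package):

from compression import zstd  # Python 3.14+ only

data = b"Django " * 1000
compressed = zstd.compress(data)
assert zstd.decompress(compressed) == data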

Now you can support the best HTTP compression directly in Django through two settings changes:

INSTALLED_APPS = [
    ...,
    "django_http_compression",
    ...,
]

MIDDLEWARE = [
    ...,
    "django_http_compression.middleware.HttpCompressionMiddleware",
    # Remove GZipMiddleware or similar if present!
    ...,
]

The middleware selects the best coding supported by the client, respecting any quality values (q parameters) specified in the accept-encoding header.
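
One quick way to see which coding was picked is via Django's test client (my example, not from the package docs); note that very small responses are typically left uncompressed:

from django.test import Client

client = Client()
response = client.get("/", HTTP_ACCEPT_ENCODING="zstd, br;q=0.8, gzip;q=0.5")
print(response.headers.get("Content-Encoding"))  # e.g. "zstd" on Python 3.14+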

My intention with this package is to provide an evolution of GZipMiddleware that can provide a base for adding (at least) Zstandard support to Django itself. It's already helping, as during its development I found two bugs in GZipMiddleware, reported in Ticket #36655 and Ticket #36656.

Fin

Please try out django-http-compression in your projects today and let me know how it goes!

May your site run ever more smoothly,

-Adam

10 Oct 2025 4:00am GMT