27 Mar 2026
Django community aggregator: Community blog posts
Django News - Balancing the AI Flood in Django - Mar 27th 2026
News
Calling for research participants from Django, Laravel, Ruby on Rails, Next.js and Spring Boot communities
Former DSF President and researcher Anna Makarudze is seeking Django developers to share insights on dependency vulnerabilities and supply chain risks in open source.
Djangonaut Space News
Djangonaut Space Financial Report 2025
Djangonaut Space's 2025 report highlights a community-powered year of $2.2k in donations funding tools and conference access, while setting sights on sending contributors to even more events in 2026.
Djangonaut diaries, week 3 - Working on an ORM issue
A deep dive into Django's ManyToMany indexes reveals an unnecessary extra index, showing how databases already optimize with composite indexes and setting the stage for a cleaner ORM fix.
Wagtail CMS News
Wagtail Routable Pages and Layout Configuration
Build flexible Wagtail routable pages that use StreamField layouts to dynamically control how Django model data renders on detail views.
Updates to Django
Today, "Updates to Django" is presented by Raffaella from Djangonaut Space!
Last week we had 18 pull requests merged into Django by 15 different contributors - including 4 first-time contributors! Congratulations to Juho Hautala, Huwaiza, (James) Kanin Kearpimy and Praful Gulani for having their first commits merged into Django - welcome on board!
News in Django 6.1:
- Providing fail_silently=True, auth_user, or auth_password to mail sending functions (such as send_mail()) while also providing a connection now raises a TypeError.
- assertContains() and assertNotContains() can now be called multiple times on the same StreamingHttpResponse. Previously, they would consume the streaming response's content, causing subsequent calls to fail.
- Because quoted aliases are case-sensitive, raw SQL references to aliases mixing case, such as when using RawSQL, might have to be adjusted to also make use of quoting.
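The first change can be illustrated with a toy validator. This is not Django's actual implementation, just a sketch of the new rule: keyword arguments that would be used to build a connection conflict with an explicitly supplied one.

```python
def send_mail_sketch(subject, message, *, connection=None,
                     fail_silently=None, auth_user=None, auth_password=None):
    """Toy model of Django 6.1's new check (not Django's real code).

    None means "not provided"; any connection-building kwarg given
    alongside an explicit connection now raises a TypeError.
    """
    conflicting = {
        "fail_silently": fail_silently,
        "auth_user": auth_user,
        "auth_password": auth_password,
    }
    given = sorted(name for name, value in conflicting.items() if value is not None)
    if connection is not None and given:
        raise TypeError(
            f"{', '.join(given)} cannot be combined with 'connection'."
        )
    return f"would send {subject!r}"
```

Previously such calls were silently accepted even though the extra arguments were ignored; failing loudly makes the mistake visible.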
Django Newsletter
Django Fellow Reports
Fellow Report - Natalia
A significant portion of this week was dedicated to security work (yes, again). As usual, details here are intentionally kept at a high level, but the time went into triaging new reports, progressing in-flight likely confirmed issues, validating proposed fixes, and coordinating next steps with the team.
One additional challenge worth noting is the volume of near-duplicate reports; beyond triage, this often requires careful comparison across long submissions to identify what is actually new or meaningfully different.
Fellow Report - Jacob
Easy to miss in the release notes (as we only described the user-facing changes for edge cases), but last week we landed (with great joy) @charettes' defense-in-depth measure for the ORM that ensures user-provided aliases are always quoted.
In addition to the below, another steady week advancing pending security reports.
Sponsored Link 1
The deployment service for developers and teams.
Articles
Learning LLM Integration
A practical, from-scratch look at integrating LLMs into a Django app, highlighting why isolating the AI layer and writing precise prompts makes all the difference.
Give Django your time and money, not your tokens
The Django community wants to collaborate with you, not a facade of you.
Open Source Has a Bot Problem
The maintainer of awesome-mcp-servers came up with a solution, of sorts, to curating AI-generated PRs.
Why pylock.toml includes digital attestations
A Python project got hacked where malicious releases were directly uploaded to PyPI. I said on Mastodon that had the project used trusted publishing with digital attestations, then people using a pylock.toml file would have noticed something odd was going on thanks to the lock file including attestation data.
Rewriting a 20-year-old Python library
A thoughtful deep dive into rewriting a 20-year-old Python library, covering async design, API ergonomics, and how to modernize without breaking users.
Playground embedding, packages and more
The nanodjango playground has several new exciting features which transform what you can achieve with it - you can now manage packages and secrets, share scripts from the command line, and embed live Django code in your own site.
Human.json
A quick look at human.json, a lightweight protocol for sharing human-readable metadata, with a simple Django implementation and a healthy dose of skepticism about its long-term adoption.
Videos
PyCon US 2026 - Elaine Wong & Jon Banafato
A behind-the-scenes look at PyCon US 2026 with chair Elaine Wong and co-chair Jon Banafato, covering what's new, how to prepare, and tips to make the most of the biggest Python conference in North America.
Django Job Board
Solutions Architect - Python (Client-facing) at JetBrains
Django Newsletter
Django Forum
Discouraging "the voice from nowhere" (~LLMs) in documentation
Forum discussion on maintaining a human (not LLM) voice in Django's documentation.
Projects
kujov/django-tw
Zero-config Tailwind CSS v4 for Django.
VojtechPetru/django-live-translations
In-browser translation editing for Django superusers.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
27 Mar 2026 3:00pm GMT
25 Mar 2026
Django community aggregator: Community blog posts
LLMs for Open Source maintenance: a cautious case
LLMs for Open Source maintenance: a cautious case
When ChatGPT appeared on the scene I was very annoyed at all the hype surrounding it. Since I'm working in the fast-moving and low-margin business of communication and campaigning agencies, I'm surrounded by people eager to jump on the hype train whenever a tool promises to lessen the workload and take stuff off everyone's plate.
These discussions, coupled with the fact that the training of these tools required unfathomable amounts of stealing, made me very reluctant to try them out. I'm using the word stealing here on purpose, since that's exactly the crime Aaron Swartz was accused of by the U.S. Attorney's Office for the District of Massachusetts. It's frustrating that some people can get away with the same crime when it is so much bigger. For example, OpenAI and Anthropic downloaded much more data than Aaron ever did.
A somewhat related thing happened with the too-big-to-fail banks: There, the people at the top were even compensated with golden parachutes at the end. LLM companies seem to be above accountability too.
Despite all this, I have slowly started integrating these tools into my workflows. I don't remember the exact point in time, but since some time in 2025 my opinions on their utility have started to change. At the beginning, I always removed the attribution and took great care to write and rewrite the code myself, only using the LLMs for inspiration and maybe to generate integration tests. More and more I have to admit that they are useful, especially in time-constrained projects with a clear focus and purpose.
Last month I fixed and/or closed all open issues in the django-tree-queries repository with the help of Claude Code. Is that a good thing? It could be argued I should have done the work myself. But I wouldn't have - I have other things I want to do with my time. I don't want to (always) work on Open Source software in the evening. I definitely also have leaned heavily on LLMs when working on django-prose-editor.
Is faster better?
We can produce more code, more features and close tickets faster than before. In my experience the speed up isn't as big as some people may want us to believe, but it's there. And contrary to what people in my LinkedIn feed say, that's not an obviously good thing. Is it a race to the bottom where we drown in LLM-generated slop in quantities impossible to maintain? It doesn't feel like that - but it's a race that could go both ways. Throwaway code can be thrown away though, and well tested code does what the tests say, which is good enough according to my rules for releasing open source software.
Speaking as someone who has put more into the training set than they've taken out so far, I don't feel all that bad using the tools. Coding agents can already be run locally with reasonable hardware requirements, at least during inference, which is where the ongoing cost sits. Maybe using them is still rationalization. But contribution and profit needing to stay in some rough balance feels like the right frame. Total abstinence isn't the only ethical choice we have.
Community tensions
What makes me less comfortable is how communities are reacting. There are real concerns within the Django world, and not just the practical one of overworked maintainers wading through hastily generated patches that don't actually fix anything. The deeper worry is about the communal nature of contribution: that working on Django is supposed to be a learning experience, a way into the community, and that using an LLM as a vehicle rather than a tool hollows out that process. Reviewers end up interacting with what is essentially a facade, unable to tell whether anyone actually understood the problem. That's a real concern and I don't want to dismiss it.
But it maps onto a different situation from what I've been describing. Using Claude Code to close issues in projects I maintain and understand is not the same as using it to paper over gaps in comprehension on a ticket in someone else's project. Whether LLM-assisted contributions to Django itself are appropriate is a difficult question; whether it's appropriate to use them when maintaining your own software less so.
There's also a harder tension around quality. Django's conservatism has real value: rigorous review, minimal magic, a coherent philosophy. The ORM and template system don't need to reinvent themselves; they work well and are still evolving while staying rock-solid for all my use cases. And reading the release notes always brings me joy. But it could be more exciting more often. Quality isn't a strictly positive thing. Everything has costs. It's not great if the price of that bar is that legitimate bugs sit open for years because nobody has a few evenings to spend on them. It happened with django-tree-queries before I went through it with Claude Code. I think the bar for contributing to Django is too high. I would value a little more motion and a little less stability, even as someone running dozens of Django websites and apps.
Then there's the pile-on dynamic that plays out on Mastodon and GitHub. When the Harfbuzz and chardet maintainers disclosed LLM usage, the reaction from some corners was something to behold. People expressing what amounted to personal grievance over tooling choices in projects they may not even use. There's a particular kind of entitlement in telling a maintainer - who is keeping software alive, possibly even in their spare time - that the way they choose to do that work is an affront. Open source is a gift, whether paid or not, and nobody has to accept it, but disclosing your tooling isn't an invitation for complaints. The ethical concerns about training data, resource use and other negative externalities are legitimate and worth raising. Performative outrage directed at individual maintainers is not the same thing.
I don't have an easy conclusion. The tools are useful, the ethics are murky, and communities are still figuring out how to respond. A cautious, honest use of them feels better to me than the alternatives.
25 Mar 2026 5:00pm GMT
Building modern Django apps with Alpine AJAX, revisited
About nine months ago I wrote an article about my quest to simplify my web development stack. How I went from SvelteKit on the frontend and Django on the backend, to an all-Django stack for a new project, using Alpine AJAX to enable partial page updates.
I've now been using this new stack for a while, and my approach, as well as my opinion, has changed significantly. Let's get into what works, what doesn't, and where I ended up.
A quick recap
Alpine AJAX is a lightweight alternative to htmx, which you can use to enhance server-side rendered HTML with a few attributes, turning <a> and <form> tags into AJAX-powered versions. No more full page refreshes when you submit a form.
The key mechanic: when a form has x-target="comments", Alpine AJAX submits the form via AJAX, finds the element with that ID in the response, and swaps it into the page. The server returns HTML, not JSON.
In the original article I used django-template-partials (since merged into Django itself) to mark sections of a template as named partials using {% partialdef %}. Combined with a custom AlpineTemplateResponse the view could automatically return just the targeted partial when the request came from Alpine AJAX.
Where I began: template partials
Let's say you have an article page with the article body parsed from Markdown, a like button, and a comment section. The template looks something like this:
article.html

{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article_html|safe }}

    {% partialdef like_form inline %}
      <form method="post" id="like_form" x-target="like_form">
        {% csrf_token %}
        <button type="submit" name="toggle-like">
          {% if article.is_liked %}Unlike{% else %}Like{% endif %}
        </button>
      </form>
    {% endpartialdef %}

    {% partialdef comments inline %}
      <div id="comments">
        {% for comment in article.comments.all %}
          <div>{{ comment.user }}: {{ comment.text }}</div>
        {% endfor %}
        <form method="post" x-target="comments">
          {% csrf_token %}
          {{ comment_form }}
          <button type="submit" name="add-comment">Submit</button>
        </form>
      </div>
    {% endpartialdef %}
  </article>
{% endblock %}
Every form action POSTs to the same article view, which handles all the actions in one big post method:
views.py

class ArticleView(View):
    def get_context(self, request, pk):
        article = get_object_or_404(
            Article.objects.prefetch_related("comments")
            .annotate_is_liked(request.user),
            pk=pk,
        )
        return {
            "article": article,
            "article_html": markdown(article.body),
            "comment_form": CommentForm(),
        }

    def post(self, request, pk):
        context = self.get_context(request, pk)
        article = context["article"]
        if "toggle-like" in request.POST:
            if article.is_liked:
                article.unlike(request.user)
                article.is_liked = False
            else:
                article.like(request.user)
                article.is_liked = True
            return AlpineTemplateResponse(request, "article.html", context)
        if "add-comment" in request.POST:
            form = CommentForm(request.POST)
            if form.is_valid():
                Comment.objects.create(article=article, user=request.user, ...)
            return AlpineTemplateResponse(request, "article.html", context)
        return redirect(article)

    def get(self, request, pk):
        context = self.get_context(request, pk)
        return AlpineTemplateResponse(request, "article.html", context)
The AlpineTemplateResponse from the original article takes care of returning just the targeted partial when the request comes from Alpine AJAX. It works. I thought I was being smart to prevent template duplication this way, but there are two problems:
- The view does too much work. Every POST action calls get_context, which fetches everything: the article, the parsed Markdown body, the comments, the like state, the comment form. When the user clicks "Like", we do all this work we'll never use in the partial template. The template partial means the response is small, but the server-side work is exactly the same as rendering the full page.
- The template is a mess. Those {% partialdef %} blocks scattered throughout the template make it noisy and hard to read. In a small example it's fine, but in a real template with 200+ lines, it gets ugly fast.
When doubt set in: switching to Jinja2
To be honest though, the real killer of my motivation while working on this project has been the Django Template Language. I'm sorry, but I just hate it. I have since 2009, and I still do. The syntax is bad enough, but then you have to constantly fight its limitations. The fact I can't simply call a function is so incredibly annoying, and is causing way more boilerplate with tons of custom template tags and filters.
So, switch to Jinja2, right? Except that template partials aren't supported in combination with Jinja2. No more {% partialdef %}. Which means returning full page responses for AJAX requests, which isn't exactly ideal.
I did it anyway. I ripped out all the {% partialdef %} tags, migrated my templates to Jinja2, and my views just returned the full template for AJAX requests. Alpine AJAX is smart enough to extract the elements it needs by their IDs, and throws away the rest.
This was simpler and I was much happier writing Jinja2 templates. But the wastefulness got worse. Before, the server at least returned a small response. Now it rendered the entire page and sent all of it over the wire, just for the browser to use a tiny piece of it.
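Conceptually, what the browser does with that full-page response is parse it and keep only the subtree whose id matches the x-target. This is not how Alpine AJAX is actually implemented (it runs in the browser on a real DOM), just the idea sketched in stdlib Python:

```python
from html.parser import HTMLParser

class TargetExtractor(HTMLParser):
    """Keep only the subtree of the element with a given id.

    A sketch of Alpine AJAX's client-side swap: the whole page comes
    over the wire, but only one fragment is used. Void elements like
    <br> are not handled; this is illustration, not production code.
    """
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0   # >0 while inside the target element
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1
        if self.depth:
            attr_str = "".join(f' {k}="{v}"' for k, v in attrs)
            self.parts.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if self.depth:
            self.parts.append(f"</{tag}>")
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)

def extract_fragment(html, target_id):
    parser = TargetExtractor(target_id)
    parser.feed(html)
    return "".join(parser.parts)
```

Everything outside the target element is parsed and thrown away, which is exactly the wastefulness described above: the server rendered it, the network carried it, and the client discards it.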
It was at this moment that I seriously thought about throwing the entire frontend away and rebuilding it in SvelteKit, with Django REST Framework returning JSON responses. But that seemed like a pretty big waste of effort, so instead I took a deep breath and thought about what I wanted:
- Jinja2 templates. Non-negotiable.
- Small, fast AJAX responses. No rendering the full page for a like toggle.
- No template duplication between the full page and the AJAX response.
- Simple views that only do the work they need to do.
Template partials gave me #2 and #3, but not #1 or #4. Switching to Jinja2 and returning the full template for AJAX requests gave me #1 and #3, but not #2 or #4. I needed a different approach.
Where I ended up: separate views with template includes
The answer turned out to be straightforward, and the one I initially discarded as "too much boilerplate": instead of one monolithic view handling all POST actions, split each action into its own view with its own URL. And instead of {% partialdef %}, use plain {% include %} tags to extract reusable template fragments.
Let me show you. Here's the simplified article template:
article.html

{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article.body }}

    {% include "articles/_like_form.html" %}
    {% include "articles/_comments.html" %}
  </article>
{% endblock %}
Clean and readable. Each include is a self-contained fragment. And here's the like form:
_like_form.html

<form method="post"
      action="{{ url('toggle-like', args=[article.id]) }}"
      id="like_form"
      x-target="like_form">
  {{ csrf_input }}
  {% if article.is_liked %}
    <button type="submit">Unlike</button>
  {% else %}
    <button type="submit">Like</button>
  {% endif %}
</form>
And finally, the view:
views.py

class ToggleLikeView(LoginRequiredMixin, View):
    def post(self, request, pk):
        article = get_object_or_404(
            Article.objects.annotate_is_liked(request.user),
            pk=pk,
        )
        if article.is_liked:
            article.unlike(request.user)
            article.is_liked = False
            article.like_count -= 1
        else:
            article.like(request.user)
            article.is_liked = True
            article.like_count += 1
        if is_alpine(request):
            return TemplateResponse(
                request,
                "articles/_like_form.html",
                {"article": article},
            )
        # For non-Alpine requests, we just redirect back
        return redirect(article)
No comment queries. No form building. No Markdown parsing. Just the like state.
The is_alpine check provides a redirect fallback for non-JavaScript POST requests, keeping things progressive. And the ArticleView itself becomes GET-only. No more branching on POST keys. No get_context method that fetches everything for every action. Each view does one thing.
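The article doesn't show the is_alpine helper itself. A minimal version might key off a request header; the header name used here is an assumption (check the Alpine AJAX docs for what your version actually sends):

```python
# Assumed header name, not confirmed by the article or the library docs;
# substitute whatever marker your Alpine AJAX version attaches to its fetches.
ALPINE_HEADER = "X-Alpine-Request"

def is_alpine(request):
    """True when the request came from Alpine AJAX rather than a regular
    browser navigation, so the view can return just a fragment."""
    return request.headers.get(ALPINE_HEADER) == "true"
```

Centralizing the check in one helper means the fallback behavior can be changed in a single place if the library's request marker ever changes.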
The trade-offs
More templates. For the article page, I went from one template to several: the include fragments (_like_form.html, _comments.html) that are shared between the full page and the AJAX responses. When an action needs to update multiple elements on the page, you also end up with small response templates that combine the right includes. For example, if submitting a comment should update both the comment list and a comment count elsewhere on the page:
_add_comment_response.html

{% include "articles/_comments.html" %}
{% include "articles/_engagement_counts.html" %}
Trivial, but still a file you have to create and name.
More views and URL routes. Each action gets its own view class and its own path() entry. For a page with likes, comments, and subscriptions, that's three or four extra views.
But here's what I got in return:
Actual performance improvement. Not just smaller responses, but less work on the server. Each view only queries what it needs.
Jinja2. I'm using Jinja2 instead of the Django Template Language. I can call functions, I have proper expressions, and I don't need custom template tags for basic things. This alone was worth the switch.
Readable templates. The main article.html is short and shows the page structure at a glance. Each fragment is self-contained. No {% partialdef %} blocks scattered everywhere.
Simple views. Each view does exactly one thing. Easy to understand, easy to test, easy to optimize.
Conclusion
I went through three stages: template partials with Django Template Language, full-page responses with Jinja2, and finally separated views with template includes. Each step solved a real problem with the previous approach.
The pattern I've landed on requires more files and views than I'd like, but each is simple and does one thing.
My overall feelings on Django + Alpine AJAX have also changed. I still believe there are benefits to using a simplified tech stack and using hypermedia as the engine of state. Just return HTML instead of returning JSON to a JavaScript framework which then has to turn it into HTML. Conceptually it just makes sense to me.
But the dream was to build a plain old Django application using simple views and simple templates, using old-fashioned MPA server-rendered pages. Sprinkle in a few Alpine AJAX attributes and magically your site gets SPA-like usability. And it simply hasn't played out that way for me. Yes, you could do that, if you're fine with the wastefulness of returning full pages as a response to AJAX requests. But when you want to do it better than that, you end up with more boilerplate to make it possible to return small bits of HTML.
And this isn't really about Alpine AJAX specifically; htmx would lead to the exact same place. The fundamental tension is in the HTML-over-the-wire approach itself: the server has to know which fragments of HTML to return, and that means structuring your views and templates around it. You trade the complexity of a JavaScript frontend for a different kind of complexity on the server.
Progressive enhancement adds to that complexity. Every view needs an is_alpine check with a redirect fallback, every form needs to work both as a regular POST and as an AJAX submit. If I dropped progressive enhancement and just required JavaScript, those redirect fallbacks and the branching that comes with them would disappear. The views would be simpler. But I think progressive enhancement is important enough to keep in place.
Would I use Alpine AJAX (or htmx) again? Honestly: probably not. I have a lot more fun when building frontends with SvelteKit. Building composable and reusable UI components is so much more natural there, and the performance is simply better (once the initial JS bundle has been downloaded and parsed). But am I going to throw away my current project's code and redo it all? No, I am not. Django with Alpine AJAX is a nice change of scenery, it's a nice playground I don't usually get to play in. I think I ended up with a good compromise, and hey: I still don't have to build and maintain a separate API, API docs, and frontend.
25 Mar 2026 3:16pm GMT
23 Mar 2026
Django community aggregator: Community blog posts
Built with Django – Weekly Roundup (Mar 16–Mar 23, 2026)
Hey, Happy Monday!
Why are you getting this: You signed up to receive this newsletter on Built with Django. I promised to send you the latest projects and jobs on the site as well as any other interesting Django content I encountered during the month. If you don't want to receive this newsletter, feel free to unsubscribe anytime.
Sponsor
This issue is sponsored by TuxSEO - your AI content team on auto-pilot.
- Plan and ship SEO content faster
- Generate practical, publish-ready drafts
- Keep your content pipeline moving every week
Projects
- Reckot - Speak and we make it: event management, reinvented.
- Your Cloud Hub - YourCloudHub.ai is a technology and digital solutions company offering IT staffing and outsourcing services along with software development, website development, and digital marketing.
From the Community
- Django Apps vs Projects Explained: A Complete Production Guide - DEV Community
- Building a Seamless JWT Onboarding Flow with React Router v7 and Django - DEV Community
- How to Show a Waitlist Until Your Wagtail Site Is Ready
Support
You can support this project by using one of the affiliate links below. These are always going to be projects I use and love! No "Bluehost" crap here!
- Buttondown - Email newsletter tool I use to send you this newsletter.
- Readwise - Best reading software company out there. If you want to up your e-reading game, this is definitely for you! It just so happens that I work for Readwise. Best company out there!
- Hetzner - IMHO the best place to buy a VPS or a server for your projects. I'll be doing a tutorial on how to use this in the future.
- SaaS Pegasus is one of the best (if not the best) ways to quickstart your Django Project. If you have a business idea but don't want to set up all the boring stuff (Auth, Payments, Workers, etc.) this is for you!
23 Mar 2026 6:00pm GMT
21 Mar 2026
Django community aggregator: Community blog posts
Human.json
I have seen more and more people talk about human.json lately and I think it is a pretty neat idea. From what I can tell it checks all the boxes I would expect from a protocol like this.
The fact that it relies on browser extensions right now makes sense, but might become a limiting factor in the future, unless the number of extensions grows beyond the two easy ones and support comes to mobile as well. I am not sure this will be going anywhere beyond a few enthusiastic people, but you never know.
Implementing the protocol was not much work, which is expected considering it only consists of two required values and an optional list of two more values. If you want to add it to your Django based site, I packaged everything up and you can find it on PyPI.
Should you use the package? Eh, that is not an easy question. From a supply chain perspective I would say "no". It is only a few lines of code. But you never know how the protocol will evolve, so things might look more complicated in a month. I will do my best to keep up with the protocol and not ship crypto miners.
I am still not a fan of Python packaging, but I have to admit uv makes it kind of bearable, even if it is still not without its little gotchas.
21 Mar 2026 5:05pm GMT
Wagtail Routable Pages and Layout Configuration
If you are familiar with Wagtail CMS for Django, you know that you can create Wagtail pages and control their content and layout with blocks inside of stream fields. But what if you have entries coming from normal Django models through a routable page? In this article, I will explore how you can control the dynamic layout of a detail view in a routable page.
Routable pages in Wagtail are dynamic pages in your CMS page tree that can have their own URL subpaths and views. You can use them for filtered list and detail views, multi-step forms, multiple formats for the same data, etc. Here I will show you a routable ArticleIndexPage with list and detail views for Article instances, rendering the detail views based on the block layout in a detail_layout stream field.
1. Project Setup
Create a Wagtail project myproject and an articles app:
pip install wagtail
wagtail start myproject
cd myproject
python manage.py startapp articles
Add to INSTALLED_APPS in your Django project settings:
INSTALLED_APPS = [
...
"wagtail.contrib.routable_page", # required for RoutablePage
"myproject.apps.articles",
]
2. File Structure
The articles app:
myproject/apps/articles/
├── __init__.py
├── apps.py
├── models.py   # Article, Category, ArticleIndexPage
├── blocks.py   # All StreamField block definitions
└── admin.py    # Register Article and Category in Django admin
The articles templates:
myproject/templates/articles/
├── article_list.html     # List view
├── article_detail.html   # Detail view
└── blocks/
    ├── cover_image_block.html
    ├── description_block.html
    └── related_articles_block.html
3. Models
myproject/apps/articles/models.py
Create the Category and Article Django models, and the ArticleIndexPage routable Wagtail page with article list and detail views:
from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.db import models
from django.shortcuts import get_object_or_404
from django.utils.translation import gettext_lazy as _
from wagtail.admin.panels import FieldPanel, ObjectList, TabbedInterface
from wagtail.contrib.routable_page.models import RoutablePageMixin, path
from wagtail.fields import StreamField
from wagtail.models import Page
from .blocks import article_detail_layout_blocks
class Category(models.Model):
    name = models.CharField(max_length=100, verbose_name=_("name"))
    slug = models.SlugField(unique=True, verbose_name=_("slug"))

    class Meta:
        verbose_name = _("category")
        verbose_name_plural = _("categories")

    def __str__(self):
        return self.name


class Article(models.Model):
    title = models.CharField(max_length=255, verbose_name=_("title"))
    slug = models.SlugField(unique=True, verbose_name=_("slug"))
    category = models.ForeignKey(
        Category,
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="articles",
        verbose_name=_("category"),
    )
    cover_image = models.ForeignKey(
        "wagtailimages.Image",
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="+",
        verbose_name=_("cover image"),
    )
    description = models.TextField(blank=True, verbose_name=_("description"))
    created_at = models.DateTimeField(auto_now_add=True, verbose_name=_("created at"))

    class Meta:
        verbose_name = _("article")
        verbose_name_plural = _("articles")

    def __str__(self):
        return self.title
class ArticleIndexPage(RoutablePageMixin, Page):
    """
    A single Wagtail page that owns:
    - /articles/ → paginated list of all Articles
    - /articles/<slug>/ → detail view for one Article

    The StreamField is edited once in the Wagtail admin and
    defines the layout for every detail view.
    """

    articles_per_page = models.IntegerField(default=10, verbose_name=_("articles per page"))
    detail_layout = StreamField(
        article_detail_layout_blocks(),
        blank=True,
        use_json_field=True,
        verbose_name=_("detail layout"),
        help_text=_(
            "Configure the layout for all article detail pages. "
            "Add, remove, and reorder blocks to change what appears "
            "on every article detail view."
        ),
    )

    # TabbedInterface gives List View and Detail View their own tabs.
    # promote_panels and settings_panels must be added explicitly here
    # because edit_handler takes full ownership of the admin UI structure.
    edit_handler = TabbedInterface([
        ObjectList(Page.content_panels + [FieldPanel("articles_per_page")], heading=_("List View")),
        ObjectList([FieldPanel("detail_layout")], heading=_("Detail View")),
        ObjectList(Page.promote_panels, heading=_("SEO / Promote")),
        ObjectList(Page.settings_panels, heading=_("Settings")),
    ])

    class Meta:
        verbose_name = _("article index page")
        verbose_name_plural = _("article index pages")

    @path("")
    def article_list(self, request):
        all_articles = Article.objects.select_related("category", "cover_image").order_by("-created_at")
        paginator = Paginator(all_articles, self.articles_per_page)
        page_number = request.GET.get("page")
        try:
            articles = paginator.page(page_number)
        except PageNotAnInteger:
            articles = paginator.page(1)
        except EmptyPage:
            articles = paginator.page(paginator.num_pages)
        return self.render(
            request,
            context_overrides={"articles": articles, "paginator": paginator},
            template="articles/article_list.html",
        )

    @path("<slug:article_slug>/")
    def article_detail(self, request, article_slug):
        article = get_object_or_404(
            Article.objects.select_related("category", "cover_image"),
            slug=article_slug,
        )
        return self.render(
            request,
            context_overrides={"article": article},
            template="articles/article_detail.html",
        )
4. StreamField Blocks
myproject/apps/articles/blocks.py
Create Wagtail StreamField blocks for the cover image, the description, and the related articles of a given article. Each block carries settings that control how its content is presented.
from django.utils.translation import gettext_lazy as _
from wagtail import blocks

class CoverImageBlock(blocks.StructBlock):
    aspect_ratio = blocks.ChoiceBlock(
        choices=[
            ("16-9", _("16:9 Widescreen")),
            ("4-3", _("4:3 Standard")),
            ("1-1", _("1:1 Square")),
            ("3-1", _("3:1 Banner")),
        ],
        default="16-9",
        label=_("Aspect ratio"),
        help_text=_("Controls the cropping of the cover image."),
    )

    class Meta:
        template = "articles/blocks/cover_image_block.html"
        icon = "image"
        label = _("Cover Image")

class DescriptionBlock(blocks.StructBlock):
    max_lines = blocks.IntegerBlock(
        min_value=0,
        default=0,
        label=_("Maximum lines"),
        help_text=_("Clamp the description to this many lines. Set to 0 to show all."),
        required=False,
    )

    class Meta:
        template = "articles/blocks/description_block.html"
        icon = "pilcrow"
        label = _("Description")

class RelatedArticlesBlock(blocks.StructBlock):
    sort_order = blocks.ChoiceBlock(
        choices=[
            ("newest", _("Newest first")),
            ("oldest", _("Oldest first")),
            ("title_asc", _("Title A → Z")),
            ("title_desc", _("Title Z → A")),
        ],
        default="newest",
        label=_("Sort order"),
        help_text=_("Order in which related articles are listed."),
    )

    def get_context(self, value, parent_context=None):
        context = super().get_context(value, parent_context=parent_context)
        article = (parent_context or {}).get("article")
        if not article or not article.category_id:
            context["related_articles"] = []
            return context

        # Imported here to avoid a circular import with models.py
        from .models import Article

        sort_map = {
            "newest": "-created_at",
            "oldest": "created_at",
            "title_asc": "title",
            "title_desc": "-title",
        }
        context["related_articles"] = (
            Article.objects.select_related("category", "cover_image")
            .filter(category=article.category)
            .exclude(pk=article.pk)
            .order_by(sort_map.get(value["sort_order"], "-created_at"))[:3]
        )
        return context

    class Meta:
        template = "articles/blocks/related_articles_block.html"
        icon = "list-ul"
        label = _("Related Articles")

def article_detail_layout_blocks():
    """
    Returns the list of (name, block) tuples used in ArticleIndexPage.detail_layout.
    Defined as a function so models.py can import it without circular issues.
    """
    return [
        ("cover_image", CoverImageBlock()),
        ("description", DescriptionBlock()),
        ("related_articles", RelatedArticlesBlock()),
    ]
The RelatedArticlesBlock here also customizes its template context: it passes a related_articles variable containing up to three other articles from the same category, sorted by the sort order defined in the block.
5. Templates
articles/article_list.html
This will be the template for the paginated article list. Later you could augment it with a search form and filters.
{% extends "base.html" %}
{% load wagtailcore_tags wagtailimages_tags i18n wagtailroutablepage_tags %}

{% block content %}
<main class="article-index">
    <h1>{{ page.title }}</h1>
    <ul class="article-list">
        {% for article in articles %}
        <li class="article-card">
            {% if article.cover_image %}
            {% image article.cover_image width-400 as img %}
            <img src="{{ img.url }}" alt="{{ article.title }}">
            {% endif %}
            <h2>
                <a href="{% routablepageurl page "article_detail" article.slug %}">{{ article.title }}</a>
            </h2>
            {% if article.category %}<span class="badge">{{ article.category.name }}</span>{% endif %}
            <p>{{ article.description|truncatewords:30 }}</p>
        </li>
        {% empty %}
        <li>{% trans "No articles yet." %}</li>
        {% endfor %}
    </ul>
    {% if articles.has_other_pages %}
    <nav class="pagination" aria-label="{% trans 'Article pagination' %}">
        {% if articles.has_previous %}
        <a href="?page={{ articles.previous_page_number }}">{% trans "← Previous" %}</a>
        {% endif %}
        <span>{% blocktrans with num=articles.number total=articles.paginator.num_pages %}Page {{ num }} of {{ total }}{% endblocktrans %}</span>
        {% if articles.has_next %}
        <a href="?page={{ articles.next_page_number }}">{% trans "Next →" %}</a>
        {% endif %}
    </nav>
    {% endif %}
</main>
{% endblock %}
articles/article_detail.html
The detail page uses {% include_block page.detail_layout with article=article page=page %} to pass the article into the context of each block:
{% extends "base.html" %}
{% load i18n wagtailcore_tags wagtailroutablepage_tags %}

{% block content %}
<article class="article-detail">
    <header>
        <h1>{{ article.title }}</h1>
        {% if article.category %}<span class="badge">{{ article.category.name }}</span>{% endif %}
    </header>
    {% include_block page.detail_layout with article=article page=page %}
    <p>
        <a href="{% routablepageurl page "article_list" %}">{% trans "← Back to all articles" %}</a>
    </p>
</article>
{% endblock %}
articles/blocks/cover_image_block.html
The cover image block shows the article's cover image, cropped to the aspect ratio set in the block:
{% load wagtailimages_tags %}
{% if article.cover_image %}
<div class="cover-image cover-image--{{ value.aspect_ratio }}">
    {% image article.cover_image width-1200 as img %}
    <img src="{{ img.url }}" alt="{{ article.title }}">
</div>
{% endif %}
articles/blocks/description_block.html
The description block clamps the article description to the maximum number of lines set in the block, hiding the overflow:
<section class="article-description">
    <p{% if value.max_lines > 0 %} class="line-clamp" style="-webkit-line-clamp: {{ value.max_lines }};"{% endif %}>
        {{ article.description }}
    </p>
</section>
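For the clamp to actually take effect, the line-clamp class used in the template needs the standard -webkit-box companion properties, which the article doesn't show. A minimal sketch (class name taken from the template, everything else an assumption):

```css
/* Sketch: required companions for -webkit-line-clamp to clip text. */
.line-clamp {
    display: -webkit-box;
    -webkit-box-orient: vertical;
    overflow: hidden;
}
```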
articles/blocks/related_articles_block.html
The related articles block lists the related articles provided by the block's extra context:
{% load i18n wagtailimages_tags wagtailroutablepage_tags %}
{% if related_articles %}
<section class="related-articles">
    <h2>{% trans "Related Articles" %}</h2>
    <ul class="related-articles__list">
        {% for rel in related_articles %}
        <li class="related-card">
            {% if rel.cover_image %}
            {% image rel.cover_image width-400 as img %}
            <img src="{{ img.url }}" alt="{{ rel.title }}">
            {% endif %}
            <div class="related-card__body">
                {% if rel.category %}<span class="badge">{{ rel.category.name }}</span>{% endif %}
                <h3>
                    <a href="{% routablepageurl page "article_detail" rel.slug %}">{{ rel.title }}</a>
                </h3>
                <p>{{ rel.description|truncatewords:20 }}</p>
            </div>
        </li>
        {% endfor %}
    </ul>
</section>
{% endif %}
6. Django Admin Registration
articles/admin.py
Let's not forget to register admin views for the categories and articles so that we can add some data there:
from django.contrib import admin

from .models import Article, Category

@admin.register(Category)
class CategoryAdmin(admin.ModelAdmin):
    list_display = ("name", "slug")
    prepopulated_fields = {"slug": ("name",)}

@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ("title", "category", "created_at")
    list_filter = ("category",)
    search_fields = ("title", "description")
    prepopulated_fields = {"slug": ("title",)}
7. Migrations and Initial Data
python manage.py makemigrations articles
python manage.py migrate
python manage.py createsuperuser
python manage.py runserver
8. Wagtail Admin Setup
1. Open http://localhost:8000/cms/ and log in.
2. In the Pages explorer, create an Article Index Page as a child of the root page. Set the Slug to articles.
3. On the List View tab, set Articles per page (e.g. 24).
4. On the Detail View tab, open the Detail Layout StreamField and add blocks in your preferred order:
   - Cover Image - choose an aspect ratio.
   - Description - optionally set a maximum line count to clamp long descriptions.
   - Related Articles - choose the sort order for the three related articles shown.
5. Publish the page.
6. In the Django admin (/django-admin/), create some Categories and Articles with cover images and descriptions.
7. Visit http://localhost:8000/articles/ for the paginated list.
8. Click any article to see the detail view rendered using the StreamField layout you configured in step 4.
Final words
Using stream fields we can render not only editorial content, for example, images or rich-text descriptions, but also dynamic content based on values from other models and/or the context of the given template.
The approach illustrated in this article allows us to create Wagtail pages where content editors have freedom to adjust the layouts of the pages or insert blocks, such as ads or info texts, into specific places based on real-time events.
21 Mar 2026 5:00pm GMT
20 Mar 2026
Django community aggregator: Community blog posts
How to Show a Waitlist Until Your Wagtail Site Is Ready
This year, I want to bring my centralized gamified donation platform www.make-impact.org to life (at least technically). Earlier I had the version I was developing separate from the waiting list, but I decided to merge them and have a switch between the waitlist and an early preview.
This allows me to have no data duplication, the possibility to create user accounts immediately, and saves hosting and maintenance costs.
This guide walks through a pattern that lets you ship a temporary waitlist page while your Wagtail site is still being built, with the ability to show your progress to chosen people. If you are building a Software as a Service (SaaS) or a web platform with Django, this article is for you.
The Concept
A custom start page view will check for a specific cookie value. If it is unset, the visitor will be redirected to a waitlist form at /waitlist/. If it is set, the visitor will be served the Wagtail home page.
All views under development will have a decorator that checks the cookie value and redirects to the start page if it is unset.
There will be a special view at /preview-access/ with a passphrase form that allows the visitor to gain preview access by setting the mentioned cookie. This view will also allow preview access to be deactivated.
These are the steps to implement this:
1. Generate and store two secrets
You will need two secret values, either set manually or generated with a cryptographically secure random generator (e.g. Python's secrets module):
- PREVIEW_ACCESS_PASSPHRASE - the human-readable passphrase typed into the form. Share this with the people who need site access.
- PREVIEW_ACCESS_TOKEN - the opaque random value stored in the cookie. Never exposed to users; only the server compares against it.
>>> import secrets
>>> print(secrets.token_urlsafe(16)) # passphrase
dI5nGNftZOBx8m-r0m6glg
>>> print(secrets.token_hex(32)) # cookie token
c1b7a76e3ad5cbfb1657fa4e9885a3c8baa6a5a869f49a136abd0e873a9be9ee
Add both to environment variables or a secrets file untracked by Git, and load them in the Django project settings:
# myproject/settings/_base.py
PREVIEW_ACCESS_PASSPHRASE = get_secret("PREVIEW_ACCESS_PASSPHRASE")
PREVIEW_ACCESS_TOKEN = get_secret("PREVIEW_ACCESS_TOKEN")
The get_secret() here is my custom function to retrieve a secret from the secrets source.
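For completeness, here is one minimal way such a helper could look (a sketch; the author's actual get_secret() isn't shown and may read from a secrets file rather than the environment):

```python
import os


def get_secret(name: str) -> str:
    """Read a secret from an environment variable, failing loudly if it is missing."""
    try:
        return os.environ[name]
    except KeyError:
        # Failing at startup is better than comparing against an empty token later.
        raise RuntimeError(f"Missing required secret: {name}") from None
```

Failing loudly matters here: if the token setting silently defaulted to an empty string, an empty cookie would grant access.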
2. Create the access-control decorator
Create myproject/apps/misc/decorators.py. Every protected view will import from here.
# myproject/apps/misc/decorators.py
from functools import wraps

from django.conf import settings
from django.shortcuts import redirect

def preview_access_required(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        if request.COOKIES.get("preview_access") == settings.PREVIEW_ACCESS_TOKEN:
            return view_func(request, *args, **kwargs)
        return redirect("misc:home_page")
    return wrapper
The decorator compares the cookie against the opaque unguessable token from settings, so unless the token value is known, a random attacker cannot gain access by setting the cookie manually in DevTools.
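One hardening tweak worth considering: Python's == on strings can short-circuit at the first differing character, which in principle leaks information through response timing. A framework-agnostic sketch of the same check using secrets.compare_digest (the token value and the has_preview_access helper are illustrative, not from the post):

```python
import secrets

# Illustrative token; in the decorator above this comes from settings.PREVIEW_ACCESS_TOKEN.
PREVIEW_ACCESS_TOKEN = "c1b7a76e3ad5cbfb1657fa4e9885a3c8baa6a5a869f49a136abd0e873a9be9ee"


def has_preview_access(cookie_value):
    # compare_digest runs in time independent of where the strings first differ,
    # so the comparison itself reveals nothing about the token.
    return secrets.compare_digest(cookie_value or "", PREVIEW_ACCESS_TOKEN)
```

For a high-entropy random token this is belt-and-braces rather than essential, but it costs nothing.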
3. Create the passphrase form
Create myproject/apps/misc/forms.py. The form will have a single required password field. Validation will reject any value that does not match the setting.
# myproject/apps/misc/forms.py
from django import forms
from django.conf import settings
from django.utils.translation import gettext_lazy as _

class PreviewAccessForm(forms.Form):
    passphrase = forms.CharField(
        label=_("Passphrase"),
        widget=forms.PasswordInput(
            attrs={"autocomplete": "current-password"}
        ),
        required=True,
    )

    def clean_passphrase(self):
        value = self.cleaned_data["passphrase"]
        if value != settings.PREVIEW_ACCESS_PASSPHRASE:
            raise forms.ValidationError(
                _("Incorrect passphrase.")
            )
        return value
4. Build the cookie toggle view
Point your browser to /preview-access/. When access is off it shows a passphrase form; when access is on it shows a disable button.
# myproject/apps/misc/views.py
from django.conf import settings
from django.shortcuts import redirect, render

from .forms import PreviewAccessForm

def preview_access(request):
    has_access = request.COOKIES.get("preview_access") == settings.PREVIEW_ACCESS_TOKEN
    if request.method == "POST":
        if has_access:
            response = redirect("misc:home_page")
            response.delete_cookie("preview_access")
            return response
        form = PreviewAccessForm(request.POST)
        if form.is_valid():
            response = redirect("misc:home_page")
            response.set_cookie(
                "preview_access",
                settings.PREVIEW_ACCESS_TOKEN,
                httponly=True,
                samesite="Strict",
            )
            return response
    else:
        form = PreviewAccessForm()
    return render(
        request,
        "preview_access/preview_access.html",
        {"has_access": has_access, "form": form}
    )
Key points:
- Disabling never requires the passphrase - the cookie is already proof of prior access.
- The cookie is set with httponly=True (not readable by JavaScript) and samesite="Strict" (not sent on cross-site requests).
- The cookie value is the opaque token, not "1", so it cannot be guessed.
The template renders the passphrase input only when not has_access, and shows field-level errors from the form if the passphrase is wrong.
5. Wrap the Wagtail catch-all with the decorator
Replace the default Wagtail catch-all route handler with a thin wrapper that enforces the same cookie check.
# myproject/apps/misc/views.py
from wagtail.views import serve as wagtail_serve

from myproject.apps.misc.decorators import preview_access_required

@preview_access_required
def serve_wagtail_page(request, path=""):
    return wagtail_serve(request, path)
Without this, a visitor who knows any Wagtail page URL could bypass the gate by typing it directly into the browser.
6. Build the proxy home page view
This view is the only entry point to the site. It decides what every visitor sees first.
# myproject/apps/misc/views.py
from django.conf import settings
from django.shortcuts import redirect

from wagtail.views import serve as wagtail_serve

def home_page(request):
    if request.COOKIES.get("preview_access") == settings.PREVIEW_ACCESS_TOKEN:
        # serve the Wagtail home page directly
        return wagtail_serve(request, "")
    # otherwise, redirect to the waiting list
    return redirect("waiting_list")
Key point: the waiting_list view must exist, and a Wagtail Site (with a root page) must be matched to the request domain, before wagtail_serve is called.
7. Wire up the URLs
Django project URL rules:
# myproject/urls.py
from django.conf.urls.i18n import i18n_patterns
from django.urls import re_path
from wagtail.coreutils import WAGTAIL_APPEND_SLASH

from myproject.apps.misc import views as misc_views

if WAGTAIL_APPEND_SLASH:
    wagtail_serve_pattern = r"^((?:[\w\-]+/)*)$"
else:
    wagtail_serve_pattern = r"^([\w\-/]*)$"

urlpatterns += i18n_patterns(
    # ... all your other app URLs above ...
    # Catch-all - must be last
    re_path(
        wagtail_serve_pattern,
        misc_views.serve_wagtail_page,
        name="wagtail_serve",
    ),
)
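As a quick sanity check, the append-slash pattern accepts any nesting of slug segments as long as the path ends in a slash. A sketch (is_wagtail_path is just an illustrative helper, not part of the project):

```python
import re

# The catch-all route used when WAGTAIL_APPEND_SLASH is True (the default):
# it matches zero or more "segment/" groups, anchored to the whole path.
APPEND_SLASH_PATTERN = re.compile(r"^((?:[\w\-]+/)*)$")


def is_wagtail_path(path):
    """Return True if this path would be handed to serve_wagtail_page."""
    return APPEND_SLASH_PATTERN.match(path) is not None
```

So "" (the home page), "articles/", and "articles/my-article/" all reach the gated Wagtail view, while "articles/my-article" (no trailing slash) falls through to a 404.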
The misc app URLs:
# myproject/apps/misc/urls.py
from django.urls import path

from . import views

app_name = "misc"

urlpatterns = [
    path("", views.home_page, name="home_page"),
    path("preview-access/", views.preview_access, name="preview_access"),
]
The waiting_list app URLs:
# myproject/apps/waiting_list/urls.py
from django.urls import path

from . import views

urlpatterns = [
    path("waitlist/", views.show_waiting_list_form, name="waiting_list"),
]
8. Protect every other app view
Import and apply @preview_access_required to every view that belongs to the real site. Class-based views can be wrapped at assignment time:
from myproject.apps.misc.decorators import preview_access_required

# Function-based view
@preview_access_required
def event_list(request):
    ...

# Class-based view
event_list = preview_access_required(
    EventListView.as_view()
)
Waiting-list views, API views, social authentication views, and static/legal pages (/imprint/, /privacy/, etc.) must not receive this decorator - they need to remain publicly accessible.
Final words
You get a lot of benefits from this setup. The waitlist measures demand for your website while you are still building. Invited test users can evaluate your progress at any time. While you are developing the website, you do not necessarily need multiple servers. Launching later is also easier - no hassle or delays with domain IP updates and SSL certificates.
20 Mar 2026 5:00pm GMT
Django News - Sunsetting Jazzband - Mar 20th 2026
News
Sunsetting Jazzband
After more than a decade maintaining 80+ Python projects, Jazzband is winding down as AI-generated spam and long-standing sustainability challenges make its open, shared-maintenance model no longer viable.
Astral to join OpenAI
Astral, creators of Ruff and uv, are joining OpenAI's Codex team to push the future of AI-powered Python development while continuing to support their open source tools.
Wagtail CMS News
Wagtail Security team no longer accepts GPG-encrypted emails
Wagtail's security team has dropped GPG-encrypted email support, citing zero real-world use and modern encryption making it unnecessary while simplifying their workflow.
Updates to Django
Today, "Updates to Django" is presented by Raffaella from Djangonaut Space!
Last week we had 18 pull requests merged into Django by 15 different contributors - including a first-time contributor! Congratulations to dcsid for having their first commits merged into Django - welcome on board!
The undocumented get_placeholder method of Field is deprecated in favor of the newly introduced get_placeholder_sql method, which has the same input signature but is expected to return a two-element tuple composed of an SQL format string and a tuple of associated parameters. This method should now expect to be provided expressions meant to be compiled via the provided compiler argument. (#36727)
Django Newsletter
Sponsored Link 1
The deployment service for developers and teams.
Articles
DjangoCon US Talks I'd Like to See 2026 Edition
A curated wishlist of timely, thought-provoking DjangoCon US 2026 talk ideas, from Python's future and deployment wins to Rust, LLMs, and real-world team productivity.
Defense in Depth: A Practical Guide to Python Supply Chain Security
A practical, defense-in-depth guide to securing Python's supply chain, covering everything from linting and dependency pinning to SBOMs, vulnerability scanning, and trusted publishing.
Python Unplugged on PyTV Recap
A behind-the-scenes post on this first-ever digital Python conference that featured a host of Django speakers.
django-security-label: A third-party package to anonymize data in your models
Define data masking rules directly on your Django models and let PostgreSQL enforce anonymization automatically, keeping sensitive data out of your app layer by design.
Djangonaut Diaries: Week 1, part 2 - Creating and debugging a Django project - DEV Community
A hands-on guide to spinning up a local Django project, generating realistic test data, and using VS Code's debugger to step into Django internals and understand how admin delete views work.
Typing Your Django Project in 2026
Typing Django in 2026 is still a tradeoff between slower, accurate mypy + django-stubs and faster tools that struggle with Django's dynamic magic, though native typing support may finally be on the horizon.
Python 3.15's JIT is now back on track
Python 3.15's once-struggling JIT is finally delivering real speedups, thanks to a scrappy, community-driven effort and a few surprisingly lucky design bets.
Thoughts on OpenAI acquiring Astral and uv/ruff/ty
Simon Willison provides some timely insights on the recent acquisition making waves in our community.
OpenAI Acquiring Astral: A 4th Option for Funding Open Source
Thoughts on the three traditional ways to fund open source, and the new fourth option (VC funding) currently making waves.
Events
How DjangoCon US Selects Talk Proposals
A behind-the-scenes look at how DjangoCon US turns anonymous proposals and community reviews into a balanced, inclusive conference lineup.
PyCon US 2026 Conference Schedule is live!
PyCon US 2026's Conference Schedule is live!
Podcasts
Django Chat #198: PyCon US 2026 - Elaine Wong & Jon Banafato
Elaine and Jon are the chair/co-chair respectively of PyCon US, the largest Python conference in North America, happening this May in Long Beach, CA. We discuss what to expect at the conference, new additions from last year, tips on where to stay, and generally how to maximize your PyCon experience.
Django Job Board
Two standout Python roles this week include a client-facing Solutions Architect position at JetBrains and an Infrastructure Engineer opening at the Python Software Foundation.
Solutions Architect - Python (Client-facing) at JetBrains
Infrastructure Engineer at Python Software Foundation
Django Newsletter
Django Forum
Improve free-threading performance - Django Internals
A CPython core developer is proposing small but impactful changes to help Django scale better under free-threaded Python, sparking early collaboration on tackling shared state, caching, and lock contention issues.
Projects
codingjoe/django-mail-auth
Django authentication via login URLs, no passwords required.
duriantaco/skylos
Yet another static analysis tool for Python codebases written in Python that detects dead code.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
20 Mar 2026 3:00pm GMT
19 Mar 2026
Django community aggregator: Community blog posts
OpenAI Acquiring Astral: A 4th Option for Funding Open Source
Thoughts on the recent acquisition and what it portends for open source software.
19 Mar 2026 10:56am GMT
18 Mar 2026
Django community aggregator: Community blog posts
PyCon US 2026 - Elaine Wong & Jon Banafato
Links
- PyCon US website
- Volunteer mailing list
- Elaine on LinkedIn and Jon's personal site
YouTube
Sponsor
This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.
See what's possible at sixfeetup.com.
18 Mar 2026 10:00pm GMT
Tombi, pre-commit, prek and uv.lock
In almost all my Python projects, I'm using pre-commit to handle/check formatting and linting. The advantage: pre-commit is the only tool you need to install. Pre-commit itself reads its config file and installs the formatters and linters you defined in there.
Here's a typical .pre-commit-config.yaml:
default_language_version:
  python: python3
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
        args: [--allow-multiple-documents]
      - id: check-toml
      - id: check-added-large-files
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.15.6
    hooks:
      # Run the linter.
      - id: ruff
        args: ["--fix"]
      # Run the formatter.
      - id: ruff-format
  - repo: https://github.com/tombi-toml/tombi-pre-commit
    rev: v0.9.6
    hooks:
      - id: tombi-format
        args: ["--offline"]
      - id: tombi-lint
        args: ["--offline"]
The "tombi" at the end might be a bit curious. There's already the built-in "check-toml" toml syntax checker, right? Well, tombi also does formatting and schema validation. And in a recent project, I handled configuration through toml files.
It was for a Django website where several geographical maps were shown, each with its own title, description, legend yes/no, etcetera. I made up a .toml configuration format so that a colleague could configure all those maps without needing to deal with the python code. I created a json schema as format specification (yes, json is funnily used for that purpose). With tombi, I could make sure the config files were valid.
Oh, and tombi has an LSP plugin, so my colleague got autocomplete and syntax help out of the box. Nice.
I'm also using uv a lot. That generates a uv.lock file, in toml format, with all the version pins. It is a toml file, but without the .toml extension, so pre-commit ignored it. Until suddenly it started complaining about the indentation. But only in a github action, not locally.
Note: the complaint about the indentation is probably correct, as there's an issue in the uv bugtracker about changing the indentation from 4 to 2 in the lockfile.
The weird thing for me was that I pin the versions of the plugins. So the behaviour locally and on github should be the same. Some observations:
- Running tombi from the commandline on uv.lock resulted in re-formatting to two spaces, whatever the tombi version.
- Pre-commit locally did not re-format the file, but pre-commit on the server did.
- I tried it with the new rust-based alternative to pre-commit, prek (see https://github.com/j178/prek), which did re-format uv.lock.
Some further debugging showed that pre-commit was actually skipping the uv.lock file. But apparently not on github. I did some searching in pre-commit's source code and tombi's pre-commit hook definition. The only relevant part there was types: [toml]. So somehow pre-commit has a definition of what a toml file is. But I couldn't find anything.
Until I spotted that pre-commit uses identify as the means to detect file types. (Looks like a handy library, btw!). And that project had a change a couple of weeks ago that identifies uv.lock as a toml file!
- My colleague updated his pre-commit installation and yes: uv.lock was getting re-formatted.
- So: github actions had a newer version than we had.
- Weird, as I just updated my python tool install this morning. Ah: I installed it with homebrew instead of uv tool, that's why it is still older.
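If you'd rather have tombi leave uv.lock alone no matter how identify classifies it, one option (an assumption about what you want, not something from the post) is an explicit exclude on the tombi hooks, since pre-commit matches exclude as a regex against the file path:

```yaml
  - repo: https://github.com/tombi-toml/tombi-pre-commit
    rev: v0.9.6
    hooks:
      - id: tombi-format
        args: ["--offline"]
        exclude: ^uv\.lock$
      - id: tombi-lint
        args: ["--offline"]
        exclude: ^uv\.lock$
```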
Anyway: small mystery solved.
18 Mar 2026 4:00am GMT
16 Mar 2026
Django community aggregator: Community blog posts
Built with Django β Weekly Roundup (Mar 09βMar 16, 2026)
Hey, Happy Monday!
Why are you getting this: You signed up to receive this newsletter on Built with Django. I promised to send you the latest projects and jobs on the site as well as any other interesting Django content I encountered during the month. If you don't want to receive this newsletter, feel free to unsubscribe anytime.
Sponsor
This issue is sponsored by TuxSEO - your AI content team on auto-pilot.
- Plan and ship SEO content faster
- Generate practical, publish-ready drafts
- Keep your content pipeline moving every week
Projects
- Table of Contents Generator - Generate a clickable Table of Contents for any PDF in seconds. Free, private, no account required. Works with reports, eBooks, manuals, and papers.
Jobs
From the Community
- Django Admin: Building a Production-Ready Back Office Without Starting From Scratch | Medium by Anas Issath
- Beginner's Guide to Open Source Contribution | Djangonaut Space 2026 - DEV Community
- Django Tutorial - GeeksforGeeks
Support
You can support this project by using one of the affiliate links below. These are always going to be projects I use and love! No "Bluehost" crap here!
- Buttondown - Email newsletter tool I use to send you this newsletter.
- Readwise - Best reading software company out there. If you want to up your e-reading game, this is definitely for you! It just so happens that I work for Readwise. Best company out there!
- Hetzner - IMHO the best place to buy a VPS or a server for your projects. I'll be doing a tutorial on how to use this in the future.
- SaaS Pegasus is one of the best (if not the best) ways to quickstart your Django Project. If you have a business idea but don't want to set up all the boring stuff (Auth, Payments, Workers, etc.) this is for you!
16 Mar 2026 6:00pm GMT
15 Mar 2026
Django community aggregator: Community blog posts
How I deploy my projects to a single VPS with Gitea, NGINX and Docker
Hello everyone!
A few weeks ago, the team behind Jmail (a Gmail-styled interface for browsing the publicly released Epstein files) shared that they had racked up a $46,485 bill on Vercel. The site had gone viral with ~450 million pageviews, and Vercel's pricing structure turned that into a five-figure invoice. Vercel's CEO ended up covering the bill personally, which is nice, but not exactly a scalable solution.
When I saw that story, my first thought was: this is an efficiency problem. Jmail is essentially a search interface on top of mostly static content. An SRE on Hacker News mentioned they handle 200x Jmail's request load on just two Hetzner servers. The whole thing could have been served from a moderately sized VPS for a fraction of the cost.
That got me thinking about my own setup. I run everything on a single VPS: my blog, my side projects, my git server, analytics, a wiki, a forum, a secret sharing tool, and more. The whole thing is held together by NGINX, Gitea, some bash scripts, and Docker. No Kubernetes, no Terraform, no CI/CD platform with a $500/month bill. Just a cheap VPS, some config files, and a deployment flow that's simple enough that I can fix it from my phone at the beach (I've written about that before).
I get asked about my deployment setup more often than I expected, so I figured I'd write it all down. Let me walk you through the whole thing.
The VPS
I'm running a Hetzner Cloud CPX21 in Nuremberg, Germany. Here are the specs:
| Spec | Value |
|---|---|
| vCPUs | 3 |
| RAM | 4 GB |
| Disk | 80 GB SSD |
| OS | Ubuntu |
| Price | ~€7-8/month |
The CPX21 is one of Hetzner's shared vCPU instances. It's cheap, reliable, and more than enough for what I need. I'm usually sitting at around ~10% CPU and ~2GB RAM, so there's plenty of headroom.
I set up the VPS manually. No Ansible, no configuration management, just plain old SSH and installing things by hand. I know, I know, "infrastructure as code" and all that. But for a single server that I manage myself, the overhead of automating the setup isn't worth it. If the server dies, I can set it up again in a couple of hours and restore from backups.
What's running on it
Here's everything running on this single VPS:
Bare metal (directly on the server)
| Service | Purpose |
|---|---|
| Gitea | Self-hosted git server |
| NGINX | Web server / reverse proxy |
| Certbot | SSL/TLS certificates |
| PHP-FPM | For WordPress sites |
| DokuWiki | Personal wiki |
| fail2ban | Brute force protection |
| UFW | Firewall |
| A couple WordPress sites | Various projects |
Docker
| Service | Purpose |
|---|---|
| ntfy | Push notifications |
| shhh | Secret sharing |
| SearXNG | Privacy-respecting search engine |
| WireGuard | VPN |
| phpBB | YAMS community forum |
| Umami | Privacy-respecting analytics |
| Gitea Actions runner | CI/CD runner |
| Watchtower | Automatic Docker image updates |
Static sites (Hugo, served by NGINX)
| Site | Purpose |
|---|---|
| rogs.me | This blog! |
| montevideo.restaurant | Restaurant directory |
| yams.media | YAMS documentation site |
That's a lot of stuff for a 4GB VPS. But static sites are basically free in terms of resources, and the Docker services are all lightweight. The heaviest things are probably Gitea and the WordPress sites, and even those barely register.
The web server: NGINX
Every site and service gets its own NGINX config file in /etc/nginx/conf.d/. One file per site, nice and clean. No sites-available / sites-enabled symlink dance.
Here's what a typical config looks like for one of my Hugo sites:
server {
    root /var/www/rogs.me;
    index index.html;
    server_name rogs.me;

    location / {
        try_files $uri $uri/ =404;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/rogs.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rogs.me/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = rogs.me) {
        return 301 https://$host$request_uri;
    }
    server_name rogs.me;
    listen 80;
    return 404;
}
Nothing fancy. Serve files from /var/www/rogs.me, redirect HTTP to HTTPS, done. The SSL bits are all managed by Certbot (more on that later).
For Docker services, the config looks slightly different because NGINX acts as a reverse proxy:
server {
server_name analytics.rogs.me;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
listen 443 ssl; # managed by Certbot
# ... SSL config same as above
}
Same pattern: one file per service. NGINX handles SSL termination and proxies to whatever port the Docker container exposes on localhost.
SSL/TLS with Let's Encrypt
All certificates come from Let's Encrypt via Certbot. I installed it with apt and used the NGINX plugin:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d rogs.me
Certbot modifies the NGINX config automatically to add the SSL directives (that's why you see those # managed by Certbot comments).
Certificates auto-renew daily at 3 AM via a cron job:
0 3 * * * certbot renew -q
The -q flag keeps it quiet: no output unless something goes wrong. Certbot is smart enough to only renew certificates that are close to expiring, so running it daily is fine.
Self-hosted git with Gitea
I use Gitea as my primary git server. It runs bare metal on the VPS (not in Docker) and lives at git.rogs.me.
Why Gitea instead of just using GitHub? I want to own my git infrastructure. GitHub is great for collaboration, but I like having control over where my code lives. If GitHub goes down or decides to change their terms, my repos are safe on my own server.
That said, I mirror everything to both GitHub and GitLab so other people can collaborate, open issues, and submit PRs. Best of both worlds: I own the primary, and the mirrors handle the social coding side.
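Gitea can handle this server-side with push mirrors configured per repository; the same effect can also be approximated on the client by giving one remote several push URLs. A minimal sketch (all URLs are placeholders, not my actual remotes):

```shell
# Create a throwaway repo and point one "origin" at three push targets.
# Once any push URL is set, git stops falling back to the fetch URL,
# so the primary has to be re-added explicitly as a push URL too.
git init -q demo
git -C demo remote add origin https://git.example.com/me/blog.git
git -C demo remote set-url --add --push origin https://git.example.com/me/blog.git
git -C demo remote set-url --add --push origin https://github.com/example/blog.git
git -C demo remote set-url --add --push origin https://gitlab.com/example/blog.git
git -C demo remote -v
```

After that, a single git push origin master updates the primary and both mirrors in one go.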
Gitea Actions
Gitea has a built-in CI/CD system called Gitea Actions that's compatible with GitHub Actions workflows. The runner is the official gitea/act_runner Docker image, running on the same VPS. Pretty vanilla setup, no custom configuration.
This is the core of my deployment pipeline. Every time I push to master, Gitea Actions picks up the workflow and deploys the site.
Deploying Hugo sites
This is where it all comes together. All three of my Hugo sites follow the exact same deployment pattern. Here's the flow:
+----------+   push    +----------+  Gitea Actions  +----------+
|  Local   | --------> |  Gitea   | --------------> |  Runner  |
| machine  |           |(git.rogs)|                 | (Docker) |
+----------+           +----------+                 +----+-----+
                                                         |
                                                SSH into same VPS
                                                         |
                                                         v
                                                    +----------+
                                                    |   VPS    |
                                                    | git pull |
                                                    | build.sh |
                                                    +----+-----+
                                                         |
                                                 Hugo builds to
                                                 /var/www/domain/
                                                         |
                                                         v
                                                    +----------+
                                                    |  NGINX   |
                                                    |  serves  |
                                                    +----------+
Yes, the Gitea Actions runner SSHes into the same server it's running on. I know that's a bit redundant, but I designed it this way on purpose: if I ever move my hosting somewhere else (or switch back to GitHub Actions), the workflow doesn't need to change. The SSH target is just a secret, so I swap an IP address and everything keeps working.
The Gitea Actions workflow
Here's the workflow file that lives in .gitea/workflows/deploy.yml in each repo:
name: deploy
on:
push:
branches:
- master
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Deploy via SSH
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
port: ${{ secrets.SSH_PORT }}
script: |
cd repo && git stash && git pull --force origin master && ./build.sh
It's beautifully simple:
- A push to master triggers the workflow
- The runner uses appleboy/ssh-action to SSH into the server
- On the server: stash any local changes, pull the latest code, and run the build script
The git stash is there as a safety net. The WebP conversion in the build script modifies tracked files (more on that in a second), so without the stash, git pull would complain about a dirty working tree.
All four secrets (SSH_HOST, SSH_USER, SSH_PRIVATE_KEY, SSH_PORT) are configured in Gitea's repository settings. The SSH key has access to the server but is locked down to only what the deployment needs.
The build script
Every Hugo site has a build.sh in the repo root. Here's the one for this blog:
#!/bin/bash
# Convert all images to WebP for better performance
for file in $(git ls-files --others --cached --exclude-standard \
| grep -v '.git' \
| grep -E '\.(png|jpg|jpeg)$'); do
cwebp -lossless "$file" -o "${file%.*}.webp"
done
# Update all references from png/jpg/jpeg to webp
for tracked_file in $(git ls-files --others --cached --exclude-standard \
| grep -v '.git'); do
sed -i 's/\.png/.webp/g' "$tracked_file"
sed -i 's/\.jpg/.webp/g' "$tracked_file"
sed -i 's/\.jpeg/.webp/g' "$tracked_file"
done
# Build the site
hugo -s . -d /var/www/rogs.me/ --minify --cacheDir $PWD/hugo-cache
Three things happen here:
- Image optimization: Every PNG, JPG, and JPEG gets converted to WebP using cwebp (lossless mode, so no quality loss). WebP files are significantly smaller than their originals.
- Reference rewriting: All file references get updated from .png/.jpg/.jpeg to .webp. This is why we need git stash in the workflow; this step modifies tracked files.
- Hugo build: Generates the static site with minification enabled and outputs it directly to /var/www/rogs.me/. NGINX is already configured to serve from that directory, so the site is live immediately.
The --cacheDir flag keeps Hugo's build cache in the repo directory, which speeds up subsequent builds.
Each site's build.sh is essentially identical, just with a different output path (montevideo.restaurant, yams.media, etc.).
Variations across sites
While the pattern is the same, there are small differences:
- yams.media has a two-job workflow: a test_build job runs Hugo in a Docker container first to make sure the build succeeds, and only then does the deploy job run. This is because the YAMS docs site has more contributors, so I want to catch build errors before they hit production.
- yams.media also uses the --cleanDestinationDir and --gc flags for a cleaner build output.
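Reconstructed from that description (the Hugo image and exact steps are my guesses, not the real file), the two-job yams.media workflow looks roughly like this:

```yaml
name: deploy
on:
  push:
    branches:
      - master
jobs:
  test_build:
    runs-on: ubuntu-latest
    container: hugomods/hugo:latest  # assumed image; any Hugo container works
    steps:
      - uses: actions/checkout@v4
      - run: hugo --minify  # fail here and the deploy job never runs
  deploy:
    needs: test_build  # gate: only deploy after a successful test build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: ${{ secrets.SSH_PORT }}
          script: |
            cd repo && git stash && git pull --force origin master && ./build.sh
```

The needs: key is what makes it two-stage: the deploy job is skipped entirely if test_build fails.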
Docker services and Watchtower
Most of my non-static services run in Docker with docker-compose. Each service has its own directory in /opt/:
/opt/
├── analytics.rogs.me/   # Umami
│   └── docker-compose.yml
├── ntfy/
│   └── docker-compose.yml
├── shhh/
│   └── docker-compose.yml
├── searx/
│   └── docker-compose.yml
└── ...
For updates, I use Watchtower. It runs as a Docker container itself and periodically checks if there are newer images available for my running containers. If there are, it pulls the new image, stops the old container, and starts a new one with the same configuration.
version: "3"
services:
watchtower:
image: containrrr/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
restart: unless-stopped
Is this a bit risky? Sure. An automatic update could break something. But in practice, it hasn't failed me once, and the services I'm running are stable enough that breaking changes in Docker images are rare. For a personal setup, the convenience of never having to manually update containers is worth the small risk.
Security
I'm not running a bank here, but I do take basic security seriously:
- UFW (Uncomplicated Firewall): Only NGINX ports (80, 443) and SSH are open. Everything else is blocked.
- fail2ban: Watches SSH logs and bans IPs after too many failed login attempts. Essential if your SSH port is exposed to the internet.
- SSH keys only: Password authentication is disabled. If you don't have the key, you're not getting in.
- Let's Encrypt everywhere: Every site and service gets HTTPS. No exceptions.
- Docker services on localhost: All Docker containers bind to localhost. They're only accessible through the NGINX reverse proxy, which handles SSL termination.
# Quick UFW setup
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 'Nginx Full'
sudo ufw allow ssh
sudo ufw enable
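On the Docker side, the localhost-only binding mentioned above is just how ports are published in each compose file. A sketch for the Umami container behind the analytics.rogs.me proxy config shown earlier (image tag illustrative; the database service is omitted):

```yaml
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    ports:
      # The 127.0.0.1 prefix means the port is reachable only from this
      # host (i.e. from NGINX), never directly from the internet.
      - "127.0.0.1:3000:3000"
    restart: unless-stopped
```

Worth knowing: a bare "3000:3000" would publish on all interfaces, and because Docker writes its own iptables rules, UFW would not block it. The 127.0.0.1 prefix is doing real work here.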
DNS
All my domains use Cloudflare for DNS. But only DNS for most of them. I'm not using Cloudflare's CDN or proxy features on my main sites. The DNS records point directly to my VPS IP with the proxy toggle set to "DNS only" (the grey cloud, not the orange one).
Why Cloudflare for DNS? Two reasons. First, it's free, fast, and the dashboard is easy to use. Second, and more importantly: if something goes wrong, I can switch to using Cloudflare's full proxy and DDoS protection with the flick of a button. Just toggle the grey cloud to orange and you're behind Cloudflare's network instantly.
I've already had to do this once. forum.yams.media (the YAMS community forum) was getting DDoSed and swarmed by bots constantly. Flipping that toggle to orange solved the problem immediately. The rest of my sites run without Cloudflare's proxy because they don't need it, but knowing I can turn it on in seconds gives me peace of mind.
Backups
This is the part that most people skip. Don't be most people.
My backup strategy has two stages:
+-------------+   11 PM cron   +-------------------+
|     VPS     | -------------> |  /home/backups/   |
| (services)  |   tar + GPG    | (encrypted .gpg)  |
+-------------+                +---------+---------+
                                         |
                                  midnight cron
                                   (SSH pull)
                                         |
                                         v
                               +------------------+
                               |   Home Server    |
                               |   (NAS + S3)     |
                               +------------------+
Stage 1: Backup on the VPS (11 PM)
Every night at 11 PM, a series of cron jobs run backup scripts for each service. Each script follows the same pattern:
#!/bin/bash
BACKUP_DIR="/home/backups/servicename"
TARGET_DIR="/path/to/service"
DATE=$(date +%Y-%m-%d-%s)
BACKUP_FILE="$BACKUP_DIR/backup-servicename-$DATE.tar.zst"
ENCRYPTED_FILE="$BACKUP_FILE.gpg"
LOG_FILE="/var/log/backup_servicename.log"
GPG_RECIPIENT="your-email@example.com"
log_message() {
echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}
log_message "=== Starting backup ==="
mkdir -p "$BACKUP_DIR"
# For Docker services: stop containers first
docker compose stop
# Create compressed archive
tar -caf "$BACKUP_FILE" -C "$TARGET_DIR" .
# Encrypt with GPG
gpg --encrypt --armor -r "$GPG_RECIPIENT" -o "$ENCRYPTED_FILE" "$BACKUP_FILE"
rm -f "$BACKUP_FILE" # Remove unencrypted version
# For Docker services: restart containers
docker compose up -d
log_message "=== Backup completed ==="
Key points:
- Compression: I use tar.zst (Zstandard) for compression. It's faster than gzip and produces smaller files.
- Encryption: Every backup gets GPG-encrypted before it touches the network. Even if someone gets access to the backup files, they're useless without my private key.
- Docker services: For services running in Docker, the script stops the containers before backing up to ensure data consistency, then starts them again. This causes a brief downtime (usually a few seconds), which is fine for personal services at 11 PM.
- Database dumps: For services with databases (like Gitea, which uses MySQL), the script dumps the database separately with mysqldump before creating the archive.
- Logging: Every step is logged to /var/log/, so I can check if something went wrong.
Stage 2: Pull to home server (midnight)
At midnight, my home server SSHes into the VPS and pulls all the encrypted backup files to my local NAS. From there, they also get pushed to an S3 bucket.
This gives me the classic 3-2-1 backup strategy: 3 copies of the data (VPS, NAS, S3), on 2 different media types, with 1 offsite copy. If Hetzner's datacenter burns down, I have everything locally. If my house burns down, I have everything in S3.
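The midnight pull can be sketched as a single crontab entry on the home server (host, paths, and bucket are placeholders, and the S3 push here assumes the AWS CLI):

```shell
# m h dom mon dow  command
0 0 * * * rsync -az vps.example.com:/home/backups/ /mnt/nas/vps-backups/ && aws s3 sync /mnt/nas/vps-backups/ s3://example-backups/vps/
```

Since the backups are already GPG-encrypted on the VPS, neither the NAS nor the bucket ever sees plaintext.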
Monitoring
I run Uptime Kuma on my home server to monitor all my services. It checks every site and service periodically and sends me a notification (via ntfy, naturally) if something goes down.
It's not fancy, but it works. I've caught a few issues before anyone else noticed them, which is the whole point.
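Uptime Kuma is a single-container deployment; a typical compose file for it looks like this (paths illustrative, 3001 is the image's default port):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - ./data:/app/data   # SQLite database and config live here
    ports:
      - "3001:3001"
    restart: unless-stopped
```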
The big picture
Here's what the whole setup looks like:
+--------------------------------------------------------+
|                     Hetzner CPX21                      |
|                                                        |
|  +----------+    +----------------------------------+  |
|  |  Gitea   |    |              NGINX               |  |
|  | Actions  |    |  +----------+  +--------------+  |  |
|  |  Runner  |    |  |  Static  |  |   Reverse    |  |  |
|  | (Docker) |    |  |  sites   |  |   proxy to   |  |  |
|  +----+-----+    |  |/var/www/ |  | Docker svcs  |  |  |
|       |          |  +----------+  +--------------+  |  |
|       | SSH      +-------^----------------^---------+  |
|       |                  |                |            |
|       v                  |                |            |
|  +----------+        +--------+     +-----------+      |
|  |   Git    | build  |  Hugo  |     |  Docker   |      |
|  |  repos   |------->|  sites |     | services  |      |
|  +----------+        +--------+     +-----------+      |
|                                                        |
|  +--------------+  +----------+  +------------+        |
|  |    Gitea     |  | Certbot  |  |  fail2ban  |        |
|  | (bare metal) |  |  (SSL)   |  |   + UFW    |        |
|  +--------------+  +----------+  +------------+        |
+--------------------------------------------------------+
Conclusion
The whole philosophy here is simplicity. There's no orchestration tool, no container registry, no deployment platform. It's just:
- Push code to Gitea
- A workflow SSHes into the server
- Git pull + bash script builds the site
- NGINX serves it
Could I make this more sophisticated? Sure. Could I use Ansible to manage the server config, or Kubernetes to orchestrate the containers, or a proper CI/CD platform with build artifacts and rollbacks? Absolutely. But for a personal setup that hosts a blog, some side projects, and a handful of services, this is more than enough.
The setup has been running for years with minimal maintenance. The most time I spend on it is writing backup scripts for new services and adding NGINX configs when I deploy something new. Everything else is automated: deployments, SSL renewals, Docker updates, backups.
If you're thinking about self-hosting your projects, my advice is: start simple. A VPS, NGINX, and a bash script can take you surprisingly far. You can always add complexity later if you need it, but in my experience, you probably won't.
If you have questions about any part of this setup, feel free to reach out on the Contact page. I'm always happy to help people get started with self-hosting.
See you in the next one!
15 Mar 2026 5:00am GMT
14 Mar 2026
Django community aggregator: Community blog posts
10 Years of Jazzband
Jazzband is sunsetting. Before moving on, here's a look at what 10 years of cooperative coding actually looked like.
By the numbers
Five years in, we had about 1,350 members and 55 projects. Here's where things stand now:
Members
- 3,135 total members over the years
- 2,133 members currently - a 68% retention rate over 10 years
- New members every year, peaking at 424 in 2022
- Members who left stayed an average of 510 days
- Based on GitHub profiles (only ~28% of members list a location), members from at least 56 countries across every continent but Antarctica - 36% Europe, 30% Asia, 22% North America, 7% South America, 3% Africa, 1% Oceania. Real numbers are likely higher. And given how widely Python is used in research, someone in Antarctica has probably pip-installed a Jazzband project at some point
Projects
- 84 projects total, 71 still active
- 13 projects left over the years
- ~93,000 GitHub stars across all projects
- ~16,000 forks
Activity
- ~43,800 commits across all repositories
- ~15,600 pull requests
- ~12,200 issues
Releases
- 1,429 package uploads via Jazzband's release pipeline
- 1,312 releases to PyPI across 56 projects and 390 versions
- 281 MB of release artifacts total
- First upload in November 2017, most recent in March 2026
Project teams
- 470 project team memberships
- 105 lead roles across 81 project leads
- Most prolific leads: aleksihakli, hramezani, claudep, and camilonova each maintained 4 projects
How Jazzband was actually used
The numbers above only tell part of the story. Here's what's more interesting.
Not everyone used the release pipeline
20 active projects never shipped a single release through it. Projects like Watson (2,515 stars), django-rest-knox (1,255), and django-admin2 (1,187) used Jazzband as a collaborative home - for shared access, triage, and maintenance - not for releases. The pipeline was useful for the projects that used it, but it wasn't what made Jazzband work for most people.
Old projects stayed alive
django-avatar's repo was created in 2008 and shipped its most recent Jazzband release in January 2026 - a 17-year-old repo still getting releases. django-axes (2009), sorl-thumbnail (2010), django-constance (2010), and 18 other projects created before 2015 were all still getting releases in 2025 or 2026. Jazzband kept old projects alive long after their original authors moved on. That was the whole point.
Release cadence varied wildly
django-axes had the most active release cadence: 253 release files across 127 versions, peaking at 28 versions in 2019 - roughly one every 13 days. pip-tools was second at 138 releases / 69 versions.
Meanwhile, 7 active projects have no team members at all - django-permission, django-mongonaut, and five others. Nobody was actively working on them, but they had a home and stayed installable.
pip-tools was its own community
With 69 team members it dwarfed every other project (the next largest, djangorestframework-simplejwt, had 24). It was basically a sub-organization within Jazzband. And two projects joined as recently as 2024 (django-tagging, django-summernote) with single-digit stars and zero releases - people were still finding value in the model right up to the end.
The open access model was genuinely controversial
When django-newsletter transferred in, its author @dokterbob worried that giving 800 members write access would "dissolve the responsibility so much that it might actually reduce participation." I wrote a long reply defending the open model.
An earlier project, Collectfast, actually left Jazzband after a member pushed directly to master without review - merging commits the author had been holding off on. That incident led to real discussions about code review processes, branch protection, and what "open access" should actually mean. The tension between openness and control was never fully resolved.
Moderation was another solo job
Over the years I had to block 10 accounts from the GitHub organization - first crypto spammers who joined just to be in the org, then community conflicts that needed real moderation decisions, and finally the AI-driven spam that made the open model untenable. None of that is unusual for an organization this size, but it all went through one person.
The onboarding bottleneck
Every transferred project got an onboarding checklist - a webhook automatically opened an "Implement Jazzband guidelines" issue with TODOs like fixing links, adding badges, setting up CI, adding jazzband to PyPI, deciding on a project lead. 41 projects got one of these. 28 completed it. 13 are still open.
The pattern in those 13 is telling: contributors would do every item they could, then get stuck on things that required admin access - configuring webhooks, fixing CI checks, setting up the release pipeline - and wait for me. Sometimes for months.
django-user-sessions' original author pinged me five times over two months about broken CI checks only an admin could fix. Watson's lead asked twice to remove legacy CI tools blocking PR merges. The checklist was good. The bottleneck was me.
Projects that moved on
One of the earliest and most visible Jazzband projects was django-debug-toolbar, transferred in back in 2016. It grew to over 8,000 stars under Jazzband before it moved to Django Commons in 2024.
django-simple-history, django-oauth-toolkit, PrettyTable, and tablib all moved on too, for similar reasons - they needed more autonomy than Jazzband's structure could provide.
Downloads
For context on how widely these projects are used, here are some numbers from PyPI. All projects that were ever part of Jazzband account for over 150 million downloads a month. Current projects alone are around 95 million.
Top 15 by monthly downloads:
| Project | Downloads/month | Note |
|---|---|---|
| prettytable | 42.4M | left Jazzband |
| pip-tools | 23.3M | |
| contextlib2 | 10.7M | |
| django-redis | 9.6M | |
| django-debug-toolbar | 7.3M | left, now Django Commons |
| djangorestframework-simplejwt | 6.1M | |
| dj-database-url | 5.5M | |
| pathlib2 | 4.9M | |
| django-model-utils | 4.8M | |
| geojson | 4.6M | |
| tablib | 4.1M | |
| django-oauth-toolkit | 3.7M | left |
| django-simple-history | 3.1M | left, now Django Commons |
| django-silk | 2.7M | |
| django-formtools | 2.1M |
One thing that surprised me: prettytable alone accounts for 42 million downloads a month, and it isn't even a Django package. contextlib2, pathlib2, and geojson aren't either. Jazzband ended up being broader than the Django ecosystem it started in.
django-debug-toolbar ranked in the top three most used third-party packages in the Django Developers Survey and is featured in the official Django tutorial. It spent 8 years under Jazzband before moving to Django Commons.
If you've come across Jazzband projects before, it was probably through the Django News newsletter, Python Weekly, or Opensource.com's 2020 piece on how Jazzband worked.
Top 10 projects by stars
| Project | Stars |
|---|---|
| pip-tools | 7,997 |
| django-silk | 4,939 |
| tablib | 4,752 |
| djangorestframework-simplejwt | 4,310 |
| django-taggit | 3,429 |
| django-redis | 3,059 |
| django-model-utils | 2,759 |
| Watson | 2,515 |
| django-push-notifications | 2,384 |
| django-widget-tweaks | 2,165 |
14 Mar 2026 4:02pm GMT
Wind-Down Plan
This post outlines the plan for winding down Jazzband. If you haven't read them yet, see the sunsetting announcement for context on why this is happening, and the 10-year retrospective for the full story.
Timeline
The wind-down will happen in phases over the course of 2026.
Phase 1: Announcement (March 2026)
- New member signups are disabled immediately
- This announcement and wind-down plan are published
- Existing members retain access to the GitHub organization and all repositories
Phase 2: Outreach (March - May 2026)
- All 80 project leads will be contacted via email to discuss transferring their projects
- The goal is to have initial conversations with every lead before PyCon US 2026 (May 13-19 in Long Beach, CA)
- Leads who don't respond will be followed up with at PyCon US and through other channels
Phase 3: Project Transfers (June - December 2026)
- Projects will be transferred out of the Jazzband GitHub organization to their new homes - whether that's a lead's personal account, a new organization, or another collaborative group
- For each project, the transfer includes:
- GitHub repository: transferred to the new owner
- PyPI package ownership: existing maintainers added, Jazzband credentials removed
- CI/CD configuration: updated to work outside Jazzband
- Projects without an active lead or willing recipient will be archived in the Jazzband GitHub organization
Phase 4: Wind Down (Early 2027)
- Remaining repositories archived
- The Jazzband GitHub organization set to read-only
- The jazzband.co website archived (with a redirect or static notice)
- PSF Fiscal Sponsorship status concluded, remaining funds donated to the PSF general fund
What happens toβ¦
β¦existing members?
You remain a member of the GitHub organization until it is archived. No action is needed on your part. If you'd like to leave earlier, you can do so from your account dashboard.
β¦projects I contribute to?
The projects aren't going away - they're moving. Your contributions, issues, and pull requests will transfer with the repository to its new home. Git history is preserved.
β¦PyPI packages?
Package ownership on PyPI will be transferred to the project leads before the Jazzband release credentials are deactivated. If you're a project lead, we'll coordinate this with you directly.
β¦the Jazzband release pipeline?
The Jazzband-specific release pipeline (uploading via Twine to jazzband.co, then releasing to PyPI) will remain functional during the transition period. After transfer, projects will publish to PyPI directly using standard tooling.
β¦the website?
The jazzband.co website will remain online through the transition. After wind-down, it will be replaced with a static page linking to this announcement and an archive of the project list.
β¦the PSF Fiscal Sponsorship?
Jazzband's PSF Fiscal Sponsorship status will be formally concluded. Any remaining funds will be donated to the Python Software Foundation's general fund.
For project leads
If you're a project lead, here's what to expect:
- You'll receive an email with details specific to your project(s)
- Decide on a new home for your project - your personal GitHub account, a new organization, or another collaborative group like Django Commons
- Coordinate the transfer - we'll handle the GitHub repo transfer and help with PyPI ownership changes
- Update your project - CI/CD, documentation links, and any Jazzband-specific references
Several projects have already successfully transferred to Django Commons, including django-debug-toolbar and django-simple-history. If you're looking for a place with shared maintenance and multiple admins, it's a good option.
If you have questions or want to start the process early, please contact the roadies.
14 Mar 2026 4:01pm GMT
Sunsetting Jazzband
Over 10 years ago, Jazzband started as a cooperative experiment to reduce the stress of maintaining Open Source software projects. The idea was simple - everyone who joins gets access to push code, triage issues, merge pull requests. "We are all part of this."
It had a good run. More than 10 years, actually.
But it's time to wind things down.
What happened
There's a short answer and a long answer.
The slopocalypse
GitHub's slopocalypse - the flood of AI-generated spam PRs and issues - has made Jazzband's model of open membership and shared push access untenable.
Jazzband was designed for a world where the worst case was someone accidentally merging the wrong PR. In a world where only 1 in 10 AI-generated PRs meets project standards, where curl had to shut down its bug bounty because confirmation rates dropped below 5%, and where GitHub's own response was a kill switch to disable pull requests entirely - an organization that gives push access to everyone who joins simply can't operate safely anymore.
The one-roadie problem
But honestly, the cracks have been showing for much longer than that.
Jazzband was always a one-roadie operation. People asked for more roadies and offered to help over the years, and I tried a number of times to make it work - but it never stuck. I dropped the ball on organizing it properly, and when volunteers did step up they'd quietly step back after a while. That's not a criticism of them, it's just how volunteer work goes when there's no structure to support it.
The result was the same though: every release request, every project transfer, every lead assignment, every PyPI permission change - it all went through me.
The warnings
The sustainability question was raised as early as 2017. I gave a keynote at DjangoCon Europe 2021 about it - five years in. In that talk I said out loud that the "social coding" experiment had failed to create an equitable community, and that a sustainable solution didn't exist without serious financial support.
The roadmap I presented - revamp infrastructure, grow the management team, formalize guidelines, reach out for funding - none of that happened. The PSF fiscal sponsorship was the one thing that did.
In the years since, I've been on the PSF board - which faced its own crises - and now serve as PSF chair. That work matters and I don't regret prioritizing it, but it meant Jazzband got even less of my time.
GitHub went the other way
Meanwhile, GitHub moved in the opposite direction. Copilot launched in 2022, trained on open source code that maintainers were burning out maintaining for free. GitHub Sponsors participation sits at 0.0014%. 60% of maintainers are still unpaid.
The XZ Utils backdoor in 2024 showed what happens when a lone maintainer burns out and someone malicious fills the gap. And Jazzband's own infrastructure started getting in the way of the projects it was supposed to help - the release pipeline couldn't support trusted publishing, projects that needed admin access were stuck.
So projects started leaving. And that's OK - that was always supposed to be part of the deal.
Django Commons
I want to specifically thank Django Commons and Tim Schilling for picking up where Jazzband fell short. They have 5 admins, 15 active projects (including django-debug-toolbar, django-simple-history, and django-cookie-consent from Jazzband), and django-polymorphic is transferring over right now. They solved the governance problem from day one. If you're a Jazzband project lead looking for a new home for your Django project, start there.
For non-Django projects like pip-tools, contextlib2, geojson, or tablib - I'm not aware of an equivalent. If someone wants to build one for the broader Python tooling ecosystem, I'd love to see it.
By the numbers
Over 10 years, Jazzband grew to 3,135 members from every continent but Antarctica, maintained 84 projects with ~93,000 GitHub stars, and shipped 1,312 releases to PyPI.
Projects that passed through Jazzband are downloaded over 150 million times a month - pip-tools at 23 million, prettytable at 42 million. django-debug-toolbar spent 8 years under Jazzband and ended up in the official Django tutorial. django-avatar, a repo from 2008, was still getting releases in 2026. And django-axes shipped 127 versions - a release every 13 days in its peak year.
The full 10-year retrospective has all the numbers, the stories, and what actually happened.
What happens next
I'm not pulling the plug overnight. There is a detailed wind-down plan that covers the timeline, but the short version:
- New signups are disabled as of today
- Project leads will be contacted before PyCon US 2026 to coordinate transferring projects to new homes
- The GitHub organization and website will remain available during the transition period through end of 2026
If you're a project lead, expect an email soon.
Thank you
None of this would have been possible without the people who showed up - strangers on the internet who decided to maintain something together. Thanks to the 81 project leads who kept things going despite the bottlenecks I created, and to everyone who joined, contributed, filed issues, and shipped releases over the years.
I started Jazzband because maintaining Open Source alone was exhausting. The irony of then becoming a single point of failure for 71 projects is not lost on me. But the experiment worked in the ways that mattered - projects got maintained, releases got shipped, people collaborated.
Anyways, the projects will move on to new homes, and that's fine. That was always the point.
We are all part of this.
14 Mar 2026 4:00pm GMT