27 Mar 2026
Django community aggregator: Community blog posts
Django News - Balancing the AI Flood in Django - Mar 27th 2026
News
Calling for research participants from Django, Laravel, Ruby on Rails, Next.js and Spring Boot communities
Former DSF President and researcher Anna Makarudze is seeking Django developers to share insights on dependency vulnerabilities and supply chain risks in open source.
Djangonaut Space News
Djangonaut Space Financial Report 2025
Djangonaut Space's 2025 report highlights a community-powered year of $2.2k in donations funding tools and conference access, while setting sights on sending contributors to even more events in 2026.
Djangonaut diaries, week 3 - Working on an ORM issue
A deep dive into Django's ManyToMany indexes reveals an unnecessary extra index, showing how databases already optimize with composite indexes and setting the stage for a cleaner ORM fix.
Wagtail CMS News
Wagtail Routable Pages and Layout Configuration
Build flexible Wagtail routable pages that use StreamField layouts to dynamically control how Django model data renders on detail views.
Updates to Django
Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! 🚀
Last week we had 18 pull requests merged into Django by 15 different contributors - including 4 first-time contributors! Congratulations to Juho Hautala, Huwaiza, (James) Kanin Kearpimy 🚀 and Praful Gulani for having their first commits merged into Django - welcome on board!
News in Django 6.1:
- Providing `fail_silently=True`, `auth_user`, or `auth_password` to mail sending functions (such as `send_mail()`) while also providing a `connection` now raises a `TypeError`.
- `assertContains()` and `assertNotContains()` can now be called multiple times on the same `StreamingHttpResponse`. Previously, they would consume the streaming response's content, causing subsequent calls to fail.
- Because quoted aliases are case-sensitive, raw SQL references to aliases mixing case, such as when using `RawSQL`, might have to be adjusted to also make use of quoting.
Django Newsletter
Django Fellow Reports
Fellow Report - Natalia
A significant portion of this week was dedicated to security work (yes, again). As usual, details here are intentionally kept at a high level, but the time went into triaging new reports, advancing in-flight issues that are likely confirmed, validating proposed fixes, and coordinating next steps with the team.
One additional challenge worth noting is the volume of near-duplicate reports; beyond triage, this often requires careful comparison across long submissions to identify what is actually new or meaningfully different.
Fellow Report - Jacob
Easy to miss in the release notes (as we only described the user-facing changes for edge cases), but last week we landed (with great joy) @charettes' defense-in-depth measure for the ORM that ensures user-provided aliases are always quoted.
In addition to the below, another steady week advancing pending security reports.
Sponsored Link 1
The deployment service for developers and teams.
Articles
Learning LLM Integration
A practical, from-scratch look at integrating LLMs into a Django app, highlighting why isolating the AI layer and writing precise prompts makes all the difference.
Give Django your time and money, not your tokens
The Django community wants to collaborate with you, not a facade of you.
Open Source Has a Bot Problem
The maintainer of awesome-mcp-servers came up with a solution, of sorts, to curating AI-generated PRs.
Why pylock.toml includes digital attestations
A Python project got hacked where malicious releases were directly uploaded to PyPI. I said on Mastodon that had the project used trusted publishing with digital attestations, then people using a pylock.toml file would have noticed something odd was going on thanks to the lock file including attestation data.
Rewriting a 20-year-old Python library
A thoughtful deep dive into rewriting a 20-year-old Python library, covering async design, API ergonomics, and how to modernize without breaking users.
Playground embedding, packages and more
The nanodjango playground has several new exciting features which transform what you can achieve with it - you can now manage packages and secrets, share scripts from the command line, and embed live Django code in your own site.
Human.json
A quick look at human.json, a lightweight protocol for sharing human-readable metadata, with a simple Django implementation and a healthy dose of skepticism about its long-term adoption.
Videos
PyCon US 2026 - Elaine Wong & Jon Banafato
A behind-the-scenes look at PyCon US 2026 with chair Elaine Wong and co-chair Jon Banafato, covering what's new, how to prepare, and tips to make the most of the biggest Python conference in North America.
Django Job Board
Solutions Architect - Python (Client-facing) at JetBrains
Django Newsletter
Django Forum
Discouraging "the voice from nowhere" (~LLMs) in documentation
Forum discussion on maintaining a human (not LLM) voice in Django's documentation.
Projects
kujov/django-tw
Zero-config Tailwind CSS v4 for Django.
VojtechPetru/django-live-translations
In-browser translation editing for Django superusers.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
27 Mar 2026 3:00pm GMT
25 Mar 2026
LLMs for Open Source maintenance: a cautious case
When ChatGPT appeared on the scene I was very annoyed at all the hype surrounding it. Since I'm working in the fast moving and low margin business of communication and campaigning agencies I'm surrounded by people eager to jump on the hype train when a tool promises to lessen the workload and take stuff from everyone's plate.
These discussions coupled with the fact that the training of these tools required unfathomable amounts of stealing were the reason for a big reluctance on my part when trying them out. I'm using the word stealing here on purpose, since that's exactly the crime Aaron Swartz was accused of by the attorney's office of the district of Massachusetts. It's frustrating that some people can get away with the same crime when it is so much bigger. For example, OpenAI and Anthropic downloaded much more data than Aaron ever did.
A somewhat related thing happened with the too-big-to-fail banks: There, the people at the top were even compensated with golden parachutes at the end. LLM companies seem to be above accountability too.
Despite all this, I have slowly started integrating these tools into my workflows. I don't remember the exact point in time, but since some time in 2025 my opinion of their utility has started to change. At the beginning, I always removed the attribution and took great care to write and rewrite the code myself, only using the LLMs for inspiration and maybe to generate integration tests. More and more I have to admit that they are useful, especially in time-constrained projects with a clear focus and purpose.
Last month I fixed and/or closed all open issues in the django-tree-queries repository with the help of Claude Code. Is that a good thing? It could be argued I should have done the work myself. But I wouldn't have - I have other things I want to do with my time. I don't want to (always) work on Open Source software in the evening. I definitely also have leaned heavily on LLMs when working on django-prose-editor.
Is faster better?
We can produce more code, more features and close tickets faster than before. In my experience the speed up isn't as big as some people may want us to believe, but it's there. And contrary to what people in my LinkedIn feed say, that's not an obviously good thing. Is it a race to the bottom where we drown in LLM-generated slop in quantities impossible to maintain? It doesn't feel like that - but it's a race that could go both ways. Throwaway code can be thrown away though, and well tested code does what the tests say, which is good enough according to my rules for releasing open source software.
Speaking as someone who has put more into the training set than they've taken out so far, I don't feel all that bad using the tools. Coding agents can already be run locally with reasonable hardware requirements, at least during inference, which is where the ongoing cost sits. Maybe using them is still rationalization. But contribution and profit needing to stay in some rough balance feels like the right frame. Total abstinence isn't the only ethical choice we have.
Community tensions
What makes me less comfortable is how communities are reacting. There are real concerns within the Django world, and not just the practical one of overworked maintainers wading through hastily generated patches that don't actually fix anything. The deeper worry is about the communal nature of contribution: that working on Django is supposed to be a learning experience, a way into the community, and that using an LLM as a vehicle rather than a tool hollows out that process. Reviewers end up interacting with what is essentially a facade, unable to tell whether anyone actually understood the problem. That's a real concern and I don't want to dismiss it.
But it maps onto a different situation from what I've been describing. Using Claude Code to close issues in projects I maintain and understand is not the same as using it to paper over gaps in comprehension on a ticket in someone else's project. Whether LLM-assisted contributions to Django itself are appropriate is a difficult question; whether it's appropriate to use them when maintaining your own software less so.
There's also a harder tension around quality. Django's conservatism has real value: rigorous review, minimal magic, a coherent philosophy. The ORM and template system don't need to reinvent themselves: they work well and are still evolving while staying rock-solid for all my use cases. And reading the release notes always brings me joy. But it could be more exciting more often. Quality isn't a strictly positive thing; everything has costs. It's not great if the price of that bar is that legitimate bugs sit open for years because nobody has a few evenings to spend on them. That happened with django-tree-queries before I went through it with Claude Code. I think the bar for contributing to Django is too high. I would value a little more motion and a little less stability, even as someone running dozens of Django websites and apps.
Then there's the pile-on dynamic that plays out on Mastodon and GitHub. When the Harfbuzz and chardet maintainers disclosed LLM usage, the reaction from some corners was something to behold. People expressing what amounted to personal grievance over tooling choices in projects they may not even use. There's a particular kind of entitlement in telling a maintainer - who is keeping software alive, possibly even in their spare time - that the way they choose to do that work is an affront. Open source is a gift, whether paid or not, and nobody has to accept it, but disclosing your tooling isn't an invitation for complaints. The ethical concerns about training data, resource use and other negative externalities are legitimate and worth raising. Performative outrage directed at individual maintainers is not the same thing.
I don't have an easy conclusion. The tools are useful, the ethics are murky, and communities are still figuring out how to respond. A cautious, honest use of them feels better to me than the alternatives.
25 Mar 2026 5:00pm GMT
Building modern Django apps with Alpine AJAX, revisited
About nine months ago I wrote an article about my quest to simplify my web development stack. How I went from SvelteKit on the frontend and Django on the backend, to an all-Django stack for a new project, using Alpine AJAX to enable partial page updates.
I've now been using this new stack for a while, and my approach, as well as my opinion, has changed significantly. Let's get into what works, what doesn't, and where I ended up.
A quick recap
Alpine AJAX is a lightweight alternative to htmx, which you can use to enhance server-side rendered HTML with a few attributes, turning <a> and <form> tags into AJAX-powered versions. No more full page refreshes when you submit a form.
The key mechanic: when a form has x-target="comments", Alpine AJAX submits the form via AJAX, finds the element with that ID in the response, and swaps it into the page. The server returns HTML, not JSON.
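The extraction side of that mechanic can be pictured with a small Python sketch. To be clear, the real swap happens in the browser, inside Alpine AJAX's JavaScript; this stdlib-only version just mirrors the idea of pulling the element with the target id out of the response HTML:

```python
# Illustrative sketch of Alpine AJAX's target extraction: keep only the
# element whose id matches the x-target value. Limitation: void elements
# written without a closing tag (e.g. <input>, <br>) are not handled.
from html.parser import HTMLParser


class TargetExtractor(HTMLParser):
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0    # nesting depth while inside the target element
        self.parts = []   # collected pieces of the target fragment

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
            self.parts.append(self.get_starttag_text())
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1
            self.parts.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.depth:
            self.parts.append(f"</{tag}>")
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)


def extract_target(html, target_id):
    """Return the HTML fragment for the element with the given id."""
    parser = TargetExtractor(target_id)
    parser.feed(html)
    return "".join(parser.parts)
```

So a full-page response containing `<div id="comments">…</div>` yields just that div, which is what the browser swaps into the page.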
In the original article I used django-template-partials (since merged into Django itself) to mark sections of a template as named partials using {% partialdef %}. Combined with a custom AlpineTemplateResponse the view could automatically return just the targeted partial when the request came from Alpine AJAX.
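The core of that response class boils down to a template-name decision. Here is a minimal sketch of the idea, assuming django-template-partials' `"template.html#partial"` naming convention; `select_template_name` and the `alpine_target` value are hypothetical names, not the article's actual implementation:

```python
def select_template_name(template_name, alpine_target=None):
    """Pick the template (or partial) to render.

    Hypothetical helper illustrating the idea behind the article's
    AlpineTemplateResponse: when an Alpine AJAX request names a target,
    render only that partial via django-template-partials'
    "template.html#partial" syntax; otherwise render the full template.
    """
    if alpine_target:
        return f"{template_name}#{alpine_target}"
    return template_name
```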
Where I began: template partials
Let's say you have an article page with the article body parsed from Markdown, a like button, and a comment section. The template looks something like this:
article.html:

```html
{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article_html|safe }}

    {% partialdef like_form inline %}
      <form method="post" id="like_form" x-target="like_form">
        {% csrf_token %}
        <button type="submit" name="toggle-like">
          {% if article.is_liked %}Unlike{% else %}Like{% endif %}
        </button>
      </form>
    {% endpartialdef %}

    {% partialdef comments inline %}
      <div id="comments">
        {% for comment in article.comments.all %}
          <div>{{ comment.user }}: {{ comment.text }}</div>
        {% endfor %}
        <form method="post" x-target="comments">
          {% csrf_token %}
          {{ comment_form }}
          <button type="submit" name="add-comment">Submit</button>
        </form>
      </div>
    {% endpartialdef %}
  </article>
{% endblock %}
```
Every form action POSTs to the same article view, which handles all the actions in one big post method:
views.py:

```python
class ArticleView(View):
    def get_context(self, request, pk):
        article = get_object_or_404(
            Article.objects.prefetch_related("comments")
            .annotate_is_liked(request.user),
            pk=pk,
        )
        return {
            "article": article,
            "article_html": markdown(article.body),
            "comment_form": CommentForm(),
        }

    def post(self, request, pk):
        context = self.get_context(request, pk)
        article = context["article"]

        if "toggle-like" in request.POST:
            if article.is_liked:
                article.unlike(request.user)
                article.is_liked = False
            else:
                article.like(request.user)
                article.is_liked = True
            return AlpineTemplateResponse(request, "article.html", context)

        if "add-comment" in request.POST:
            form = CommentForm(request.POST)
            if form.is_valid():
                Comment.objects.create(article=article, user=request.user, ...)
            return AlpineTemplateResponse(request, "article.html", context)

        return redirect(article)

    def get(self, request, pk):
        context = self.get_context(request, pk)
        return AlpineTemplateResponse(request, "article.html", context)
```
The AlpineTemplateResponse from the original article takes care of returning just the targeted partial when the request comes from Alpine AJAX. It works. I thought I was being smart to prevent template duplication this way, but there are two problems:
1. The view does too much work. Every POST action calls `get_context`, which fetches everything: the article, the parsed Markdown body, the comments, the like state, the comment form. When the user clicks "Like", we do all this work we'll never use in the partial template. The template partial means the response is small, but the server-side work is exactly the same as rendering the full page.
2. The template is a mess. Those `{% partialdef %}` blocks scattered throughout the template make it noisy and hard to read. In a small example it's fine, but in a real template with 200+ lines, it gets ugly fast.
When doubt set in: switching to Jinja2
To be honest though, the real killer of my motivation while working on this project has been the Django Template Language. I'm sorry, but I just hate it. I have since 2009, and I still do. The syntax is bad enough, but then you have to constantly fight its limitations. The fact I can't simply call a function is so incredibly annoying, and is causing way more boilerplate with tons of custom template tags and filters.
So, switch to Jinja2, right? Except that template partials aren't supported in combination with Jinja2. No more {% partialdef %}. Which means returning full page responses for AJAX requests, which isn't exactly ideal.
I did it anyway. I ripped out all the {% partialdef %} tags, migrated my templates to Jinja2, and my views just returned the full template for AJAX requests. Alpine AJAX is smart enough to extract the elements it needs by their IDs, and throws away the rest.
This was simpler and I was much happier writing Jinja2 templates. But the wastefulness got worse. Before, the server at least returned a small response. Now it rendered the entire page and sent all of it over the wire, just for the browser to use a tiny piece of it.
It was at this moment that I seriously thought about throwing the entire frontend away and rebuilding it in SvelteKit, with Django REST Framework returning JSON responses. But that seemed like a pretty big waste of effort, so instead I took a deep breath and thought about what I wanted:
- Jinja2 templates. Non-negotiable.
- Small, fast AJAX responses. No rendering the full page for a like toggle.
- No template duplication between the full page and the AJAX response.
- Simple views that only do the work they need to do.
Template partials gave me #2 and #3, but not #1 or #4. Switching to Jinja2 and returning the full template for AJAX requests gave me #1 and #3, but not #2 or #4. I needed a different approach.
Where I ended up: separate views with template includes
The answer turned out to be straightforward, and the one I initially discarded as "too much boilerplate": instead of one monolithic view handling all POST actions, split each action into its own view with its own URL. And instead of {% partialdef %}, use plain {% include %} tags to extract reusable template fragments.
Let me show you. Here's the simplified article template:
article.html:

```html
{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article.body }}
    {% include "articles/_like_form.html" %}
    {% include "articles/_comments.html" %}
  </article>
{% endblock %}
```
Clean and readable. Each include is a self-contained fragment. And here's the like form:
_like_form.html:

```html
<form method="post"
      action="{{ url('toggle-like', args=[article.id]) }}"
      id="like_form"
      x-target="like_form">
  {{ csrf_input }}
  {% if article.is_liked %}
    <button type="submit">Unlike</button>
  {% else %}
    <button type="submit">Like</button>
  {% endif %}
</form>
```
And finally, the view:
views.py:

```python
class ToggleLikeView(LoginRequiredMixin, View):
    def post(self, request, pk):
        article = get_object_or_404(
            Article.objects.annotate_is_liked(request.user),
            pk=pk,
        )

        if article.is_liked:
            article.unlike(request.user)
            article.is_liked = False
            article.like_count -= 1
        else:
            article.like(request.user)
            article.is_liked = True
            article.like_count += 1

        if is_alpine(request):
            return TemplateResponse(
                request,
                "articles/_like_form.html",
                {"article": article},
            )

        # For non-Alpine requests, we just redirect back
        return redirect(article)
```
No comment queries. No form building. No Markdown parsing. Just the like state.
The is_alpine check provides a redirect fallback for non-JavaScript POST requests, keeping things progressive. And the ArticleView itself becomes GET-only. No more branching on POST keys. No get_context method that fetches everything for every action. Each view does one thing.
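The `is_alpine` helper itself can be as small as a header check. A sketch, assuming Alpine AJAX marks its fetch requests with an `X-Alpine-Request` header; verify the exact header against the Alpine AJAX version you use:

```python
def is_alpine(request):
    """Return True for requests made by Alpine AJAX.

    Assumption: Alpine AJAX sets an X-Alpine-Request header on its
    fetch requests; check the Alpine AJAX documentation for your version.
    """
    return request.headers.get("X-Alpine-Request") == "true"
```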
The trade-offs
More templates. For the article page, I went from one template to several: the include fragments (_like_form.html, _comments.html) that are shared between the full page and the AJAX responses. When an action needs to update multiple elements on the page, you also end up with small response templates that combine the right includes. For example, if submitting a comment should update both the comment list and a comment count elsewhere on the page:
_add_comment_response.html:

```html
{% include "articles/_comments.html" %}
{% include "articles/_engagement_counts.html" %}
```
Trivial, but still a file you have to create and name.
More views and URL routes. Each action gets its own view class and its own path() entry. For a page with likes, comments, and subscriptions, that's three or four extra views.
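Those extra entries look roughly like this sketch of a `urls.py`; the paths are illustrative, and `AddCommentView` is a hypothetical name for the comment action (only `toggle-like` appears in the article's own template):

```python
# urls.py - one route per action (paths and extra view names are illustrative)
from django.urls import path

from . import views

urlpatterns = [
    path("articles/<int:pk>/", views.ArticleView.as_view(), name="article-detail"),
    path("articles/<int:pk>/like/", views.ToggleLikeView.as_view(), name="toggle-like"),
    path("articles/<int:pk>/comment/", views.AddCommentView.as_view(), name="add-comment"),
]
```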
But here's what I got in return:
Actual performance improvement. Not just smaller responses, but less work on the server. Each view only queries what it needs.
Jinja2. I'm using Jinja2 instead of the Django Template Language. I can call functions, I have proper expressions, and I don't need custom template tags for basic things. This alone was worth the switch.
Readable templates. The main article.html is short and shows the page structure at a glance. Each fragment is self-contained. No {% partialdef %} blocks scattered everywhere.
Simple views. Each view does exactly one thing. Easy to understand, easy to test, easy to optimize.
Conclusion
I went through three stages: template partials with Django Template Language, full-page responses with Jinja2, and finally separated views with template includes. Each step solved a real problem with the previous approach.
The pattern I've landed on requires more files and views than I'd like, but each is simple and does one thing.
My overall feelings on Django + Alpine AJAX have also changed. I still believe there are benefits to using a simplified tech stack and using hypermedia as the engine of state. Just return HTML instead of returning JSON to a JavaScript framework which then has to turn it into HTML. Conceptually it just makes sense to me.
But the dream was to build a plain old Django application using simple views and simple templates, using old-fashioned MPA server-rendered pages. Sprinkle in a few Alpine AJAX attributes and magically your site gets SPA-like usability. And it simply hasn't played out that way for me. Yes, you could do that, if you're fine with the wastefulness of returning full pages as a response to AJAX requests. But when you want to do it better than that, you end up with more boilerplate to make it possible to return small bits of HTML.
And this isn't really about Alpine AJAX specifically; htmx would lead to the exact same place. The fundamental tension is in the HTML-over-the-wire approach itself: the server has to know which fragments of HTML to return, and that means structuring your views and templates around it. You trade the complexity of a JavaScript frontend for a different kind of complexity on the server.
Progressive enhancement adds to that complexity. Every view needs an is_alpine check with a redirect fallback, every form needs to work both as a regular POST and as an AJAX submit. If I dropped progressive enhancement and just required JavaScript, those redirect fallbacks and the branching that comes with them would disappear. The views would be simpler. But I think progressive enhancement is important enough to keep in place.
Would I use Alpine AJAX (or htmx) again? Honestly: probably not. I have a lot more fun when building frontends with SvelteKit. Building composable and reusable UI components is so much more natural there, and the performance is simply better (once the initial JS bundle has been downloaded and parsed). But am I going to throw away my current project's code and redo it all? No, I am not. Django with Alpine AJAX is a nice change of scenery, it's a nice playground I don't usually get to play in. I think I ended up with a good compromise, and hey: I still don't have to build and maintain a separate API, API docs, and frontend.
25 Mar 2026 3:16pm GMT
