07 Apr 2026
Django community aggregator: Community blog posts
I've Been the Sole Developer of a Healthcare Membership Platform for 6+ Years. Here's What It Looks Like.
A few years back, a healthcare professional association reached out to me. They regulate and support thousands of practitioners across their region: licensing, insurance, events, the whole deal. Their website couldn't keep up.
What they needed was a platform that could handle member applications, renewals, payments, event registrations, an …
07 Apr 2026 7:21am GMT
06 Apr 2026
I patched GSD, and why you should patch it too
GSD (Get Shit Done) is one of the best things that's happened to my development workflow. If you haven't heard of it, it's a meta-prompting and context engineering system for Claude Code (and OpenCode). It breaks your work into milestones and phases, spawns fresh subagents with clean contexts for each task, and solves the context rot problem that kills quality in long AI sessions. I wrote about my full setup on my AI toolbox page.
GSD is great out of the box. But I wanted it to be mine.
I've been using GSD daily for weeks now, and over time I kept bumping into the same friction points: the plan review was too shallow, the verification step was too manual, and the UI audit felt incomplete. So I did what any developer would do. I patched it.
This post is about the three patches I made, how they work, why I made them, and how you can build your own. More importantly, it's about why you should be patching your tools. Not just GSD; any tool you use daily.
The philosophy: own your tools
Here's the thing about AI tools right now: we're at a point in time where you can design your own toolbox exactly the way you want it. Not just pick tools, but shape them. Customize them. Make them fit your brain, your workflow, your team.
GSD is open source. Its workflows are markdown files. Its commands are markdown files. Everything is text. That means you can read them, understand them, and rewrite the parts that don't work for you. You don't need to fork the whole project or wait for an upstream PR to get merged. You just… change the files.
The tradeoff is that GSD updates will overwrite your changes. I'll show you how I handle that. But first, let me show you what I changed and why.
What I patched
Three patches, each solving a specific pain point:
| Patch | Problem | Solution |
|---|---|---|
| Multi-model adversarial review | Stock review is shallow (5-point checklist, single model) | 6 independent AI models, 8-dimension adversarial framework |
| Auto-verify (`--auto` flag) | Verification is fully manual (test every item by hand) | Automated playwright + curl checks, human only for subjective items |
| Cross-AI UI review | Single auditor for inherently subjective UI evaluation | 6 models independently scoring all 6 UI pillars |
Let's go through each one.
Let's go through each one.
Patch 1: Multi-model adversarial review
This was the first patch and the one that started it all.
The stock GSD review runs a single model reviewing its own plans through a 5-point checklist. It's… fine. But after using it for a while, I noticed the reviews were surface-level. They'd catch obvious things (missing tests, unclear task descriptions) but they wouldn't catch architectural blind spots, failure modes, or the kind of problems that bite you in production two weeks later.
So I replaced it with an 8-dimension adversarial review framework, executed by 6 independent AI models in parallel.
The 6 reviewers
All reviewers get the exact same prompt with the project context, phase plans, and requirements. They review independently and don't see each other's output:
- GPT 5.4, via `opencode run -m lazer/openai/gpt-5.4`
- Gemini 3.1 Pro, via `opencode run -m lazer/gemini/gemini-3.1-pro-preview`
- MiniMax M2.5, via `opencode run -m lazer/deepinfra/MiniMaxAI/MiniMax-M2.5`
- Kimi K2.5, via `opencode run -m lazer/deepinfra/moonshotai/Kimi-K2.5-Turbo`
- GLM-5, via `opencode run -m lazer/deepinfra/zai-org/GLM-5`
- Claude Opus, via `claude -p --model opus`
These models are all available through Lazer's LiteLLM proxy via OpenCode, except Claude, which runs through its own CLI. The key insight here is that `opencode run -m <model>` lets you invoke any model as a one-shot command, which makes it perfect for this kind of parallel execution.
The 8 review dimensions
Instead of a 5-point checklist, each reviewer evaluates the plan across 8 dimensions:
- Goal Alignment: Does it actually solve the stated problem, or does it drift?
- Architecture & Design Coherence: Does it fit the existing system, or fight it?
- Failure Mode Analysis: What happens when things go wrong?
- Dependency & Ordering Risks: Are there hidden sequencing constraints?
- Security & Data Integrity: Are new attack surfaces introduced?
- Testing & Verification Strategy: Will the tests actually catch regressions?
- Operational Readiness: How will you know if it's broken in production?
- Missing Pieces: What implicit assumptions need to be explicit?
Each dimension gets a verdict: PASS, FLAG (minor concern), or BLOCK (must fix before execution). With evidence and actionable recommendations, not vague advice.
The review prompt is deliberately adversarial. It tells the reviewer:
You are a senior staff engineer conducting a deep adversarial review. Do not be polite; be precise. Your job is to find what will break, what was forgotten, and what will cause regret in 6 months. Assume the plan authors are competent but blind-spotted.
How it runs
When you run /gsd:review (or /gsd-review in OpenCode), the workflow:
- Detects which CLIs are available (`opencode` and `claude`)
- Gathers the phase context (PROJECT.md, ROADMAP.md, PLAN.md files, REQUIREMENTS.md, etc.)
- Builds a structured review prompt and writes it to a temp file
- Invokes all 6 reviewers in parallel; each one gets its own bash tool call
```bash
# All run simultaneously
opencode run -m lazer/openai/gpt-5.4 "$(cat /tmp/gsd-review-prompt-4.md)" > /tmp/gsd-review-gpt-5.4-4.md
opencode run -m lazer/gemini/gemini-3.1-pro-preview "$(cat /tmp/gsd-review-prompt-4.md)" > /tmp/gsd-review-gemini-pro-4.md
opencode run -m lazer/deepinfra/MiniMaxAI/MiniMax-M2.5 "$(cat /tmp/gsd-review-prompt-4.md)" > /tmp/gsd-review-minimax-4.md
opencode run -m lazer/deepinfra/moonshotai/Kimi-K2.5-Turbo "$(cat /tmp/gsd-review-prompt-4.md)" > /tmp/gsd-review-kimi-4.md
opencode run -m lazer/deepinfra/zai-org/GLM-5 "$(cat /tmp/gsd-review-prompt-4.md)" > /tmp/gsd-review-glm-5-4.md
claude -p --model opus "$(cat /tmp/gsd-review-prompt-4.md)" > /tmp/gsd-review-claude-4.md
```
Since they run in parallel, total review time is about 1-2 minutes regardless of how many reviewers you have. The original version ran them sequentially, which took ~6 minutes. More on that bug below.
- Combines all reviews into a `REVIEWS.md` file with a consensus summary
The consensus summary is the real gem. It highlights blockers (issues raised by 2+ reviewers), agreed concerns, divergent views, and (most importantly) unique insights where a single reviewer caught something all others missed. Those blind spots are exactly why multi-model review exists.
I don't have a single dramatic "GLM-5 saved the day" story, but the pattern is clear across multiple uses: every review has at least one or two unique insights from a single model. Different models have different biases, different training data, and different ways of reasoning about code. When 5 out of 6 reviewers say PASS and one says BLOCK, that's worth investigating.
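The consensus step boils down to counting agreement across the per-model review files. Here's a minimal shell sketch of that idea; the file names and the `Dimension: VERDICT` line format are hypothetical stand-ins, not the real REVIEWS.md layout:

```shell
#!/usr/bin/env bash
# Sketch: flag any dimension marked BLOCK by two or more reviewers.
# File names and the "Dimension: VERDICT" format are illustrative only.
mkdir -p /tmp/gsd-demo
printf 'Security: BLOCK\nTesting: PASS\n' > /tmp/gsd-demo/review-a.md
printf 'Security: BLOCK\nTesting: FLAG\n' > /tmp/gsd-demo/review-b.md
printf 'Security: PASS\nTesting: PASS\n'  > /tmp/gsd-demo/review-c.md

for dim in Security Testing; do
  # Count how many review files contain a BLOCK verdict for this dimension
  n=$(grep -l "^${dim}: BLOCK" /tmp/gsd-demo/review-*.md | wc -l)
  if [ "$n" -ge 2 ]; then
    echo "CONSENSUS BLOCKER: ${dim} (${n} of 3 reviewers)"
  fi
done
```

Running this prints a consensus blocker for Security (2 of 3 reviewers) and stays silent on Testing, which is exactly the 2+ agreement rule described above.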
After the review, you feed it back into planning:
/gsd:plan-phase 4 --reviews
The planner reads the REVIEWS.md and addresses the concerns. A plan that survives adversarial review from 6 independent AI systems is much more robust than one reviewed by a single model.
Patch 2: Auto-verify with --auto
The stock verify-work workflow is fully manual: you test every single item by hand. The workflow presents each test, you check it, and you report pass or fail. For a phase with 10-15 tests, that's a lot of time spent clicking around and typing "yes" over and over for things that could obviously be automated.
My patch adds an --auto flag. Without it, the workflow is 100% identical to the original. With it, the workflow tries to automate the mechanical checks before falling through to the interactive loop.
Run it like this:
/gsd:verify-work 4 --auto
What --auto does
1. Checks for `playwright-cli`. If it's not installed, warns you and offers to continue without it (UI tests become manual). If it is, you get automated browser checks.
2. Auto-detects the base URL. Scans `.env`, `PROJECT.md`, `docker-compose.yml`, and `package.json` for common patterns. Presents options so you can confirm or change.
3. Pings the URL. Makes sure the app is actually running before trying to test anything. If it's not reachable, offers retry/skip/change URL.
4. Checks for auth credentials. Looks for test tokens and credentials in `.env` files, test fixtures, and seed scripts. For API tests, it'll ask for a bearer token or API key if it can't find one. For UI tests, it'll ask for login credentials or use a dev bypass if one exists.
5. Classifies each test. Routes tests to the right tool:

| Test references | Tool |
|---|---|
| Pages, routes, visual appearance, user flows | playwright-cli |
| API endpoints, response codes, data shapes | curl |
| Form submission → API response | playwright-cli (covers both) |
| Performance feel, subjective UX | stays interactive |

6. Runs playwright smoke checks. For UI tests: navigates to the page, checks it loads without console errors, verifies key elements are visible, does basic click navigation. Runs `playwright-cli show` so you can watch.
7. Runs curl checks. For API tests: endpoint reachability, response shape verification, CRUD with cleanup (create a resource, verify it, update it, delete it, clean up), and error handling (invalid payload → 400, missing ID → 404).
8. Reports and continues. Shows you what passed, what failed, and what needs manual testing. Then drops into the normal interactive loop for remaining tests only.
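The classification step is, at its core, keyword routing. This is a hypothetical sketch of that idea, not GSD's actual logic; the keyword patterns are mine:

```shell
#!/usr/bin/env bash
# Hypothetical keyword router: map a test description to a verification tool.
classify() {
  case "$1" in
    *endpoint*|*"response code"*|*API*) echo "curl" ;;
    *page*|*route*|*visual*|*flow*)     echo "playwright-cli" ;;
    *)                                  echo "interactive" ;;
  esac
}

classify "POST /members endpoint returns 201"   # -> curl
classify "login page renders without errors"    # -> playwright-cli
classify "scrolling feels smooth"               # -> interactive
```

Anything the router can't confidently match falls through to `interactive`, which mirrors the design above: automation handles the mechanical checks, humans keep the subjective ones.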
Confidence-based failure handling
Not all automated failures are created equal. The patch distinguishes between high-confidence failures and low-confidence ones:
- High confidence (wrong status code, missing element, 500 error) → marked as issues automatically
- Low confidence (timeouts, flaky selectors, intermittent network issues) → stays pending for manual testing
The result: I typically only need to manually verify 2-3 subjective items instead of 10-15 total tests. It dramatically reduces UAT time while keeping the human in the loop for things that actually need human judgment.
Patch 3: Cross-AI UI review
Same concept as the adversarial plan review, but for frontend code.
GSD has a built-in UI auditor that runs a 6-pillar visual audit: Copywriting, Visuals, Color, Typography, Spacing, and Experience Design. Each pillar gets scored 1-4. It's useful, but it's one model's opinion about something that's inherently subjective.
My patch adds a step after the primary audit: the same 6 external models independently score all 6 pillars and challenge the primary auditor's findings. The prompt explicitly tells them:
Do not be deferential to the primary review. If you think a score is wrong, say so. If you think a critical issue was missed, flag it. Different eyes catch different things.
The result is a score comparison table appended to UI-REVIEW.md:
| Pillar | Primary | GPT-5.4 | Gemini | MiniMax | Kimi | GLM-5 | Claude | Avg |
|---|---|---|---|---|---|---|---|---|
| Copywriting | 3/4 | 3/4 | 2/4 | 3/4 | 3/4 | 3/4 | 3/4 | 2.8 |
| Visuals | 4/4 | 3/4 | 3/4 | 4/4 | 3/4 | 4/4 | 3/4 | 3.3 |
| ... | | | | | | | | |
Plus sections for: issues the primary auditor missed (caught by 2+ cross-AI reviewers), score disagreements worth investigating, and validated findings with high confidence.
The workflow then routes you based on severity. If there are many issues (5+ fixes, any pillar ≤ 2/4, or cross-AI average below 16/24), it tells you to fix before moving on and suggests the right GSD command. If things look good, it suggests proceeding to the next phase.
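The routing rule itself is just a three-way threshold check. A minimal sketch, assuming integer inputs for simplicity (the real cross-AI average is fractional) and hypothetical variable names:

```shell
#!/usr/bin/env bash
# Hypothetical severity gate: block progress if any threshold trips.
#   fixes       = number of suggested fixes
#   min_pillar  = lowest pillar score (out of 4)
#   cross_total = cross-AI total score (out of 24)
should_fix_first() {
  local fixes="$1" min_pillar="$2" cross_total="$3"
  [ "$fixes" -ge 5 ] || [ "$min_pillar" -le 2 ] || [ "$cross_total" -lt 16 ]
}

if should_fix_first 3 2 17; then
  echo "FIX FIRST: address UI issues before the next phase"
else
  echo "PROCEED: move on to the next phase"
fi
```

With these example numbers the gate trips on the pillar score (a 2/4), so even a plan with few fixes and a decent total gets held back; any single red flag is enough.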
The patching infrastructure
Now for the part that makes all of this sustainable: how do the patches survive GSD updates?
GSD's workflows live in ~/.claude/get-shit-done/workflows/ (for Claude Code) and ~/.config/opencode/get-shit-done/workflows/ (for OpenCode). When you run /gsd:update, those directories get wiped and replaced with the latest version. Your patches are gone.
My solution is a canonical storage system. All my patch source files live in ~/.config/gsd-patches/, versioned in my dotfiles. After any GSD update, I run one command to reapply everything.
Directory structure
```text
~/.config/gsd-patches/
├── claude/
│   ├── commands/
│   │   ├── review.md          # /gsd:review command definition
│   │   └── verify-work.md     # /gsd:verify-work command definition
│   └── workflows/
│       ├── review.md          # adversarial review workflow
│       ├── ui-review.md       # cross-AI UI review workflow
│       └── verify-work.md     # auto-verify workflow
├── opencode/
│   ├── command/
│   │   ├── gsd-review.md
│   │   └── gsd-verify-work.md
│   └── workflows/
│       ├── review.md
│       ├── ui-review.md
│       └── verify-work.md
├── bin/
│   ├── sync                   # copies patches to runtime locations
│   └── check                  # verifies drift and missing files
├── gsd-customizations.md      # changelog of what changed and why
└── README.md
```
The Claude and OpenCode versions are nearly identical; the only differences are file paths (~/.claude/ vs ~/.config/opencode/) and command syntax (/gsd:review vs /gsd-review).
The sync script
This is the entire sync script. It's embarrassingly simple:
```bash
#!/usr/bin/env bash
set -euo pipefail

MODE="${1:-all}"
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"

copy_file() {
  local src="$1"
  local dst="$2"
  mkdir -p "$(dirname "$dst")"
  cp -f "$src" "$dst"
  printf 'SYNC %s -> %s\n' "$src" "$dst"
}

sync_claude() {
  copy_file "$ROOT/claude/workflows/review.md" "$HOME/.claude/get-shit-done/workflows/review.md"
  copy_file "$ROOT/claude/workflows/ui-review.md" "$HOME/.claude/get-shit-done/workflows/ui-review.md"
  copy_file "$ROOT/claude/workflows/verify-work.md" "$HOME/.claude/get-shit-done/workflows/verify-work.md"
  copy_file "$ROOT/claude/commands/review.md" "$HOME/.claude/commands/gsd/review.md"
  copy_file "$ROOT/claude/commands/verify-work.md" "$HOME/.claude/commands/gsd/verify-work.md"
}

sync_opencode() {
  copy_file "$ROOT/opencode/workflows/review.md" "$HOME/.config/opencode/get-shit-done/workflows/review.md"
  copy_file "$ROOT/opencode/workflows/ui-review.md" "$HOME/.config/opencode/get-shit-done/workflows/ui-review.md"
  copy_file "$ROOT/opencode/workflows/verify-work.md" "$HOME/.config/opencode/get-shit-done/workflows/verify-work.md"
  copy_file "$ROOT/opencode/command/gsd-review.md" "$HOME/.config/opencode/command/gsd-review.md"
  copy_file "$ROOT/opencode/command/gsd-verify-work.md" "$HOME/.config/opencode/command/gsd-verify-work.md"
}

case "$MODE" in
  all)
    sync_claude
    sync_opencode
    ;;
  claude)
    sync_claude
    ;;
  opencode)
    sync_opencode
    ;;
  *)
    printf 'Usage: %s [all|claude|opencode]\n' "$0" >&2
    exit 2
    ;;
esac

printf 'Done.\n'
```
After a /gsd:update:
~/.config/gsd-patches/bin/sync all
That's it. All patches reapplied in under a second.
The check script
I also have a check script that verifies whether my runtime files match the canonical source. It uses cmp -s to do a byte-for-byte comparison and reports drift:
```bash
~/.config/gsd-patches/bin/check all
# Output:
VERSION claude 1.30.0
VERSION opencode 1.30.0
OK /home/roger/.claude/get-shit-done/workflows/review.md
OK /home/roger/.claude/get-shit-done/workflows/ui-review.md
OK /home/roger/.claude/get-shit-done/workflows/verify-work.md
OK /home/roger/.claude/commands/gsd/review.md
OK /home/roger/.claude/commands/gsd/verify-work.md
OK /home/roger/.config/opencode/get-shit-done/workflows/review.md
...
Status: clean
```
If anything drifted (maybe I edited a runtime file directly during debugging), it shows DIFF and exits with code 1. Keeps me honest.
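The core of a check script like this is one `cmp -s` per file pair. A self-contained sketch of the pattern, with temp files standing in for the real canonical and runtime paths:

```shell
#!/usr/bin/env bash
# Sketch: byte-for-byte drift detection between canonical and runtime copies.
# mktemp files stand in for ~/.config/gsd-patches/... and ~/.claude/... paths.
canon="$(mktemp)"
runtime="$(mktemp)"
echo "patched workflow v1" > "$canon"
echo "patched workflow v1" > "$runtime"

status=0
if cmp -s "$canon" "$runtime"; then
  echo "OK   $runtime"
else
  echo "DIFF $runtime"
  status=1   # nonzero exit signals drift, like the real check script
fi
echo "Status: $([ "$status" -eq 0 ] && echo clean || echo drift)"
```

Edit either file and rerun to see the `DIFF` branch and the nonzero status; that exit code is what makes the check usable in scripts and CI.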
The changelog
I maintain a gsd-customizations.md file that tracks every patch: what changed, why, which GSD version it was patched against, and which files were modified. This is crucial. When a GSD update changes the workflow format or adds new features, I need to know exactly what I changed so I can adapt my patches to the new version.
Here's a taste of what it looks like:
```markdown
## 2026-03-30 - Fix opencode hangs (remove 2>/dev/null), run reviewers in parallel

**GSD version:** 1.30.0
**Files modified:** `get-shit-done/workflows/review.md`, `get-shit-done/workflows/ui-review.md`

### What changed

- Removed `2>/dev/null` from all `opencode run` and `claude -p` invocation commands
- Changed reviewer invocation from **sequential** to **parallel**

### Why

Suppressing stderr with `2>/dev/null` caused `opencode run` to hang indefinitely;
opencode needs stderr for progress output and/or terminal detection. Removing the
redirect fixed the hangs immediately.
```
Honesty moment
I should mention: I haven't actually had a GSD update wipe my patches yet. I haven't updated GSD since I started patching. So the sync/check system is built and tested, but hasn't been battle-tested by a real update cycle. I'm confident it'll work (it's just cp commands) but I want to be upfront about it. When it does happen, I'll update this post.
Bugs I found along the way
Patching GSD meant reading the stock workflows carefully, and that led me to find (and fix) bugs that existed in the original:
2>/dev/null on opencode run causes hangs. The stock workflow suppressed stderr on external CLI calls. Turns out, opencode run needs stderr for progress output and/or terminal detection. Suppressing it causes the process to hang indefinitely. Removing 2>/dev/null fixed it immediately. Note: stderr suppression on other commands (like ls, node, git) is fine; it's only the interactive CLI tools that break.
claude -p doesn't support --no-input. The stock workflow passed --no-input to claude -p, which isn't a valid flag. It caused the Claude reviewer to fail silently (exit code 1, empty output). Just removing the flag fixed it.
Sequential execution was unnecessary. The stock workflow ran reviewers one at a time. Since each reviewer is an independent process with no shared state, there's no reason they can't run in parallel. Switching to parallel execution (separate bash tool calls in a single message) cut review time from ~6 minutes to ~1-2 minutes.
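You can convince yourself of the win with a toy experiment: three one-second "reviewers" backgrounded with `&` finish in about one second of wall time, not three.

```shell
#!/usr/bin/env bash
# Toy demonstration of parallel fan-out: sleep stands in for a reviewer call.
start=$(date +%s)
sleep 1 &
sleep 1 &
sleep 1 &
wait   # block until all background jobs finish
elapsed=$(( $(date +%s) - start ))
echo "elapsed: ${elapsed}s"   # roughly 1s instead of the ~3s sequential cost
```

The same shape applies to the real review: total time is dominated by the slowest reviewer, not the sum of all of them.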
These fixes are now part of my patches and would benefit anyone patching GSD.
How to make your own patches
If you want to patch GSD yourself, here's how to start:
1. Find the file you want to change. GSD's workflows live in ~/.claude/get-shit-done/workflows/ (Claude Code) or ~/.config/opencode/get-shit-done/workflows/ (OpenCode). Commands are in ~/.claude/commands/gsd/ or ~/.config/opencode/command/. They're all markdown files. Read them.
2. Make your change in a canonical location. Don't edit the runtime files directly; they'll get wiped on update. Create a directory (I use ~/.config/gsd-patches/) and keep your modified versions there.
3. Write a sync script. It doesn't need to be fancy. Mine is just a series of cp commands. The point is that reapplying patches should be one command, not a manual checklist.
4. Write a check script. Optional but useful. Being able to run check all and see if your runtime matches your canonical source saves debugging time.
5. Keep a changelog. Track what you changed, why, and against which GSD version. Future you will thank present you.
6. Don't forget the command files. I missed this on my first patch. GSD has two sets of files: workflows (get-shit-done/workflows/) and commands (commands/gsd/). If you patch a workflow, check if the corresponding command file needs updating too. They're separate files that reference each other.
7. Version control it. Put your patches directory in your dotfiles. Mine are at git.rogs.me/rogs/dotfiles under .config/gsd-patches/. This means if I set up a new machine, my patches come with me.
Ideas for your own patches
You don't have to copy my patches. The beauty of this approach is that you can shape GSD to fit your workflow. Here are some ideas:
- Different reviewer models. Maybe you have access to models I don't, or you want fewer reviewers for faster reviews. Swap the model strings in `review.md`.
- Custom review dimensions. The 8 dimensions I use are tuned for backend work. If you're doing mobile development, you might want dimensions for offline behavior, battery impact, or app store compliance.
- Different auto-verify tools. I use `playwright-cli` and `curl`. If your stack uses Cypress, Selenium, or `httpie`, adapt the classify/execute steps.
- Notification integration. Add a step that pings your phone (via ntfy, Pushover, Telegram) when a review is complete.
- Custom UAT templates. The verify-work workflow extracts tests from SUMMARY.md files. You could add a step that also pulls from your team's QA checklist or acceptance criteria document.
Show me the code
All my patches are public. If you want to see the exact files behind everything described in this post:
- Patch files: git.rogs.me/rogs/dotfiles under `.config/gsd-patches/`
- Full AI workflow: rogs.me/ai
- GSD project: github.com/gsd-build/get-shit-done
Feel free to steal whatever is useful to you. That's what dotfiles are for.
See you in the next one!
06 Apr 2026 5:00am GMT
04 Apr 2026
Anthropic is pushing away its paying customers
I need to vent.
I want to start by saying this is my opinion and doesn't reflect the views of my employers or anyone else.
I've been paying $100/month for Claude Max because Claude is, without question, the best model for programming. I've built my entire AI workflow around it. I've written blog posts about it. I've recommended it to colleagues, friends, and strangers on the internet. I've been a loyal, paying customer.
And Anthropic keeps making it harder to stay.
The third-party ban
On the night of April 3, 2026, Anthropic sent an email to subscribers announcing that third-party harnesses like OpenClaw can no longer use Claude Max subscription limits. Starting April 4 at 12pm PT. That's less than 24 hours of notice.
Let that sink in. Less than 24 hours to rip out and replace the model powering my personal AI assistant, my Emacs tooling, and potentially other parts of my workflow.
My OpenClaw setup was running Opus 4.6 for personal tasks: managing my calendar, maintaining my open source projects, doing research, all through Telegram. It was perfect. Now if I want to keep using Claude with OpenClaw, I need to pay extra on top of my $100/month subscription through their new "extra usage" pay-as-you-go option.
This also killed CLIProxyAPI, which I wrote about two months ago. That tool let me use my Max subscription with Emacs packages like forge-llm and magit-gptcommit. I wrote an entire blog post about it, shared my config, helped people set it up. Dead now. Two months.
And it's not just OpenClaw and CLIProxyAPI. GSD 2, the next generation of the tool I use for all my heavy development work, is built on the Pi SDK, the same foundation OpenClaw uses. I'm over 90% sure it's also affected. That's the tool I've been watching closely and testing on weekends for my personal projects. If GSD 2 can't use my subscription, that's yet another thing Anthropic broke.
Their email said these tools "put an outsized strain on our systems" and that they need to "prioritize customers using core products". I'm paying $100/month. I am a customer. But apparently I'm not using the product the "right way."
The notice was insulting
We'd been hearing rumblings for a while. Rumors that Anthropic didn't like users accessing Claude through third-party tools. Reports on Reddit of people getting banned for using OpenClaw too aggressively. But nothing official.
Then, with less than 24 hours of notice, they made it policy.
Yes, they offered a one-time credit equal to your monthly subscription price. Yes, they're offering discounts on pre-purchased usage bundles. Yes, they're offering refunds. But none of that changes the fact that they gave paying customers less than a day to restructure their workflows.
A consumer-forward company would have given weeks of notice, not hours. A consumer-forward company would have opened a dialogue with the community before dropping the hammer. Instead, we got an email at night and a deadline the next morning.
The usage limits are a mess
This isn't even the first time Anthropic has frustrated me recently. The usage limits on Claude Code have been a disaster since late March.
Sessions that used to last hours started burning through in under 90 minutes. I'd start in the morning and hit the limit in about 45 minutes doing the same kind of work that used to last all morning. This week, I hit 50% of my weekly usage by Tuesday. My usage resets on Friday. That's terrifying when you depend on the tool for your daily work.
Anthropic acknowledged the issue. An engineer confirmed on X that limits drain faster during peak hours to "manage growing demand." A GitHub issue has been accumulating reports. Reddit threads are flooded with complaints. Someone reverse-engineered the Claude Code binary and found bugs that break prompt caching, silently inflating costs by 10-20x.
And through all of this, Anthropic has been mostly silent. I see tweets from employees saying they're working on it, but I don't see results. Meanwhile, their leadership seems more focused on shipping new features than making sure what they already have actually works. They keep shipping and shipping and not fixing what's broken.
For comparison, I've been using OpenAI's models through OpenCode as my fallback, and I have yet to hit a 5-hour usage limit. Not once. The experience is night and day.
What I did about it
I moved everything to Lazer's LiteLLM proxy (a perk we have as employees at Lazer Technologies). OpenClaw now runs GLM-5, which is a legitimately great model: open source, MIT licensed, and competitive with frontier models on agentic tasks. My Emacs tools (forge-llm, magit-gptcommit) also moved to the Lazer proxy with GLM-5 and Qwen3 Coder 480B Turbo respectively. If you don't have access to a company proxy, OpenRouter is a solid alternative, or you can use your own API keys directly.
The migration wasn't hard. It took a couple of hours. But that's not the point. The point is that I shouldn't have had to do it. I was paying for a service and they changed what I was paying for.
Where I stand
I'm very close to canceling my subscription and moving back to ChatGPT. I've been using OpenAI's models for programming through OpenCode, and they're getting really good. A little too verbose, and not quite at Opus level, but more than good enough for my workflow. And crucially, OpenAI isn't pulling the rug out from under me every other week.
Claude is still the best model for coding. I'm not going to pretend otherwise. But the best model doesn't matter if you can't use it reliably, if the limits drain in 45 minutes, and if the company keeps changing the terms on paying customers without adequate notice.
Here's where I am right now:
- If Anthropic fixes the usage limits and stops making hostile changes, I'll stay. The model quality is worth it.
- If they don't improve and someone else comes in with a competitive model and a better deal, I'm gone.
- Either way, I'm never again putting all my eggs in one provider's basket. That's the lesson here.
The decisions coming out of Anthropic lately feel like corporate decisions that shaft users, not decisions made by a company that cares about its customers. And that's frustrating, because the engineering team clearly builds incredible stuff. It's the business side that's letting them down.
I updated my AI Toolbox page with all the changes. If you want to see my current setup (post-Anthropic-rug-pull), that's the place to look.
See you in the next one. Hopefully less angry.
04 Apr 2026 5:00am GMT
03 Apr 2026
Django News - Supply Chain Wake-Up Call - Apr 3rd 2026
News
Incident Report: LiteLLM/Telnyx supply-chain attacks, with guidance
A recent supply chain attack on popular PyPI packages exposed how quickly malware can spread through unpinned dependencies, and why practices like dependency locking and cooldowns are now essential for Python developers.
The PyCon US 2026 schedule is live 🌴🐍 plus security updates, community programs & more
PyCon US 2026 heads to Long Beach with its schedule now live, alongside major Python ecosystem updates spanning security improvements, new community programs, and ongoing PSF initiatives.
Django Software Foundation
DSF Board Meeting Minutes, March 12, 2026
DSF approved trademark renewal plans, advanced a long-awaited Code of Conduct update, and continued shaping community governance and outreach efforts.
Wagtail CMS News
How to Generate SEO Descriptions for Your Entire Wagtail Site at Once ⚡
Use Wagtail AI's built-in LLM pipeline to bulk-generate SEO meta descriptions across your entire site in minutes with a simple Django management command.
How to Show a Waitlist Until Your Wagtail Site Is Ready
A clever Django and Wagtail pattern for launching with a waitlist while selectively granting preview access using secure cookies and a simple passphrase gate.
Build Dynamic Campaign Landing Pages in Wagtail
Use a single Wagtail page with dynamic routing, built-in A/B testing, and campaign slug tracking to replace dozens of duplicate landing pages with one flexible, data-driven solution.
Updates to Django
Today, "Updates to Django" is presented by Hwayoung from Djangonaut Space! 🚀
Last week we had 11 pull requests merged into Django by 9 different contributors - including 4 first-time contributors! Congratulations to Georgios Verigakis, David Ansa, Vinay Datta and Sebastian Skonieczny for having their first commits merged into Django - welcome on board!
Documentation was added to clarify how database routers handle related-object access. It explains that Django uses instance.state.db by default for related lookups and provides guidance on using the instance hint in dbfor_read() to maintain routing consistency in multi-database configurations. (#29762)
Django Newsletter
Sponsored Link 1
The deployment service for developers and teams.
Articles
The Story of Python's Lazy Imports: Why It Took Three Years and Two Attempts
From PEP 690's rejection to PEP 810's unanimous acceptance - how Python finally got explicit lazy imports after three years of real-world production evidence and a fundamental design inversion
Tombi, pre-commit, prek and uv.lock
A subtle tooling mismatch reveals how a recent update made uv.lock suddenly count as TOML, causing pre-commit to reformat it unexpectedly across environments.
Claude Pitfalls: Database Indexes
A smart migration tweak reveals how AI code reviews can both catch real production risks and miss critical context, proving that combining multiple agents leads to better Django performance decisions.
Loopwerk: Building modern Django apps with Alpine AJAX, revisited
After ditching template partials and full-page AJAX hacks, this deep dive shows how splitting Django views and using template includes leads to simpler code, better performance, and a more maintainable Alpine-powered stack.
Djangonaut diaries, week 4: Eliminating a Redundant Index in Django's ORM
A deep dive into a subtle Django ORM inefficiency shows how removing a redundant many-to-many index improves database performance and highlights the real-world journey from bug report to merged PR.
SHA Pinning Is Not Enough
SHA pinning isn't a silver bullet; this deep dive shows how attackers can still slip malicious code into GitHub Actions by pointing to trusted-looking but rogue commits.
A primer on Django project structure
When AI Writes the World's Software, Who Verifies It?
AI is rapidly rewriting the world's software, but without scalable verification like formal proofs, we risk shipping faster code that no one truly understands or can trust.
So OpenAI is acquiring Astral
OpenAI's acquisition of Astral raises real concerns about the future of uv, but for now, it's still one of the fastest and most practical Python tooling choices worth sticking with.
Events
DjangoCon Europe is soon!
April 15-19 in Athens, Greece. Get a ticket if you're able to attend. Keynote speakers, workshops, and all talks available online.
PyCon US May 13-19 in Long Beach, CA
Tickets are available for this annual event now in beautiful Long Beach, California.
DjangoCon US Early Bird Tickets Now Available
Don't hesitate! If you can, join for five days of talks, workshops, and sprints once again in Chicago this August 24-28.
Videos
Boost Your GitHub DX
A lively chat with Adam Johnson on leveling up your GitHub workflow, from practical DX tips to cutting-edge Python tooling like ICU bindings.
Django Job Board
Two fresh Python roles this week: one focused on open data impact, the other on client-facing architecture with a leading developer tools company.
Python Developer at Open Data Services 🆕
Solutions Architect - Python (Client-facing) at JetBrains
Django Newsletter
Django Forum
Django sprint at Pycon DE? - Events
A call is out for someone to lead a Django sprint at PyCon DE 2026, with contributors already eager to join and help onboard newcomers.
Projects
freelawproject/django-s3-express-cache
A high-speed, low-latency cache that uses S3 Express to store many objects cheaply and efficiently
kjnez/django-rclone
Django database and media backup management commands, powered by rclone.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
03 Apr 2026 3:00pm GMT
02 Apr 2026
Django community aggregator: Community blog posts
OpenCode as a server: AI agents that work while I sleep
My main machine is a beast. Ryzen 9 9950X3D, 64 GB of RAM, RX 9060 XT, three monitors, the works. It barely ever shuts off. So at some point I started thinking: why isn't this thing working for me when I'm not sitting in front of it?
The answer is now: it does. I'm running OpenCode as a persistent server on this machine, accessible from anywhere through my WireGuard VPN. I can spin up coding sessions from my MacBook Air, my phone, wherever. And the best part? I have scheduled jobs that run overnight: adding tests, updating documentation, enforcing code conventions. I wake up to PRs waiting for my review.
Here's the full setup.
The architecture
┌─────────────────────────────────────────────────────────┐
│ roger-beast │
│ (Ryzen 9 9950X3D / 64GB) │
│ │
│ ┌──────────────────┐ ┌──────────────────────┐ │
│ │ opencode serve │◀─────│ systemd user service │ │
│ │ :4096 (web UI) │ │ (auto-start/restart) │ │
│ └────────┬─────────┘ └──────────────────────┘ │
│ │ │
│ │ ┌──────────────────────┐ │
│ │ │ opencode-scheduler │ │
│ │ │ (systemd timers) │ │
│ │ │ ┌──────────────────┐ │ │
│ │ │ │ 2am: add tests │ │ │
│ │ │ │ 3am: update docs │ │ │
│ │ │ │ 4am: conventions │ │ │
│ │ │ └──────────────────┘ │ │
│ │ └──────────────────────┘ │
│ │ │
└───────────┼─────────────────────────────────────────────┘
│
┌───────┴──────────────┐
│ Nginx Proxy │
│ Manager │
│(opencode.example.com)│
└───────┬──────────────┘
│
┌─────────┴──────────┐
│ WireGuard VPN │
│ / Local Network │
└─────────┬──────────┘
│
┌────────┴────────┐
│ │
┌──┴───┐ ┌─────┴──────┐
│ 💻 │ │ 📱 │
│ MBA │ │ Phone │
└──────┘ └────────────┘
The idea is simple: OpenCode runs as a systemd user service, Nginx Proxy Manager gives it a nice domain, and WireGuard makes sure only my devices can reach it. From any browser on any device, I just go to opencode.example.com and I'm in.
Phase 1: OpenCode server with systemd
OpenCode has a serve command that starts a web UI you can access from a browser. The trick is making it persistent so it survives reboots and restarts itself if it crashes.
First, create a systemd user service. This means it runs as your user, not as root, which is important because it needs access to your home directory, your API keys, your OpenCode config, everything.
Create the file at ~/.config/systemd/user/opencode.service:
[Unit]
Description=OpenCode headless server
After=network.target
[Service]
ExecStart=/home/roger/.opencode/bin/opencode serve --hostname 0.0.0.0 --port 4096
Restart=on-failure
RestartSec=5
[Install]
WantedBy=default.target
A few things to note:
- --hostname 0.0.0.0 makes it listen on all interfaces, not just localhost. This is necessary so that Nginx Proxy Manager (or other devices on your network) can reach it.
- --port 4096 is arbitrary. Pick whatever you want, just make sure it doesn't conflict with anything else.
- Restart=on-failure with RestartSec=5 means if OpenCode crashes, systemd will bring it back up after 5 seconds. I've never had it crash, but it's nice to know it's there.
- WantedBy=default.target means it starts on login. Since this machine barely ever restarts, that's basically "always on."
Enable and start it:
systemctl --user daemon-reload
systemctl --user enable opencode.service
systemctl --user start opencode.service
Verify it's running:
systemctl --user status opencode.service
You should see it active and running. If you want the service to keep running even when you're not logged in (which you probably do, since the whole point is that it runs when you're away), you need to enable lingering:
sudo loginctl enable-linger roger
Replace roger with your username. This tells systemd to keep your user services running even after you log out. Without this, systemd kills your user services when your last session closes, which defeats the entire purpose.
At this point, you should be able to open http://localhost:4096 on the machine and see the OpenCode web UI.
Phase 2: Nginx Proxy Manager + WireGuard
I use Nginx Proxy Manager as my reverse proxy. It's a Docker-based GUI for managing Nginx configs, SSL certificates, and proxy hosts. If you prefer raw Nginx configs, you can absolutely do that instead; the concept is the same: point a domain at the OpenCode port.
In Nginx Proxy Manager, I created a new proxy host:
- Domain: opencode.example.com
- Scheme: http
- Forward Hostname/IP: 192.168.x.x (the local IP of my machine)
- Forward Port: 4096
- Websockets Support: enabled (OpenCode's web UI uses websockets)
For the access part, I don't need to worry too much about authentication because the domain is only accessible from two places:
- My local network: If I'm at home, my devices are already on the same network as the machine.
- My WireGuard VPN: If I'm remote, I connect to my WireGuard VPN first, which puts me on the same network. My WireGuard setup is the same one I described in my Claude Code from the beach post.
The DNS for opencode.example.com points to the internal IP of the machine running Nginx Proxy Manager. This means the domain simply doesn't resolve from the public internet. You'd have to be on my network (or VPN) for it to go anywhere.
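If your internal resolver happens to be dnsmasq, that kind of split-horizon record is a one-liner (the domain and placeholder IP are the same as above; adjust for whatever DNS server you actually run):

```
# dnsmasq: resolve the domain only for clients using this resolver
address=/opencode.example.com/192.168.x.x
```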
Phase 3: Accessing from anywhere
This is the satisfying part. Once the server is running and the proxy is configured, the workflow from any device is:
- Connect to WireGuard (if I'm not already home)
- Open a browser
- Go to opencode.example.com
- Done. Full OpenCode web UI, all my agents, all my MCP servers, everything.
From my MacBook Air at a coffee shop, from my phone on the couch, doesn't matter. The web UI is the same everywhere. I can start a task on my MacBook, close the laptop, pick it up on my phone later, and everything is still there because the server is running on the beast at home.
This pairs really nicely with my Claude Code from the beach setup, but it's way friendlier. That setup uses mosh + tmux + SSH bridges through Termux to get a terminal on a remote machine. It works great for Claude Code (which is a TUI), but it's a lot of moving parts: you need Termux, SSH keys on your phone, a jump box, mosh installed everywhere. If something breaks in the chain, you're debugging SSH configs from a phone keyboard. I wrote a whole blog post about that setup and I'm proud of it, but let's be real: the fact that I needed an entire blog post to explain how to use Claude Code from my phone is kind of the problem.
With OpenCode, I just open a browser. That's it. Any browser, on any device. No Termux, no SSH keys, no jump box, no terminal emulator. My phone's regular browser works perfectly. My MacBook's browser works perfectly. If I ever get a tablet, that'll work too. The barrier to entry went from "install Termux, configure SSH, set up mosh, create fish aliases" to "open Firefox."
Hey Anthropic, if you're reading this: please give Claude Code a web UI. I love your tool, I pay $100/month for it, but the fact that OpenCode can do this out of the box and Claude Code can't is… not great. I shouldn't need a 600-word phase-by-phase guide to use my coding agent from my phone. Just saying. 🙃
I still use the Claude Code + mosh + tmux setup for Claude Code specifically (since it's terminal-only), but for OpenCode work, the web UI is a massive quality-of-life upgrade for mobile coding.
Phase 4: The overnight crew
This is my favorite part. The server runs 24/7, so why not put it to work while I sleep?
I use the opencode-scheduler plugin, which lets you schedule recurring jobs using your OS's native scheduler (systemd timers on Linux, launchd on Mac). It's an OpenCode plugin, so you set it up directly from the OpenCode UI.
First, add the plugin to your opencode.json:
{
  "plugin": ["opencode-scheduler"]
}
Then, from the OpenCode UI, you just tell it what you want in natural language:
Schedule a job that runs every weekday at 2am and runs the test-gap-pr-cronjob skill
The plugin takes care of creating the systemd timer and service under ~/.config/systemd/user/. You can verify it's installed with:
systemctl --user list-timers | grep opencode
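For reference, a systemd user timer is just a pair of small unit files. Here's a generic, hypothetical example (the unit name and schedule are mine; the files the plugin actually generates may look different, so inspect ~/.config/systemd/user/ for the real ones):

```ini
# ~/.config/systemd/user/opencode-example.timer (hypothetical name)
[Unit]
Description=Example nightly OpenCode job

[Timer]
OnCalendar=Mon..Fri 02:00
Persistent=true

[Install]
WantedBy=timers.target
```

Each .timer activates a matching .service of the same name, which holds the actual command to run.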
What my overnight jobs do
I have three scheduled jobs that run between 1 AM and 6 AM while I'm sleeping. Each one uses a custom OpenCode skill (similar to the planning/execution/review agents I described on my AI Toolbox page):
- 2 AM - Test gap finder: Scans the codebase for untested or under-tested code, writes the missing tests, and opens a PR.
- 3 AM - Documentation updater: Checks for outdated or missing docstrings and README sections, updates them, and opens a PR.
- 4 AM - Convention enforcer: Reviews code for style and convention violations that linters don't catch (naming patterns, architectural decisions, etc.), fixes them, and opens a PR.
Each job uses a custom skill that knows the project's conventions, testing patterns, and documentation style. The skills are the same kind of custom agents I build for my regular OpenCode workflow, just triggered on a schedule instead of manually.
The morning routine
When I log in in the morning, I usually have 1-3 PRs waiting for me. Most of them are good to go with minor tweaks. Some need more work. Either way, the tedious stuff (writing tests for edge cases, updating docstrings, fixing inconsistent naming) is already done, and I just need to review it.
It's like having a junior developer who works the night shift. They're not perfect, but they're reliable, they don't complain, and they're surprisingly good at the boring stuff.
You can check the logs for any job at any time:
# From the OpenCode UI
Show logs for test-gap-pr-cronjob
# Or directly on disk
cat ~/.config/opencode/logs/test-gap-pr-cronjob.log
The specs
For anyone curious about the machine running all of this:
OS: Manjaro Linux 26.0.4
Host: B850M Pro-A WiFi
Kernel: 6.12.77-1-MANJARO
CPU: AMD Ryzen 9 9950X3D (32) @ 5.752GHz
GPU: AMD ATI Radeon RX 9060 XT GAMING OC 16G
Memory: 64 GB DDR5
Network: WiFi 6
Uptime: usually measured in days, not hours
The machine is wildly overpowered for this. OpenCode's server uses barely any resources when idle, and even during active sessions or scheduled jobs, it doesn't break a sweat. If you have a less powerful machine that stays on, this setup will work fine for you too.
Conclusion
The whole setup took maybe 30 minutes. A systemd service, a proxy host, and a scheduler plugin. That's it.
What I love about this is that it extends my AI Toolbox in a way I didn't expect. I went from "I use OpenCode when I'm at my desk" to "OpenCode is always running and I can use it from anywhere, and it also does work for me while I sleep." The scheduled jobs alone have saved me hours of tedious work every week.
If you have a machine that stays on (even a modest home server or an old laptop), you can do this. You don't need a Ryzen 9 or 64 GB of RAM. You need a machine that doesn't turn off, a way to reach it remotely, and the willingness to let AI handle the boring stuff while you're asleep.
All my configs are public in my dotfiles: git.rogs.me/rogs/dotfiles
If you have questions, hit me up. And if you set this up and wake up to PRs you didn't write, let me know. That first morning is a great feeling.
See you in the next one!
02 Apr 2026 5:00am GMT
Python Leiden (NL) meetup: newwave, python package setup and cpython contribution - Michiel Beijen
(One of my summaries of the Leiden (NL) Python meetup).
Wave audio is more or less the original representation of digital audio: the .wav format (there are competing formats like .au or .aiff). Typically it is "PCM-encoded", just like what comes out of a CD.
The .wav format was introduced by Microsoft in Windows 3.1 in 1992. But... it is still relevant for audio production or podcasts. And: for lab equipment. The company Michiel works for (https://samotics.com/, sponsor of the meetup's pizza, thanks!) records the sound of big motors and other big equipment for analysis purposes. They use .wav for this.
Around 1992, Python also started to exist. Version 1.0 (1994) included the "wave" module in its standard library. But: it is only intended for regular .wav usage, so two stereo channels and max 44.1 kHz frequency. He needed three channels (for sound recordings for the three phases of the electrical motor) and he needed higher resolution.
He showed that you could actually put three channels into a .wav with Python. Audacity loaded it just fine, but the "flac" encoder refused it, with an error about a missing WAVE_FORMAT_EXTENSIBLE setting. He started investigating the python bugtracker and discovered someone had already provided a fix for reading wav files in that extended format.
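The three-channel trick itself is small. A minimal sketch using only the standard-library wave module (the sample data here is synthetic, not real motor sound, and I'm writing to an in-memory buffer rather than a file):

```python
import io
import struct
import wave

# Write a short three-channel, 16-bit, 48 kHz WAV into memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as f:
    f.setnchannels(3)        # one channel per electrical phase
    f.setsampwidth(2)        # 16-bit samples
    f.setframerate(48000)    # above the usual 44.1 kHz CD rate
    frames = b"".join(
        struct.pack("<3h", n, -n, 0)  # one frame = 3 interleaved samples
        for n in range(1000)
    )
    f.writeframes(frames)

# The stdlib module happily reads its own output back:
buf.seek(0)
with wave.open(buf, "rb") as f:
    print(f.getnchannels(), f.getframerate())  # 3 48000
```

Other consumers are stricter, which is exactly where the flac error he ran into comes from.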
So he dug further. And discovered some bugs himself and reported them. And found undocumented methods and reported them. So after reports, you should try fixing them with a pull request. He discovered a half-finished PR and worked on basis of that. Lots of discussion in the ticket, but in the end it got merged. Another bugfix also got merged. Hurray!
But... those fixes will end up in Python 3.15, October 2026. And his company is just moving from 3.10 to 3.11... So he made a library out of it at https://codeberg.org/michielb/newwave . (And he put in some good words for https://codeberg.org , as that's a nice github alternative: operated by volunteers under a German Foundation instead of we-have-put-all-our-open-source-eggs-in-one-basket's Github, being owned by Microsoft/USA).
02 Apr 2026 4:00am GMT
Python Leiden (NL) meetup: creating QR codes with python - Rob Zwartenkot
(One of my summaries of the Leiden (NL) Python meetup).
QR codes are everywhere. They're used to transport data; the best example is a link to a website, but you can use them for a lot of things. A nice different usage is a WiFi connection string, something like WIFI:T:WPA;S:NetworkName;P:password;;. Rob focuses on the URL kind.
There's a standard, ISO/IEC 18004. QR codes need to be square. With a bit of a margin around it. The cells should also be square. You sometimes see somewhat rounded cells, but that's not according to the standard. You sometimes see a logo in the middle, but that actually destroys data! Luckily there's error correction in the standard, that's the only reason why it works. There's more to QR codes than you think!
He uses segno as the QR code library (instead of "qrcode"). It is more complete, allows multiple output formats, and can control error correction:
import segno
qr = segno.make("https://pythonleiden.nl/")
qr.save("leiden.png")
Such an image is very small. If you scale it, it gets blurry. And there's no border and no error correction. We can do better:
import segno
qr = segno.make("https://pythonleiden.nl/", error="h")
# "h" is the "high" level of error correction, it allows
# for up to 30% corruption.
qr.save(
    "leiden.png",
    scale=10,
    border=4,
)
Segno can also give you the raw matrix of cells. That way you can do some further processing on it, for instance with PIL (the Python Imaging Library). As an example, he placed a logo in the middle of the QR code.
How you can work with the matrix:
# ... same as before ...
for line in qr.matrix:
    for cell in line:
        ...  # process each cell (truthy = dark module)
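As a concrete (if simpler) version of that loop, here's a tiny text renderer for the matrix. The helper function is mine; it only assumes each cell is truthy for a dark module, which is how segno exposes qr.matrix:

```python
def render_matrix(matrix):
    """Render a QR matrix (rows of truthy/falsy cells) as terminal text."""
    return "\n".join(
        "".join("██" if cell else "  " for cell in row)
        for row in matrix
    )

# With segno it would be used as:
#   qr = segno.make("https://pythonleiden.nl/")
#   print(render_matrix(qr.matrix))
print(render_matrix([[1, 0], [0, 1]]))
```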
He went totally overboard with round dots and colors and a logo in the middle. At least on my phone, it still worked! Funny.

02 Apr 2026 4:00am GMT
Python Leiden (NL) meetup: building apps with streamlit - Daniël Kentrop
(One of my summaries of the Leiden (NL) Python meetup).
Daniël is a civil engineer turned software developer. He works for a company involved in improving the 17000 km of dikes in the Netherlands. He liked programming, but getting his tools into his colleagues' hands was a bit problematic. Jupyter notebooks without a venv? Interfaces with Qt (which explode in size)? PyInstaller? In the end, he often opted for data in an Excel sheet and then running a Python script...
He now likes to use streamlit, a Python library for creating simple web apps, prototyping, and visualisation. It has lots of built-in elements and widgets: data entry, all sorts of (plotly) charts, sliders, selectboxes, pop-ups, basically everything you need.
You can add custom components with html/css/js.
How does it work? It is basically a script. The whole page is loaded every time and re-run. A widget interaction means a re-run. Pressing a button means a re-run. There is state you can store on the server per session. He showed a demo demonstrating the problems caused by the constant re-running (and losing of state) and how to solve it with the session state.
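That rerun mechanic is easy to model in plain Python. In this sketch a dict stands in for Streamlit's st.session_state, and a function call stands in for one script rerun (no Streamlit required):

```python
# Streamlit re-executes the whole script on every interaction.
# Plain local variables are reset each rerun; session state persists.

session_state = {}  # stand-in for st.session_state

def rerun_script(button_clicked: bool) -> int:
    count = 0  # local variable: reset on every rerun, useless for counting
    session_state.setdefault("count", 0)
    if button_clicked:
        session_state["count"] += 1
    return session_state["count"]

print(rerun_script(True))   # 1
print(rerun_script(True))   # 2
print(rerun_script(False))  # 2 -- the state survives the rerun
```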
He then showed a bigger streamlit demo. On a map, you could draw an area and select water level measurement stations and then show water levels of the last month. Nice.
An upcoming change to streamlit: they're going to move from the Tornado web runner to Starlette, which also means ASGI support.
02 Apr 2026 4:00am GMT
01 Apr 2026
Django community aggregator: Community blog posts
Boost Your GitHub DX - Adam Johnson
🔗 Links
- Adam's Books
- Introducing tprof
- Introducing icu4py
- Carlton's keynote at DjangoCon Europe
- Recent trends in the work of the Django Security Team
📦 Projects
📚 Books
- The Coming Wave by Mustafa Suleyman
- The BEAM Book: Understanding the Erlang Runtime - Erik Stenman
- The Fabric of Civilization by Virginia Postrel
🎥 YouTube
Sponsor
This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.
See what's possible at https://sixfeetup.com/.
01 Apr 2026 2:00pm GMT
How to Generate SEO Descriptions for Your Entire Wagtail Site at Once
Recently, I've used Wagtail AI internals to mass-generate SEO descriptions for my blog posts. 150+ pages, done in minutes - way faster than clicking through the admin UI one page at a time.
Wagtail AI package provides you with a solid set of AI tools that help with text …
01 Apr 2026 7:53am GMT
27 Mar 2026
Django community aggregator: Community blog posts
Django News - Balancing the AI Flood in Django - Mar 27th 2026
News
Calling for research participants from Django, Laravel, Ruby on Rails, Next.js and Spring Boot communities
Former DSF President and researcher Anna Makarudze is seeking Django developers to share insights on dependency vulnerabilities and supply chain risks in open source.
Djangonaut Space News
Djangonaut Space Financial Report 2025
Djangonaut Space's 2025 report highlights a community-powered year of $2.2k in donations funding tools and conference access, while setting sights on sending contributors to even more events in 2026.
Djangonaut diaries, week 3 - Working on an ORM issue
A deep dive into Django's ManyToMany indexes reveals an unnecessary extra index, showing how databases already optimize with composite indexes and setting the stage for a cleaner ORM fix.
Wagtail CMS News
Wagtail Routable Pages and Layout Configuration
Build flexible Wagtail routable pages that use StreamField layouts to dynamically control how Django model data renders on detail views.
Updates to Django
Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! 🚀
Last week we had 18 pull requests merged into Django by 15 different contributors - including 4 first-time contributors! Congratulations to Juho Hautala, Huwaiza, (James) Kanin Kearpimy 🚀 and Praful Gulani for having their first commits merged into Django - welcome on board!
News in Django 6.1:
- Providing fail_silently=True, auth_user, or auth_password to mail sending functions (such as send_mail()) while also providing a connection now raises a TypeError.
- assertContains() and assertNotContains() can now be called multiple times on the same StreamingHttpResponse. Previously, they would consume the streaming response's content, causing subsequent calls to fail.
- Because quoted aliases are case-sensitive, raw SQL references to aliases mixing case, such as when using RawSQL, might have to be adjusted to also make use of quoting.
Django Newsletter
Django Fellow Reports
Fellow Report - Natalia
A significant portion of this week was dedicated to security work (yes, again). As usual, details here are intentionally kept at a high level, but the time went into triaging new reports, progressing in-flight likely confirmed issues, validating proposed fixes, and coordinating next steps with the team.
One additional challenge worth noting is the volume of near-duplicate reports; beyond triage, this often requires careful comparison across long submissions to identify what is actually new or meaningfully different.
Fellow Report - Jacob
Easy to miss in the release notes (as we only described the user-facing changes for edge cases), but last week we landed (with great joy) @charettes' defense-in-depth measure for the ORM that ensures user-provided aliases are always quoted.
In addition to the below, another steady week advancing pending security reports.
Sponsored Link 1
The deployment service for developers and teams.
Articles
Learning LLM Integration
A practical, from-scratch look at integrating LLMs into a Django app, highlighting why isolating the AI layer and writing precise prompts makes all the difference.
Give Django your time and money, not your tokens
The Django community wants to collaborate with you, not a facade of you.
Open Source Has a Bot Problem
The maintainer of awesome-mcp-servers came up with a solution, of sorts, to curating AI-generated PRs.
Why pylock.toml includes digital attestations
A Python project got hacked where malicious releases were directly uploaded to PyPI. I said on Mastodon that had the project used trusted publishing with digital attestations, then people using a pylock.toml file would have noticed something odd was going on thanks to the lock file including attestation data.
Rewriting a 20-year-old Python library
A thoughtful deep dive into rewriting a 20-year-old Python library, covering async design, API ergonomics, and how to modernize without breaking users.
Playground embedding, packages and more
The nanodjango playground has several new exciting features which transform what you can achieve with it - you can now manage packages and secrets, share scripts from the command line, and embed live Django code in your own site.
Human.json
A quick look at human.json, a lightweight protocol for sharing human-readable metadata, with a simple Django implementation and a healthy dose of skepticism about its long-term adoption.
Videos
PyCon US 2026 - Elaine Wong & Jon Banafato
A behind-the-scenes look at PyCon US 2026 with chair Elaine Wong and co-chair Jon Banafato, covering what's new, how to prepare, and tips to make the most of the biggest Python conference in North America.
Django Job Board
Solutions Architect - Python (Client-facing) at JetBrains
Django Newsletter
Django Forum
Discouraging "the voice from nowhere" (~LLMs) in documentation
Forum discussion on maintaining a human (not LLM) voice in Django's documentation.
Projects
kujov/django-tw
Zero-config Tailwind CSS v4 for Django.
VojtechPetru/django-live-translations
In-browser translation editing for Django superusers.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
27 Mar 2026 3:00pm GMT
25 Mar 2026
Django community aggregator: Community blog posts
LLMs for Open Source maintenance: a cautious case
LLMs for Open Source maintenance: a cautious case
When ChatGPT appeared on the scene I was very annoyed at all the hype surrounding it. Since I'm working in the fast moving and low margin business of communication and campaigning agencies I'm surrounded by people eager to jump on the hype train when a tool promises to lessen the workload and take stuff from everyone's plate.
These discussions, coupled with the fact that the training of these tools required unfathomable amounts of stealing, were the reason for a big reluctance on my part to try them out. I'm using the word stealing here on purpose, since that's exactly the crime Aaron Swartz was accused of by the U.S. Attorney's Office for the District of Massachusetts. It's frustrating that some can get away with the same crime when it is so much bigger: OpenAI and Anthropic downloaded far more data than Aaron ever did.
A somewhat related thing happened with the too-big-to-fail banks: There, the people at the top were even compensated with golden parachutes at the end. LLM companies seem to be above accountability too.
Despite all this, I have slowly started integrating these tools into my workflows. I don't remember the exact point in time, but since some time in 2025 my opinions on their utility has started to change. At the beginning, I always removed the attribution and took great care to write and rewrite the code myself, only using the LLMs for inspiration and maybe to generate integration tests. More and more I have to admit that they are useful, especially in time constrained projects with a clear focus and purpose.
Last month I fixed and/or closed all open issues in the django-tree-queries repository with the help of Claude Code. Is that a good thing? It could be argued I should have done the work myself. But I wouldn't have - I have other things I want to do with my time. I don't want to (always) work on Open Source software in the evening. I definitely also have leaned heavily on LLMs when working on django-prose-editor.
Is faster better?
We can produce more code, more features and close tickets faster than before. In my experience the speed up isn't as big as some people may want us to believe, but it's there. And contrary to what people in my LinkedIn feed say, that's not an obviously good thing. Is it a race to the bottom where we drown in LLM-generated slop in quantities impossible to maintain? It doesn't feel like that - but it's a race that could go both ways. Throwaway code can be thrown away though, and well tested code does what the tests say, which is good enough according to my rules for releasing open source software.
Speaking as someone who has put more into the training set than they've taken out so far, I don't feel all that bad using the tools. Coding agents can already be run locally with reasonable hardware requirements, at least during inference, which is where the ongoing cost sits. Maybe using them is still rationalization. But contribution and profit needing to stay in some rough balance feels like the right frame. Total abstinence isn't the only ethical choice we have.
Community tensions
What makes me less comfortable is how communities are reacting. There are real concerns within the Django world, and not just the practical one of overworked maintainers wading through hastily generated patches that don't actually fix anything. The deeper worry is about the communal nature of contribution: that working on Django is supposed to be a learning experience, a way into the community, and that using an LLM as a vehicle rather than a tool hollows out that process. Reviewers end up interacting with what is essentially a facade, unable to tell whether anyone actually understood the problem. That's a real concern and I don't want to dismiss it.
But it maps onto a different situation from what I've been describing. Using Claude Code to close issues in projects I maintain and understand is not the same as using it to paper over gaps in comprehension on a ticket in someone else's project. Whether LLM-assisted contributions to Django itself are appropriate is a difficult question; whether it's appropriate to use them when maintaining your own software less so.
There's also a harder tension around quality. Django's conservatism has real value: rigorous review, minimal magic, a coherent philosophy. The ORM and template system don't need to reinvent themselves, they work well, are still evolving while staying rock-solid for all my use cases. And reading the release notes always brings me joy. But it could be more exciting more often. Quality isn't a strictly positive thing. Everything has costs. It's not great if the price of the bar is that legitimate bugs sit open for years because nobody has a few evenings to spend on them. It happened with django-tree-queries before I went through it with Claude Code. I think the bar for contributing to Django is too high. I would value a little more motion and a little less stability, even as someone running dozens of Django websites and apps.
Then there's the pile-on dynamic that plays out on Mastodon and GitHub. When the Harfbuzz and chardet maintainers disclosed LLM usage, the reaction from some corners was something to behold. People expressing what amounted to personal grievance over tooling choices in projects they may not even use. There's a particular kind of entitlement in telling a maintainer - who is keeping software alive, possibly even in their spare time - that the way they choose to do that work is an affront. Open source is a gift, whether paid or not, and nobody has to accept it, but disclosing your tooling isn't an invitation for complaints. The ethical concerns about training data, resource use and other negative externalities are legitimate and worth raising. Performative outrage directed at individual maintainers is not the same thing.
I don't have an easy conclusion. The tools are useful, the ethics are murky, and communities are still figuring out how to respond. A cautious, honest use of them feels better to me than the alternatives.
25 Mar 2026 5:00pm GMT
Building modern Django apps with Alpine AJAX, revisited
About nine months ago I wrote an article about my quest to simplify my web development stack. How I went from SvelteKit on the frontend and Django on the backend, to an all-Django stack for a new project, using Alpine AJAX to enable partial page updates.
I've now been using this new stack for a while, and my approach (as well as my opinion) has changed significantly. Let's get into what works, what doesn't, and where I ended up.
A quick recap
Alpine AJAX is a lightweight alternative to htmx, which you can use to enhance server-side rendered HTML with a few attributes, turning <a> and <form> tags into AJAX-powered versions. No more full page refreshes when you submit a form.
The key mechanic: when a form has x-target="comments", Alpine AJAX submits the form via AJAX, finds the element with that ID in the response, and swaps it into the page. The server returns HTML, not JSON.
In the original article I used django-template-partials (since merged into Django itself) to mark sections of a template as named partials using {% partialdef %}. Combined with a custom AlpineTemplateResponse the view could automatically return just the targeted partial when the request came from Alpine AJAX.
Where I began: template partials
Let's say you have an article page with the article body parsed from Markdown, a like button, and a comment section. The template looks something like this:
article.html
{% extends "base.html" %}
{% block body %}
<article>
  <h1>{{ article.title }}</h1>
  {{ article_html|safe }}

  {% partialdef like_form inline %}
  <form method="post" id="like_form" x-target="like_form">
    {% csrf_token %}
    <button type="submit" name="toggle-like">
      {% if article.is_liked %}Unlike{% else %}Like{% endif %}
    </button>
  </form>
  {% endpartialdef %}

  {% partialdef comments inline %}
  <div id="comments">
    {% for comment in article.comments.all %}
    <div>{{ comment.user }}: {{ comment.text }}</div>
    {% endfor %}
    <form method="post" x-target="comments">
      {% csrf_token %}
      {{ comment_form }}
      <button type="submit" name="add-comment">Submit</button>
    </form>
  </div>
  {% endpartialdef %}
</article>
{% endblock %}
Every form action POSTs to the same article view, which handles all the actions in one big post method:
views.py
class ArticleView(View):
    def get_context(self, request, pk):
        article = get_object_or_404(
            Article.objects.prefetch_related("comments")
            .annotate_is_liked(request.user),
            pk=pk,
        )
        return {
            "article": article,
            "article_html": markdown(article.body),
            "comment_form": CommentForm(),
        }

    def post(self, request, pk):
        context = self.get_context(request, pk)
        article = context["article"]
        if "toggle-like" in request.POST:
            if article.is_liked:
                article.unlike(request.user)
                article.is_liked = False
            else:
                article.like(request.user)
                article.is_liked = True
            return AlpineTemplateResponse(request, "article.html", context)
        if "add-comment" in request.POST:
            form = CommentForm(request.POST)
            if form.is_valid():
                Comment.objects.create(article=article, user=request.user, ...)
                return AlpineTemplateResponse(request, "article.html", context)
        return redirect(article)

    def get(self, request, pk):
        context = self.get_context(request, pk)
        return AlpineTemplateResponse(request, "article.html", context)
The AlpineTemplateResponse from the original article takes care of returning just the targeted partial when the request comes from Alpine AJAX. It works. I thought I was being smart to prevent template duplication this way, but there are two problems:
- The view does too much work. Every POST action calls get_context, which fetches everything: the article, the parsed Markdown body, the comments, the like state, the comment form. When the user clicks "Like", we do all this work we'll never use in the partial template. The template partial means the response is small, but the server-side work is exactly the same as rendering the full page.
- The template is a mess. Those {% partialdef %} blocks scattered throughout the template make it noisy and hard to read. In a small example it's fine, but in a real template with 200+ lines, it gets ugly fast.
When doubt set in: switching to Jinja2
To be honest though, the real killer of my motivation while working on this project has been the Django Template Language. I'm sorry, but I just hate it. I have since 2009, and I still do. The syntax is bad enough, but then you have to constantly fight its limitations. The fact that I can't simply call a function is so incredibly annoying, and it causes way more boilerplate in the form of tons of custom template tags and filters.
So, switch to Jinja2, right? Except that template partials aren't supported in combination with Jinja2. No more {% partialdef %}. Which means returning full page responses for AJAX requests, which isn't exactly ideal.
I did it anyway. I ripped out all the {% partialdef %} tags, migrated my templates to Jinja2, and my views just returned the full template for AJAX requests. Alpine AJAX is smart enough to extract the elements it needs by their IDs, and throws away the rest.
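For context, the switch itself is mostly a settings change. Here is a minimal sketch of Django's built-in Jinja2 backend configuration; the post doesn't show its setup, and the environment module path below is a placeholder for your own project:

```python
# settings.py (sketch): add the Jinja2 backend; keep the DTL backend too
# if third-party apps still need it.
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.jinja2.Jinja2",
        "DIRS": [BASE_DIR / "jinja2"],
        "APP_DIRS": True,  # also picks up <app>/jinja2/ directories
        "OPTIONS": {
            # Dotted path to a function returning a jinja2.Environment.
            # "myproject.jinja2.environment" is a placeholder name.
            "environment": "myproject.jinja2.environment",
        },
    },
]

# myproject/jinja2.py (sketch): expose url() and static() as globals,
# which is what makes calls like url('toggle-like', args=[...]) work
# in templates.
from django.templatetags.static import static
from django.urls import reverse
from jinja2 import Environment


def environment(**options):
    env = Environment(**options)
    env.globals.update({"static": static, "url": reverse})
    return env
```

Django's Jinja2 backend also puts csrf_input and csrf_token into the template context, so no custom wiring is needed for CSRF tokens in forms.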
This was simpler and I was much happier writing Jinja2 templates. But the wastefulness got worse. Before, the server at least returned a small response. Now it rendered the entire page and sent all of it over the wire, just for the browser to use a tiny piece of it.
It was at this moment that I seriously thought about throwing the entire frontend away and rebuilding it in SvelteKit, with Django REST Framework returning JSON responses. But that seemed like a pretty big waste of effort, so instead I took a deep breath and thought about what I wanted:
- Jinja2 templates. Non-negotiable.
- Small, fast AJAX responses. No rendering the full page for a like toggle.
- No template duplication between the full page and the AJAX response.
- Simple views that only do the work they need to do.
Template partials gave me #2 and #3, but not #1 or #4. Switching to Jinja2 and returning the full template for AJAX requests gave me #1 and #3, but not #2 or #4. I needed a different approach.
Where I ended up: separate views with template includes
The answer turned out to be straightforward, and the one I initially discarded as "too much boilerplate": instead of one monolithic view handling all POST actions, split each action into its own view with its own URL. And instead of {% partialdef %}, use plain {% include %} tags to extract reusable template fragments.
Let me show you. Here's the simplified article template:
article.html
{% extends "base.html" %}
{% block body %}
<article>
  <h1>{{ article.title }}</h1>
  {{ article.body }}
  {% include "articles/_like_form.html" %}
  {% include "articles/_comments.html" %}
</article>
{% endblock %}
Clean and readable. Each include is a self-contained fragment. And here's the like form:
_like_form.html
<form method="post" action="{{ url('toggle-like', args=[article.id]) }}" id="like_form" x-target="like_form">
  {{ csrf_input }}
  {% if article.is_liked %}
  <button type="submit">Unlike</button>
  {% else %}
  <button type="submit">Like</button>
  {% endif %}
</form>
And finally, the view:
views.py
class ToggleLikeView(LoginRequiredMixin, View):
    def post(self, request, pk):
        article = get_object_or_404(
            Article.objects.annotate_is_liked(request.user),
            pk=pk,
        )
        if article.is_liked:
            article.unlike(request.user)
            article.is_liked = False
            article.like_count -= 1
        else:
            article.like(request.user)
            article.is_liked = True
            article.like_count += 1
        if is_alpine(request):
            return TemplateResponse(
                request,
                "articles/_like_form.html",
                {"article": article},
            )
        # For non-Alpine requests, we just redirect back
        return redirect(article)
No comment queries. No form building. No Markdown parsing. Just the like state.
The is_alpine check provides a redirect fallback for non-JavaScript POST requests, keeping things progressive. And the ArticleView itself becomes GET-only. No more branching on POST keys. No get_context method that fetches everything for every action. Each view does one thing.
The trade-offs
More templates. For the article page, I went from one template to several: the include fragments (_like_form.html, _comments.html) that are shared between the full page and the AJAX responses. When an action needs to update multiple elements on the page, you also end up with small response templates that combine the right includes. For example, if submitting a comment should update both the comment list and a comment count elsewhere on the page:
_add_comment_response.html
{% include "articles/_comments.html" %}
{% include "articles/_engagement_counts.html" %}
Trivial, but still a file you have to create and name.
More views and URL routes. Each action gets its own view class and its own path() entry. For a page with likes, comments, and subscriptions, that's three or four extra views.
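The resulting URLconf might look like the following sketch. Apart from ToggleLikeView and the toggle-like URL name (which _like_form.html reverses), the view and route names here are illustrative placeholders, not taken from the article:

```python
# urls.py (sketch): one route per action instead of one monolithic view.
from django.urls import path

from .views import (
    AddCommentView,  # hypothetical, not shown in the article
    ArticleView,
    SubscribeView,  # hypothetical, not shown in the article
    ToggleLikeView,
)

urlpatterns = [
    path("articles/<int:pk>/", ArticleView.as_view(), name="article-detail"),
    path("articles/<int:pk>/like/", ToggleLikeView.as_view(), name="toggle-like"),
    path("articles/<int:pk>/comment/", AddCommentView.as_view(), name="add-comment"),
    path("articles/<int:pk>/subscribe/", SubscribeView.as_view(), name="subscribe"),
]
```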
But here's what I got in return:
Actual performance improvement. Not just smaller responses, but less work on the server. Each view only queries what it needs.
Jinja2. I'm using Jinja2 instead of the Django Template Language. I can call functions, I have proper expressions, and I don't need custom template tags for basic things. This alone was worth the switch.
Readable templates. The main article.html is short and shows the page structure at a glance. Each fragment is self-contained. No {% partialdef %} blocks scattered everywhere.
Simple views. Each view does exactly one thing. Easy to understand, easy to test, easy to optimize.
Conclusion
I went through three stages: template partials with Django Template Language, full-page responses with Jinja2, and finally separated views with template includes. Each step solved a real problem with the previous approach.
The pattern I've landed on requires more files and views than I'd like, but each is simple and does one thing.
My overall feelings on Django + Alpine AJAX have also changed. I still believe there are benefits to using a simplified tech stack and using hypermedia as the engine of state. Just return HTML instead of returning JSON to a JavaScript framework which then has to turn it into HTML. Conceptually it just makes sense to me.
But the dream was to build a plain old Django application using simple views and simple templates, using old-fashioned MPA server-rendered pages. Sprinkle in a few Alpine AJAX attributes and magically your site gets SPA-like usability. And it simply hasn't played out that way for me. Yes, you could do that, if you're fine with the wastefulness of returning full pages as a response to AJAX requests. But when you want to do it better than that, you end up with more boilerplate to make it possible to return small bits of HTML.
And this isn't really about Alpine AJAX specifically; htmx would lead to the exact same place. The fundamental tension is in the HTML-over-the-wire approach itself: the server has to know which fragments of HTML to return, and that means structuring your views and templates around it. You trade the complexity of a JavaScript frontend for a different kind of complexity on the server.
Progressive enhancement adds to that complexity. Every view needs an is_alpine check with a redirect fallback, every form needs to work both as a regular POST and as an AJAX submit. If I dropped progressive enhancement and just required JavaScript, those redirect fallbacks and the branching that comes with them would disappear. The views would be simpler. But I think progressive enhancement is important enough to keep in place.
Would I use Alpine AJAX (or htmx) again? Honestly: probably not. I have a lot more fun when building frontends with SvelteKit. Building composable and reusable UI components is so much more natural there, and the performance is simply better (once the initial JS bundle has been downloaded and parsed). But am I going to throw away my current project's code and redo it all? No, I am not. Django with Alpine AJAX is a nice change of scenery, it's a nice playground I don't usually get to play in. I think I ended up with a good compromise, and hey: I still don't have to build and maintain a separate API, API docs, and frontend.
25 Mar 2026 3:16pm GMT
23 Mar 2026
Django community aggregator: Community blog posts
Built with Django — Weekly Roundup (Mar 16–Mar 23, 2026)
Hey, Happy Monday!
Why are you getting this: You signed up to receive this newsletter on Built with Django. I promised to send you the latest projects and jobs on the site as well as any other interesting Django content I encountered during the month. If you don't want to receive this newsletter, feel free to unsubscribe anytime.
Sponsor
This issue is sponsored by TuxSEO - your AI content team on auto-pilot.
- Plan and ship SEO content faster
- Generate practical, publish-ready drafts
- Keep your content pipeline moving every week
Projects
- Reckot - Speak and we make it: event management, reinvented.
- Your Cloud Hub - YourCloudHub.ai is a technology and digital solutions company offering IT staffing and outsourcing services along with software development, website development, and digital marketing.
From the Community
- Django Apps vs Projects Explained: A Complete Production Guide - DEV Community
- Building a Seamless JWT Onboarding Flow with React Router v7 and Django - DEV Community
- How to Show a Waitlist Until Your Wagtail Site Is Ready
Support
You can support this project by using one of the affiliate links below. These are always going to be projects I use and love! No "Bluehost" crap here!
- Buttondown - Email newsletter tool I use to send you this newsletter.
- Readwise - Best reading software company out there. If you want to up your e-reading game, this is definitely for you! It also so happens that I work for Readwise. Best company out there!
- Hetzner - IMHO the best place to buy a VPS or a server for your projects. I'll be doing a tutorial on how to use this in the future.
- SaaS Pegasus is one of the best (if not the best) ways to quickstart your Django Project. If you have a business idea but don't want to set up all the boring stuff (Auth, Payments, Workers, etc.) this is for you!
23 Mar 2026 6:00pm GMT
21 Mar 2026
Django community aggregator: Community blog posts
Human.json
I have seen more and more people talk about human.json lately and I think it is a pretty neat idea. From what I can tell it checks all the boxes I would expect from a protocol like this.
The fact that it relies on browser extensions right now makes sense, but it might become a limiting factor in the future. Either that, or the number of extensions needs to grow beyond the two easy ones, and they need to come to mobile as well. I am not sure this will be going anywhere beyond a few enthusiastic people, but you never know.
Implementing the protocol was not much work, which is expected considering it only consists of two required values and an optional list of two more values. If you want to add it to your Django based site, I packaged everything up and you can find it on PyPI.
Should you use the package? Eh, that is not an easy question. From a supply chain perspective I would say "no". It is only a few lines of code. But you never know how the protocol will evolve, so things might look more complicated in a month. I will do my best to keep up with the protocol and not ship crypto miners.
I am still not a fan of Python packaging, but I have to admit uv makes it kind of bearable, though it still isn't without little gotchas.
21 Mar 2026 5:05pm GMT
Wagtail Routable Pages and Layout Configuration
If you are familiar with Wagtail CMS for Django, you know that you can create Wagtail pages and control their content and layout with blocks inside of stream fields. But what if you have entries coming from normal Django models through a routable page? In this article, I will explore how you can control the dynamic layout of a detail view in a routable page.
Routable pages in Wagtail are dynamic pages of your CMS page tree that can have their own URL subpaths and views. You can use them for filtered list and detail views, multi-step forms, multiple formats for the same data, etc. Here I will show you a routable ArticleIndexPage with list and detail views for Article instances, rendering the detail views based on the block layout in a detail_layout stream field.
1. Project Setup
Create a Wagtail project myproject and articles app:
pip install wagtail
wagtail start myproject
cd myproject
python manage.py startapp articles
Add to INSTALLED_APPS in your Django project settings:
INSTALLED_APPS = [
...
"wagtail.contrib.routable_page", # required for RoutablePage
"myproject.apps.articles",
]
2. File Structure
The articles app:
myproject/apps/articles/
├── __init__.py
├── apps.py
├── models.py # Article, Category, ArticleIndexPage
├── blocks.py # All StreamField block definitions
└── admin.py # Register Article and Category in Django admin
The articles templates:
myproject/templates/articles/
├── article_list.html # List view
├── article_detail.html # Detail view
└── blocks/
├── cover_image_block.html
├── description_block.html
└── related_articles_block.html
3. Models
myproject/apps/articles/models.py
Create the Category and Article Django models, and the ArticleIndexPage routable Wagtail page with article list and detail views:
from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.db import models
from django.shortcuts import get_object_or_404
from django.utils.translation import gettext_lazy as _
from wagtail.admin.panels import FieldPanel, ObjectList, TabbedInterface
from wagtail.contrib.routable_page.models import RoutablePageMixin, path
from wagtail.fields import StreamField
from wagtail.models import Page
from .blocks import article_detail_layout_blocks
class Category(models.Model):
    name = models.CharField(max_length=100, verbose_name=_("name"))
    slug = models.SlugField(unique=True, verbose_name=_("slug"))

    class Meta:
        verbose_name = _("category")
        verbose_name_plural = _("categories")

    def __str__(self):
        return self.name


class Article(models.Model):
    title = models.CharField(max_length=255, verbose_name=_("title"))
    slug = models.SlugField(unique=True, verbose_name=_("slug"))
    category = models.ForeignKey(
        Category,
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="articles",
        verbose_name=_("category"),
    )
    cover_image = models.ForeignKey(
        "wagtailimages.Image",
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="+",
        verbose_name=_("cover image"),
    )
    description = models.TextField(blank=True, verbose_name=_("description"))
    created_at = models.DateTimeField(auto_now_add=True, verbose_name=_("created at"))

    class Meta:
        verbose_name = _("article")
        verbose_name_plural = _("articles")

    def __str__(self):
        return self.title
class ArticleIndexPage(RoutablePageMixin, Page):
    """
    A single Wagtail page that owns:
    - /articles/ → paginated list of all Articles
    - /articles/<slug>/ → detail view for one Article
    The StreamField is edited once in the Wagtail admin and
    defines the layout for every detail view.
    """

    articles_per_page = models.IntegerField(default=10, verbose_name=_("articles per page"))
    detail_layout = StreamField(
        article_detail_layout_blocks(),
        blank=True,
        use_json_field=True,
        verbose_name=_("detail layout"),
        help_text=_(
            "Configure the layout for all article detail pages. "
            "Add, remove, and reorder blocks to change what appears "
            "on every article detail view."
        ),
    )

    # TabbedInterface gives List View and Detail View their own tabs.
    # promote_panels and settings_panels must be added explicitly here
    # because edit_handler takes full ownership of the admin UI structure.
    edit_handler = TabbedInterface([
        ObjectList(Page.content_panels + [FieldPanel("articles_per_page")], heading=_("List View")),
        ObjectList([FieldPanel("detail_layout")], heading=_("Detail View")),
        ObjectList(Page.promote_panels, heading=_("SEO / Promote")),
        ObjectList(Page.settings_panels, heading=_("Settings")),
    ])

    class Meta:
        verbose_name = _("article index page")
        verbose_name_plural = _("article index pages")

    @path("")
    def article_list(self, request):
        all_articles = Article.objects.select_related("category", "cover_image").order_by("-created_at")
        paginator = Paginator(all_articles, self.articles_per_page)
        page_number = request.GET.get("page")
        try:
            articles = paginator.page(page_number)
        except PageNotAnInteger:
            articles = paginator.page(1)
        except EmptyPage:
            articles = paginator.page(paginator.num_pages)
        return self.render(
            request,
            context_overrides={"articles": articles, "paginator": paginator},
            template="articles/article_list.html",
        )

    @path("<slug:article_slug>/")
    def article_detail(self, request, article_slug):
        article = get_object_or_404(
            Article.objects.select_related("category", "cover_image"),
            slug=article_slug,
        )
        return self.render(
            request,
            context_overrides={"article": article},
            template="articles/article_detail.html",
        )
4. StreamField Blocks
myproject/apps/articles/blocks.py
Create Wagtail stream-field blocks for the cover image, description, and related articles of an article. Each block can have settings that control how its content is presented.
from django.utils.translation import gettext_lazy as _
from wagtail import blocks
class CoverImageBlock(blocks.StructBlock):
    aspect_ratio = blocks.ChoiceBlock(
        choices=[
            ("16-9", _("16:9 Widescreen")),
            ("4-3", _("4:3 Standard")),
            ("1-1", _("1:1 Square")),
            ("3-1", _("3:1 Banner")),
        ],
        default="16-9",
        label=_("Aspect ratio"),
        help_text=_("Controls the cropping of the cover image."),
    )

    class Meta:
        template = "articles/blocks/cover_image_block.html"
        icon = "image"
        label = _("Cover Image")


class DescriptionBlock(blocks.StructBlock):
    max_lines = blocks.IntegerBlock(
        min_value=0,
        default=0,
        label=_("Maximum lines"),
        help_text=_("Clamp the description to this many lines. Set to 0 to show all."),
        required=False,
    )

    class Meta:
        template = "articles/blocks/description_block.html"
        icon = "pilcrow"
        label = _("Description")


class RelatedArticlesBlock(blocks.StructBlock):
    sort_order = blocks.ChoiceBlock(
        choices=[
            ("newest", _("Newest first")),
            ("oldest", _("Oldest first")),
            ("title_asc", _("Title A → Z")),
            ("title_desc", _("Title Z → A")),
        ],
        default="newest",
        label=_("Sort order"),
        help_text=_("Order in which related articles are listed."),
    )

    def get_context(self, value, parent_context=None):
        context = super().get_context(value, parent_context=parent_context)
        article = (parent_context or {}).get("article")
        if not article or not article.category_id:
            context["related_articles"] = []
            return context
        from .models import Article
        sort_map = {
            "newest": "-created_at",
            "oldest": "created_at",
            "title_asc": "title",
            "title_desc": "-title",
        }
        context["related_articles"] = (
            Article.objects.select_related("category", "cover_image")
            .filter(category=article.category)
            .exclude(pk=article.pk)
            .order_by(sort_map.get(value["sort_order"], "-created_at"))[:3]
        )
        return context

    class Meta:
        template = "articles/blocks/related_articles_block.html"
        icon = "list-ul"
        label = _("Related Articles")


def article_detail_layout_blocks():
    """
    Returns the list of (name, block) tuples used in ArticleIndexPage.detail_layout.
    Defined as a function so models.py can import it without circular issues.
    """
    return [
        ("cover_image", CoverImageBlock()),
        ("description", DescriptionBlock()),
        ("related_articles", RelatedArticlesBlock()),
    ]
The RelatedArticlesBlock here also has a customized context: we pass a related_articles variable containing three other articles from the same category, sorted by the sort order defined in the block.
5. Templates
articles/article_list.html
This will be the template for the paginated article list. Later you could augment it with a search form and filters.
{% extends "base.html" %}
{% load wagtailcore_tags wagtailimages_tags i18n wagtailroutablepage_tags %}
{% block content %}
<main class="article-index">
<h1>{{ page.title }}</h1>
<ul class="article-list">
{% for article in articles %}
<li class="article-card">
{% if article.cover_image %}{% image article.cover_image width-400 as img %}
<img src="{{ img.url }}" alt="{{ article.title }}">
{% endif %}
<h2>
<a href="{% routablepageurl page "article_detail" article.slug %}">{{ article.title }}</a>
</h2>
{% if article.category %}<span class="badge">{{ article.category.name }}</span>{% endif %}
<p>{{ article.description|truncatewords:30 }}</p>
</li>
{% empty %}
<li>{% trans "No articles yet." %}</li>
{% endfor %}
</ul>
{% if articles.has_other_pages %}
<nav class="pagination" aria-label="{% trans 'Article pagination' %}">
{% if articles.has_previous %}
<a href="?page={{ articles.previous_page_number }}">{% trans "← Previous" %}</a>
{% endif %}
<span>{% blocktrans with num=articles.number total=articles.paginator.num_pages %}Page {{ num }} of {{ total }}{% endblocktrans %}</span>
{% if articles.has_next %}
<a href="?page={{ articles.next_page_number }}">{% trans "Next →" %}</a>
{% endif %}
</nav>
{% endif %}
</main>
{% endblock %}
articles/article_detail.html
The detail page uses {% include_block page.detail_layout with article=article page=page %} to pass the article into the context of each block:
{% extends "base.html" %}
{% load i18n wagtailcore_tags wagtailroutablepage_tags %}
{% block content %}
<article class="article-detail">
<header>
<h1>{{ article.title }}</h1>
{% if article.category %}<span class="badge">{{ article.category.name }}</span>{% endif %}
</header>
{% include_block page.detail_layout with article=article page=page %}
<p>
<a href="{% routablepageurl page "article_list" %}">{% trans "← Back to all articles" %}</a>
</p>
</article>
{% endblock %}
articles/blocks/cover_image_block.html
Cover image block would show the article cover image with the aspect ratio set in the block:
{% load wagtailimages_tags %}
{% if article.cover_image %}
<div class="cover-image cover-image--{{ value.aspect_ratio }}">
{% image article.cover_image width-1200 as img %}
<img src="{{ img.url }}" alt="{{ article.title }}">
</div>
{% endif %}
articles/blocks/description_block.html
The description block would clamp the article description to the maximum number of lines set in the block:
<section class="article-description">
<p{% if value.max_lines > 0 %} class="line-clamp" style="-webkit-line-clamp: {{ value.max_lines }};"{% endif %}>
{{ article.description }}
</p>
</section>
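The line-clamp class used here isn't defined in the article; it presumably relies on the standard WebKit line-clamp pattern, roughly:

```css
/* Sketch: multi-line truncation. The number of lines comes from the
   inline -webkit-line-clamp style set by the block template. */
.line-clamp {
    display: -webkit-box;
    -webkit-box-orient: vertical;
    overflow: hidden;
}
```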
articles/blocks/related_articles_block.html
The related articles block would list the related articles as defined in the extra context of the block:
{% load i18n wagtailimages_tags wagtailroutablepage_tags %}
{% if related_articles %}
<section class="related-articles">
<h2>{% trans "Related Articles" %}</h2>
<ul class="related-articles__list">
{% for rel in related_articles %}
<li class="related-card">
{% if rel.cover_image %}{% image rel.cover_image width-400 as img %}
<img src="{{ img.url }}" alt="{{ rel.title }}">
{% endif %}
<div class="related-card__body">
{% if rel.category %}<span class="badge">{{ rel.category.name }}</span>{% endif %}
<h3>
<a href="{% routablepageurl page "article_detail" rel.slug %}">{{ rel.title }}</a>
</h3>
<p>{{ rel.description|truncatewords:20 }}</p>
</div>
</li>
{% endfor %}
</ul>
</section>
{% endif %}
6. Django Admin Registration
articles/admin.py
Let's not forget to register admin views for the categories and articles so that we can add some data there:
from django.contrib import admin
from .models import Article, Category
@admin.register(Category)
class CategoryAdmin(admin.ModelAdmin):
    list_display = ("name", "slug")
    prepopulated_fields = {"slug": ("name",)}


@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ("title", "category", "created_at")
    list_filter = ("category",)
    search_fields = ("title", "description")
    prepopulated_fields = {"slug": ("title",)}
7. Migrations and Initial Data
python manage.py makemigrations articles
python manage.py migrate
python manage.py createsuperuser
python manage.py runserver
8. Wagtail Admin Setup
1. Open http://localhost:8000/cms/ and log in.
2. In the Pages explorer, create an Article Index Page as a child of the root page. Set the Slug to articles.
3. On the List View tab, set Articles per page (e.g. 24).
4. On the Detail View tab, open the Detail Layout StreamField and add blocks in your preferred order:
   - Cover Image - choose an aspect ratio.
   - Description - optionally set a maximum line count to clamp long descriptions.
   - Related Articles - choose the sort order for the three related articles shown.
5. Publish the page.
6. In the Django admin (/django-admin/), create some Categories and Articles with cover images and descriptions.
7. Visit http://localhost:8000/articles/ for the paginated list.
8. Click any article to see the detail view rendered using the StreamField layout you configured in step 4.
Final words
Using stream fields, we can render not only editorial content (for example, images or rich-text descriptions) but also dynamic content based on values from other models and on the context of the given template.
The approach illustrated in this article allows us to create Wagtail pages where content editors have freedom to adjust the layouts of the pages or insert blocks, such as ads or info texts, into specific places based on real-time events.
21 Mar 2026 5:00pm GMT