13 Feb 2026
Django community aggregator: Community blog posts
Django News - The Post-Heroku Django World - Feb 13th 2026
News
Django Steering Council 2025 Year in Review
They've been busy! A new-features repo, Community Ecosystem page, administrative bits, and more.
Read the Docs: Making search faster for all projects
Read the Docs massively improved search latency by reindexing into multiple shards, tuning Elasticsearch queries and client, and fixing Django ORM N+1s and caching.
Releases
Python Insider: Python 3.15.0 alpha 6
Python 3.15.0a6 preview highlights a new low-overhead sampling profiler, UTF-8 default encoding, JIT performance gains, unpacking in comprehensions, and typing improvements.
Python Software Foundation
Python is for Everyone
Georgi from the PSF Diversity and Inclusion Working Group talks about the history of these efforts and most importantly, why it matters for all of us.
Django Fellow Reports
Fellow Report - Natalia
3 tickets triaged, 2 reviewed, 1 authored, security work, and other misc.
Fellow Report - Jacob
8 tickets triaged, 18 reviewed, 6 authored, 2 discussed, and other misc.
Wagtail CMS News
Wagtail nominated for TWO CMS Critic Awards!
Wagtail CMS is up for some trophies.
Updates to Django
Today, "Updates to Django" is presented by Hwayoung from Djangonaut Space! π
Last week we had 11 pull requests merged into Django by 8 different contributors - including 2 first-time contributors! Congratulations to Patryk Bratkowski and ar3ph for having their first commits merged into Django - welcome on board!
Fixed horizontal form field alignment issues within <fieldset> in the admin (#36788).
Django Newsletter
Sponsored Link 1
PyTV - Free Online Python Conference (March 4th)
1 Day, 15 Speakers, 6 hours of live talks including talks from Sarah Boyce, Sheena O'Connell, Carlton Gibson, and Will Vincent. Sign up and save the date!
Articles
Django Developer Salary Report 2026
An annual report from Foxley Talent on what's actually happening in the market.
Sorting Strategies for Optional Fields in Django
How to control NULL value placement when sorting Django QuerySets using F() expressions.
How to dump Django ORM data to JSON while debugging?
Sometimes, I need to debug specific high-level tests by inspecting what gets created in the database as a side effect. I could use a debugger and poke around the Django ORM at a breakpoint - but quite often it's simply faster to dump the entire table to JSON, see what's there, and then apply fixes accordingly.
Introducing: Yapping, Yet Another Python Packaging (Manager)
Yapping automates adding dependencies to pyproject.toml and running pip-tools compile/install, providing a simple, non-lockfile Python dependency workflow for Django projects.
Python: introducing icu4py, bindings to the Unicode ICU library
icu4py provides Pythonic bindings to ICU4C for locale-aware text boundary analysis and MessageFormat pluralization, enabling precise internationalization in Django apps.
Loopwerk: It's time to leave Heroku
Heroku is winding down; migrate Django apps now to alternatives like Fly.io, Render, or self-hosted Coolify and Hetzner to regain control, reliability, and lower costs.
Heroku Is (Finally, Officially) Dead
Analyzing the official announcement and reviewing hosting alternatives in 2026.
Videos
django-bolt - Rust-powered API Framework for Django
An overview from BugBytes on the new django-bolt package, describing what it is and how to use it!
Sponsored Link 2
Sponsor This Newsletter!
Reach 4,300+ highly-engaged and experienced Django developers.
Podcasts
Django Chat #195: Improving Django with Adam Hill
Adam is the co-host of the Django Brew podcast and a prolific contributor to the Django ecosystem, author of a multitude of Django projects including django-unicorn, coltrane, dj-angles, and many more.
Django Job Board
Lead Backend Engineer at TurnTable
Python Developer REST APIs - Immediate Start at Worx-ai
Backend Software Developer at Chartwell Resource Group Ltd.
Senior Django Developer at SKYCATCHFIRE
Django Newsletter
Projects
JohananOppongAmoateng/django-migration-audit
A forensic Django tool that verifies whether a live database schema is historically consistent with its applied migrations.
G4brym/django-cf
A set of tools to integrate Django with Cloudflare Developer platform.
DjangoAdminHackers/django-linkcheck
An app that will analyze and report on links in any model that you register with it. Links can be bare (urls or image and file fields) or embedded in HTML (linkcheck handles the parsing). It's fairly easy to override methods of the Linkcheck object should you need to do anything more complicated (like generate URLs from slug fields etc).
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
13 Feb 2026 5:00pm GMT
Use your Claude Max subscription as an API with CLIProxyAPI
So here's the thing: I'm paying $100/month for Claude Max. I use it a lot, it's worth it. But then I wanted to use my subscription with my Emacs packages - specifically forge-llm (which I wrote!) for generating PR descriptions in Forge, and magit-gptcommit for auto-generating commit messages in Magit. Both packages use the llm package, which supports OpenAI-compatible endpoints.
The problem? Anthropic blocks OAuth tokens from being used directly with third-party API clients. You have to pay for API access separately. 🤔
That felt wrong. I'm already paying for the subscription, why can't I use it however I want?
Turns out, there's a workaround. The Claude Code CLI can use OAuth tokens. So if you put a proxy in front of it that speaks the OpenAI API format, you can use your Max subscription with basically anything that supports OpenAI endpoints. And that's exactly what CLIProxyAPI does.
Your App (Emacs llm package, scripts, whatever)
        ↓
HTTP Request (OpenAI format)
        ↓
CLIProxyAPI
        ↓
OAuth Token (from your Max subscription)
        ↓
Anthropic API
        ↓
Response → OpenAI format → Your App
No extra API costs. Just your existing subscription. Sweet!
Why CLIProxyAPI and not something else?
I actually tried claude-max-api-proxy first. It worked! But the model list was outdated (no Opus 4.5, no Sonnet 4.5), it's a Node.js project that wraps the CLI as a subprocess, and it felt a bit… abandoned.
CLIProxyAPI is a completely different story:
- Single Go binary. No Node.js, no Python, no runtime dependencies. Just download and run.
- Actively maintained. Like, very actively. Frequent releases, big community, ecosystem tools everywhere (desktop GUI, web dashboard, AUR package, Docker images, the works).
- Multi-provider. Not just Claude: it also supports Gemini, OpenAI Codex, Qwen, and more. You can even round-robin between multiple OAuth accounts.
- All the latest models. It uses the full dated model names (e.g., claude-sonnet-4-20250514), so you're always up to date.
What you'll need
- An active Claude Max subscription ($100/month). Claude Pro works too, but with lower rate limits.
- A machine running Linux or macOS.
- A web browser for the OAuth flow (or use --no-browser if you're on a headless server).
Installation
Linux
There's a community installer that does everything for you: downloads the latest binary to ~/cliproxyapi/, generates API keys, creates a systemd service:
curl -fsSL https://raw.githubusercontent.com/brokechubb/cliproxyapi-installer/refs/heads/master/cliproxyapi-installer | bash
If you're on Arch (btw):
yay -S cli-proxy-api-bin
macOS
Homebrew. Easy:
brew install cliproxyapi
Authenticating with Claude
Before the proxy can use your subscription, you need to log in:
# Linux
cd ~/cliproxyapi
./cli-proxy-api --claude-login
# macOS (Homebrew)
cliproxyapi --claude-login
This opens your browser for the OAuth flow. Log in with your Claude account, authorize it, done. The token gets saved to ~/.cli-proxy-api/.
If you're on a headless machine, add --no-browser and it'll print the URL for you to open elsewhere:
./cli-proxy-api --claude-login --no-browser
Configuration
The installer generates a config.yaml with random API keys. These are keys that clients use to authenticate to your proxy, not Anthropic keys.
Here's what I'm running:
# Bind to localhost only since I'm using it locally
host: "127.0.0.1"
# Server port
port: 8317
# Authentication directory
auth-dir: "~/.cli-proxy-api"
# No client auth needed for local-only use
api-keys: []
# Keep it quiet
debug: false
The important bit is api-keys: []. Setting it to an empty list disables client authentication, which means any app on your machine can hit the proxy without needing a key. This is fine if you're only using it locally.
If you're exposing the proxy to your network (e.g., you want to hit it from your phone or another machine), keep the generated API keys and also set host: "" so it binds to all interfaces. You don't want random people on your network burning through your subscription.
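For reference, a network-exposed config might look something like this. This is just a sketch based on the options shown above - the key value is a placeholder, so check the CLIProxyAPI docs for the exact schema before copying it:
# Bind all interfaces so other machines can reach the proxy
host: ""
port: 8317
auth-dir: "~/.cli-proxy-api"
# Clients must now send one of these as their API key
api-keys:
  - "sk-proxy-your-generated-key"
debug: false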
Starting the service
Linux (systemd)
The installer creates a systemd user service for you:
systemctl --user enable --now cliproxyapi.service
systemctl --user status cliproxyapi.service
Or just run it manually to test first:
cd ~/cliproxyapi
./cli-proxy-api
macOS (Homebrew)
brew services start cliproxyapi
Testing it
Let's make sure everything works:
# List available models
curl http://localhost:8317/v1/models
# Chat completion
curl -X POST http://localhost:8317/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-20250514",
"messages": [{"role": "user", "content": "Say hello in one sentence."}]
}'
# Streaming (note the -N flag to disable curl buffering)
curl -N -X POST http://localhost:8317/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-20250514",
"messages": [{"role": "user", "content": "Say hello in one sentence."}],
"stream": true
}'
If you get a response from Claude, you're golden.
Using it with Emacs
This is the fun part. Both forge-llm and magit-gptcommit use the llm package for their LLM backend. The llm package has an OpenAI-compatible provider, so we just need to point it at our proxy.
Setting up the llm provider
First, make sure you have the llm package installed. Then configure an OpenAI provider that points to CLIProxyAPI:
(require 'llm-openai)
(setq my/claude-via-proxy
(make-llm-openai-compatible
:key "not-needed"
:chat-model "claude-sonnet-4-20250514"
:url "http://localhost:8317/v1"))
That's it. That's the whole LLM setup. Now we can use it everywhere.
forge-llm (PR descriptions)
I wrote forge-llm to generate PR descriptions in Forge using LLMs. It analyzes the git diff, picks up your repository's PR template, and generates a structured description. To use it with CLIProxyAPI:
(use-package forge-llm
:after forge
:config
(forge-llm-setup)
(setq forge-llm-llm-provider my/claude-via-proxy))
Now when you're creating a PR in Forge, you can hit SPC m g (Doom) or run forge-llm-generate-pr-description and Claude will write the description based on your diff. Using your subscription. No API key needed.
magit-gptcommit (commit messages)
magit-gptcommit does the same thing but for commit messages. It looks at your staged changes and generates a conventional commit message. Setup:
(use-package magit-gptcommit
:after magit
:config
(setq magit-gptcommit-llm-provider my/claude-via-proxy)
(magit-gptcommit-mode 1)
(magit-gptcommit-status-buffer-setup))
Now in the Magit commit buffer, you can generate a commit message with Claude. Again, no separate API costs.
Any other llm-based package
The beauty of the llm package is that any Emacs package that uses it can benefit from this setup. Just pass my/claude-via-proxy as the provider. Some other packages that use llm: ellama, ekg, llm-refactoring. They'll all work with your Max subscription through the proxy.
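For example, pointing ellama at the proxy should be a one-liner - an untested sketch on my part, assuming ellama's standard ellama-provider variable:
;; Reuse the provider we defined earlier for ellama
(setq ellama-provider my/claude-via-proxy)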
Using it with other tools
Since CLIProxyAPI speaks the OpenAI API format, it works with anything that supports custom OpenAI endpoints. The magic three settings are always the same:
- Base URL: http://localhost:8317/v1
- API key: not-needed (or your proxy key if you have auth enabled)
- Model: claude-sonnet-4-20250514, claude-opus-4-20250514, etc.
Here's a Python example using the OpenAI SDK:
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8317/v1",
api_key="not-needed"
)
response = client.chat.completions.create(
model="claude-sonnet-4-20250514",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
Available models
CLIProxyAPI exposes all models available through your subscription. The names use the full dated format. You can always check the list with:
curl -s http://localhost:8317/v1/models | jq '.data[].id'
At the time of writing, you'll get Claude Opus 4, Sonnet 4, Sonnet 4.5, Haiku 4.5, and whatever else Anthropic has made available to Max subscribers.
How much does this save?
If you're already paying for Claude Max, this is basically free API access. For context:
| Usage | API Cost | With CLIProxyAPI |
|---|---|---|
| 1M input tokens/month | ~$15 | $0 (included) |
| 500K output tokens/month | ~$37.50 | $0 (included) |
| Monthly Total | ~$52.50 | $0 extra |
And those numbers add up quick when you're generating PR descriptions and commit messages all day. I was getting to the point where my API costs were approaching the subscription price, which is silly when you think about it.
Conclusion
The whole setup took me about 10 minutes. Download binary, authenticate, edit config, start service, point my Emacs llm provider at it. That's it.
What I love about CLIProxyAPI is that it's exactly the kind of tool I appreciate: a single binary, a YAML config, does one thing well, and gets out of your way. No magic, no framework, no runtime dependencies. And since it's OpenAI-compatible, it plays nicely with the entire llm package ecosystem in Emacs.
The project is at https://github.com/router-for-me/CLIProxyAPI and the community is very active. If you run into issues, their GitHub issues are responsive.
See you in the next one!
13 Feb 2026 6:00am GMT
11 Feb 2026
Django community aggregator: Community blog posts
Improving Django - Adam Hill
🔗 Links
- Adam's Personal Website
- Django Brew podcast
- Adam's GitHub profile
- Redesigned Django Homepage
- AllDjango.com
- new-features
- django-api-frameworks
📦 Projects
📚 Books
- A River Runs Through It by Norman Maclean
- Nightmare Alley film
- London Review of Books & Le Monde Diplomatique
🎥 YouTube
Sponsor
This episode was brought to you by Buttondown, the easiest way to start, send, and grow your email newsletter. New customers can save 50% off their first year with Buttondown using the coupon code DJANGO.
11 Feb 2026 5:00pm GMT
09 Feb 2026
Django community aggregator: Community blog posts
Claude Code from the beach: My remote coding setup with mosh, tmux and ntfy
I recently read this awesome post by Granda about running Claude Code from a phone, and I thought: I need this in my life. The idea is simple: kick off a Claude Code task, pocket the phone, go do something fun, and get a notification when Claude needs your help or finishes working. Async development from anywhere.
But my setup is a bit different from his. I'm not using Tailscale or a cloud VM. I already have a WireGuard VPN connecting my devices, a home server, and a self-hosted ntfy instance. So I built my own version, tailored to my infrastructure.
Here's the high-level architecture:
┌──────────┐    mosh     ┌─────────────┐    ssh    ┌─────────────┐
│  Phone   │────────────▶│ Home Server │──────────▶│   Work PC   │
│ (Termux) │  WireGuard  │ (Jump Box)  │    LAN    │(Claude Code)│
└──────────┘             └─────────────┘           └──────┬──────┘
     ▲                                                    │
     │                     ntfy (HTTPS)                   │
     └────────────────────────────────────────────────────┘
The loop is: I'm at the beach, I type cc on my phone, I land in a tmux session with Claude Code. I give it a task, pocket the phone, and go back to whatever I was doing. When Claude has a question or finishes, my phone buzzes. I pull it out, respond, pocket it again. Development fits into the gaps of the day.
And here's what the async development loop looks like in practice:
📱 Phone                      💻 Work PC                🌐 ntfy
    │                             │                        │
    ├──── type 'cc' ─────────────▶│                        │
    ├──── give Claude a task ────▶│                        │
    │                             │                        │
    │   ┌───────────────┐         │                        │
    │   │ pocket phone  │         │                        │
    │   └───────────────┘         │                        │
    │                             │                        │
    │                             ├── hook fires ─────────▶│
    │◀─── "Claude needs input" ────────────────────────────│
    │                             │                        │
    ├──── respond ───────────────▶│                        │
    │                             │                        │
    │   ┌───────────────┐         │                        │
    │   │ pocket phone  │         │                        │
    │   └───────────────┘         │                        │
    │                             │                        │
    │                             ├── hook fires ─────────▶│
    │◀─── "Task complete" ────────────────────────────────-│
    │                             │                        │
    ├──── review, approve PR ────▶│                        │
    │                             │                        │
Why not just use the blog post's setup?
Granda's setup uses Tailscale for VPN, a Vultr cloud VM, Termius as the mobile terminal, and Poke for notifications. It's clean and it works. But I had different constraints:
- I already have a WireGuard VPN running wg-quick on a server that connects all my devices. No need for Tailscale.
- I didn't want to pay for a cloud VM. My work PC is more than powerful enough to run Claude Code.
- I self-host ntfy for notifications, so no need for Poke or any external notification service.
- I use Termux on Android, not Termius on iOS.
If you don't have this kind of infrastructure already, Granda's approach is probably simpler. But if you're the kind of person who already has a WireGuard mesh and self-hosted services, this guide is for you.
The pieces
| Component | Purpose | Alternatives |
|---|---|---|
| WireGuard | VPN to reach home network | Tailscale, Zerotier, Nebula |
| mosh | Network-resilient shell (phone leg) | Eternal Terminal (et), plain SSH |
| SSH | Secure connection (LAN leg) | mosh (if you want it end-to-end) |
| tmux | Session persistence | screen, zellij |
| Claude Code | The actual work | - |
| ntfy | Push notifications | Pushover, Gotify, Poke, Telegram |
| Termux | Android terminal emulator | Termius, JuiceSSH, ConnectBot |
| fish shell | Shell on all machines | zsh, bash |
The key insight is that you need two different types of resilience: mosh handles the flaky mobile connection (WiFi to cellular transitions, dead zones, phone sleeping), while tmux handles session persistence (close the app, reopen hours later, everything's still there). Together they make mobile development actually viable.
Why the double SSH? Why not make the work PC a WireGuard peer?
You might be wondering: if I already have a WireGuard network, why not just add the work PC as a peer and mosh straight into it from my phone?
The short answer: it's my employer's machine. It has monitoring software installed: screen grabbing, endpoint policies, the works. Installing WireGuard on it would mean running a VPN client that tunnels traffic through my personal infrastructure, which is the kind of thing that raises flags with IT security. I don't want to deal with that conversation.
SSH, on the other hand, is standard dev tooling. An openssh-server on a Linux machine is about as unremarkable as it gets.
So instead, my home server acts as a jump box. My phone connects to the home server over WireGuard (that's all personal infrastructure, no employer involvement), and then the home server SSHs into the work PC over the local network. The work PC only needs an SSH server, no VPN client, no weird tunnels, nothing that would make the monitoring software blink.
┌────────────────────────────────────────────────────┐
│                 My Infrastructure                  │
│                                                    │
│   ┌───────────┐    WireGuard    ┌──────────────┐   │
│   │   Phone   │────────────────▶│  WG Server   │   │
│   │  (peer)   │     tunnel      │              │   │
│   └─────┬─────┘                 └──────┬───────┘   │
│         │                              │           │
│         │ mosh                WireGuard│           │
│         │ (through tunnel)       tunnel│           │
│         ▼                              ▼           │
│                ┌──────────────┐                    │
│                │ Home Server  │                    │
│                │   (peer)     │                    │
│                └──────┬───────┘                    │
│                       │                            │
└───────────────────────┼────────────────────────────┘
                        │
                        │ ssh (LAN)
                        │
┌───────────────────────┼────────────────────────────┐
│                       ▼                            │
│               ┌──────────────┐                     │
│               │   Work PC    │                     │
│               │  (SSH only)  │  Employer           │
│               └──────────────┘  Infrastructure     │
└────────────────────────────────────────────────────┘
As a bonus, this means the work PC has zero exposure to the public internet. It only accepts SSH from machines on my local network. Defense in depth.
Phase 1: SSH server on the work PC
My work PC is running Ubuntu 24.04. First thing: install and harden the SSH server.
sudo apt update && sudo apt install -y openssh-server
sudo systemctl enable ssh
Note: on Ubuntu 24.04 the service is called ssh, not sshd. This tripped me up.
Then harden the config. I created /etc/ssh/sshd_config with:
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowAgentForwarding no
X11Forwarding no
UsePAM yes
MaxAuthTries 3
ClientAliveInterval 60
ClientAliveCountMax 3
Key-only auth, no root login, no password auth. Since the machine is only accessible through my local network, this is plenty secure.
Setting up SSH keys for the home server → work PC connection
On the home server, generate a key pair if you don't already have one:
ssh-keygen -t ed25519 -C "homeserver->workpc"
Accept the default path (~/.ssh/id_ed25519). Then copy the public key to the work PC:
ssh-copy-id roger@<work-pc-ip>
Now restart sshd:
sudo systemctl restart ssh
Important: Test the SSH connection from your home server before closing your current session. Don't lock yourself out.
# From the home server
ssh roger@<work-pc-ip>
If it drops you into a shell without asking for a password, you're golden.
Alternative: Tailscale
If you don't have a WireGuard setup, Tailscale is the easiest way to get a private network going. Install it on your phone and your work PC, and they can see each other directly. No jump host needed, no port forwarding, no firewall rules. It's honestly magic for this kind of thing. The only reason I don't use it is because I already had WireGuard running before Tailscale existed.
Phase 2: tmux + auto-attach
The idea here is simple: every time I SSH into the work PC, I want to land directly in a tmux session. If the session already exists, attach to it. If not, create one.
First, ~/.tmux.conf:
# mouse support (essential for thumbing it on the phone)
set -g mouse on
# start window numbering at 1 (easier to reach on phone keyboard)
set -g base-index 1
setw -g pane-base-index 1
# status bar
set -g status-style 'bg=colour235 fg=colour136'
set -g status-left '#[fg=colour46][#S] '
set -g status-right '#[fg=colour166]%H:%M'
set -g status-left-length 30
# longer scrollback
set -g history-limit 50000
# reduce escape delay (makes editors snappier over SSH)
set -sg escape-time 10
# keep sessions alive
set -g destroy-unattached off
Mouse support is essential when you're using your phone. Being able to tap to select panes, scroll with your finger, and resize things makes a massive difference.
Then in ~/.config/fish/config.fish on the work PC:
if set -q SSH_CONNECTION; and not set -q TMUX
tmux attach -t claude 2>/dev/null; or tmux new -s claude -c ~/projects/my-app
end
This checks for SSH_CONNECTION so it only auto-attaches when I'm remoting in. When I'm physically at the machine, I use the terminal normally without tmux. This distinction becomes important later for notifications.
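If you use bash or zsh instead of fish, the equivalent logic (a quick sketch of the same check, e.g. in ~/.bashrc) would be:
if [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
  # Attach to the existing session, or create one rooted in the project
  tmux attach -t claude 2>/dev/null || tmux new -s claude -c ~/projects/my-app
fi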
Phase 3: Claude Code hooks + ntfy
This is the fun part. Claude Code has a hook system that lets you run commands when certain events happen. We're going to hook into three events:
- AskUserQuestion: Claude needs my input. High priority notification.
- Stop: Claude finished the task. Normal priority.
- Error: Something broke. High priority.
The notification script
First, the script that sends notifications. I created ~/.claude/hooks/notify.sh:
#!/usr/bin/env bash
# Only notify if we're in an SSH-originated tmux session
if ! tmux show-environment SSH_CONNECTION 2>/dev/null | grep -q SSH_CONNECTION=; then
exit 0
fi
EVENT_TYPE="${1:-unknown}"
NTFY_URL="https://ntfy.example.com/claude-code"
NTFY_TOKEN="tk_your_token_here"
EVENT_DATA=$(cat)
case "$EVENT_TYPE" in
question)
TITLE="π€ Claude needs input"
PRIORITY="high"
MESSAGE=$(echo "$EVENT_DATA" | jq -r '.tool_input.question // .tool_input.questions[0].question // "Claude has a question for you"' 2>/dev/null)
;;
stop)
TITLE="β
Claude finished"
PRIORITY="default"
MESSAGE="Task complete"
;;
error)
TITLE="β Claude hit an error"
PRIORITY="high"
MESSAGE=$(echo "$EVENT_DATA" | jq -r '.error // "Something went wrong"' 2>/dev/null)
;;
*)
TITLE="Claude Code"
PRIORITY="default"
MESSAGE="Event: $EVENT_TYPE"
;;
esac
PROJECT=$(basename "$PWD")
curl -s \
-H "Authorization: Bearer $NTFY_TOKEN" \
-H "Title: $TITLE" \
-H "Priority: $PRIORITY" \
-H "Tags: computer" \
-d "[$PROJECT] $MESSAGE" \
"$NTFY_URL" > /dev/null 2>&1
chmod +x ~/.claude/hooks/notify.sh
The SSH_CONNECTION check at the top is crucial: it prevents notifications from firing when I'm sitting at the machine. Since I only use tmux when SSHing in remotely, the tmux environment will only have SSH_CONNECTION set when I'm remote. Neat trick.
Claude Code settings
Then in ~/.claude/settings.json:
{
"hooks": {
"PreToolUse": [
{
"matcher": "AskUserQuestion",
"hooks": [
{
"type": "command",
"command": "~/.claude/hooks/notify.sh question"
}
]
}
],
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "~/.claude/hooks/notify.sh stop"
}
]
}
]
}
}
This is the global settings file. If your project also has a .claude/settings.json, they'll be merged. No conflicts.
ntfy setup
I'm self-hosting ntfy, so I created a topic and an access token:
# Inside your ntfy server/container
ntfy token add --expires=30d your-username
ntfy access your-username claude-code rw
ntfy access everyone claude-code deny
ntfy topics are created on demand, so just subscribing to one creates it. On the Android ntfy app, I pointed it at my self-hosted instance and subscribed to the claude-code topic.
You can test the whole thing works with:
echo '{"tool_input":{"question":"Should I refactor this?"}}' | ~/.claude/hooks/notify.sh question
echo '{}' | ~/.claude/hooks/notify.sh stop
echo '{"error":"ModuleNotFoundError: No module named foo"}' | ~/.claude/hooks/notify.sh error
Three notifications, three different priorities. Very satisfying.
Alternative notification systems
If you don't want to self-host ntfy, here are some options:
- ntfy.sh: The public instance of ntfy. Free, no setup, just pick a random-ish topic name. The downside is that anyone who knows your topic name can send you notifications.
- Pushover: $5 one-time purchase per platform. Very reliable, nice API. The notification script would be almost identical, just a different curl call (see the sketch after this list).
- Gotify: Self-hosted like ntfy, but uses WebSockets instead of HTTP. Good if you're already running it.
- Telegram Bot API: Free, easy to set up. Create a bot with BotFather, get your chat ID, and curl the sendMessage endpoint.
- Poke: What Granda uses in his post. Simple webhook-to-push service.
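To make that concrete, here's roughly what the curl call in notify.sh would become for Pushover or Telegram. A sketch only - the tokens and IDs are placeholders you'd get from each service:
# Pushover: POST to their messages endpoint
curl -s \
  --form-string "token=$PUSHOVER_APP_TOKEN" \
  --form-string "user=$PUSHOVER_USER_KEY" \
  --form-string "title=$TITLE" \
  --form-string "message=[$PROJECT] $MESSAGE" \
  https://api.pushover.net/1/messages.json

# Telegram: the Bot API's sendMessage endpoint
curl -s "https://api.telegram.org/bot$TG_BOT_TOKEN/sendMessage" \
  -d "chat_id=$TG_CHAT_ID" \
  -d "text=$TITLE: [$PROJECT] $MESSAGE"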
Phase 4: Termux setup
Termux is the terminal emulator on my Android phone. Here's how I set it up.
pkg update && pkg install -y mosh openssh fish
SSH into your phone (for easier setup)
Configuring all of this on a phone keyboard is painful. I set up sshd on Termux so I could configure it from my PC.
In ~/.config/fish/config.fish:
sshd 2>/dev/null
This starts sshd every time you open Termux. If it's already running, it silently fails. Termux runs sshd on port 8022 by default.
First, set a password on Termux (you'll need it for the initial key copy):
passwd
Then from your PC, copy your key and test the connection:
ssh-copy-id -p 8022 <phone-ip>
ssh -p 8022 <phone-ip>
Now you can configure Termux comfortably from your PC keyboard.
Generating SSH keys on the phone
On Termux, generate a key pair:
ssh-keygen -t ed25519 -C "phone"
Then copy it to your home server:
ssh-copy-id <your-user>@<home-server-wireguard-ip>
This gives you passwordless phone → home server. Since we already set up home server → work PC keys in Phase 1, the full chain is now passwordless.
SSH config
The SSH config is where the magic happens. On Termux:
Host home
HostName <home-server-wireguard-ip>
User <your-user>
Host work
HostName <work-pc-ip>
User roger
ProxyJump home
ProxyJump is the key: ssh work automatically hops through the home server. No manual double-SSHing.
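If you've never used ProxyJump: it's the config-file form of SSH's -J flag, so ssh work is equivalent to spelling out the hop by hand:
ssh -J home work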
Fish aliases
These are the aliases that make everything a one-command operation:
# Connect to work PC, land in tmux with Claude Code ready
alias cc="mosh home -- ssh -t work"
# New tmux window in the claude session
alias cn="mosh home -- ssh -t work 'tmux new-window -t claude -c \$HOME/projects/my-app'"
# List tmux windows
alias cl="ssh work 'tmux list-windows -t claude'"
cc is all I need to type. Mosh handles the phone-to-home-server connection (surviving WiFi/cellular transitions), SSH handles the home-server-to-work-PC hop over the LAN, and the fish config on the work PC auto-attaches to tmux.
Alternative: Termius
If you're on iOS (or just prefer a polished app), Termius is what Granda uses. It supports mosh natively and has a nice UI. The downside is it's a subscription for the full features. Termux is free and gives you a full Linux environment, but it's Android-only and definitely more rough around the edges.
Other options: JuiceSSH (Android, no mosh), ConnectBot (Android, no mosh). Mosh support is really the killer feature here, so Termux or Termius are the best choices.
Phase 5: The full flow
Here's what my actual workflow looks like:
- I'm at the beach/coffee shop/couch/wherever 🏖️
- Open Termux, type cc
- I'm in my tmux session on my work PC
- Start Claude Code, give it a task: "add pagination to the user dashboard API and update the tests"
- Pocket the phone
- Phone buzzes: "π€ Claude needs input - Should I use cursor-based or offset-based pagination?"
- Pull out phone, Termux is still connected (thanks mosh), type "cursor-based, use the created_at field"
- Pocket the phone again
- Phone buzzes: "β Claude finished - Task complete"
- Review the changes, approve the PR, go back to the beach
The key thing that makes this work is the combination of mosh (connection survives me pocketing the phone) + tmux (session survives even if mosh dies) + ntfy (I don't have to keep checking the screen). Without any one of these three, the experience breaks down.
Security considerations
A few things to keep in mind:
- SSH keys only: No password auth anywhere in the chain. Keys are easier to manage and impossible to brute force.
- WireGuard: The work PC is only accessible through my local network. No ports exposed to the public internet.
- ntfy token auth: The notification topic requires authentication. No one else can send you fake notifications or read your Claude Code questions.
- Claude Code in normal mode: Unlike Granda's setup where he runs permissive mode on a disposable VM, my work PC is not disposable. Claude asks before running dangerous commands, which pairs nicely with the notification system.
- tmux SSH check: Notifications only fire when I'm remote. When I'm at the machine, no unnecessary pings.
Conclusion
The whole setup took me about an hour to put together. The actual configuration is pretty minimal: an SSH server, a tmux config, a notification script, and some fish aliases.
What I love about this setup is that it's all stuff I already had. WireGuard was already running, ntfy was already self-hosted, Termux was already on my phone. I just wired them together with a few scripts and some Claude Code hooks.
If you have a similar homelab setup, you can probably get this running in 30 minutes. If you're starting from scratch, Granda's cloud VM approach is probably easier. Either way, async coding from your phone is genuinely a game changer.
See you in the next one!
09 Feb 2026 6:00am GMT
07 Feb 2026
Django community aggregator: Community blog posts
Heroku Is (Finally, Officially) Dead
Analyzing the official announcement and reviewing hosting alternatives in 2026.
07 Feb 2026 11:56am GMT
It's time to leave Heroku
Back in the day Heroku felt like magic for small Django side projects. You pushed to main, it built and deployed automatically, and the free tier was generous enough that you could experiment without ever pulling out a credit card. For a long time, Heroku was the easiest way to get something live without worrying about servers, deployment scripts, or infrastructure at all. Every Python developer I knew recommended it.
Sadly, that era is over.
The slow decline
The problems started piling up in 2022. In April, hackers stole OAuth tokens used for GitHub integration, gaining access to customer repositories. It later emerged that hashed and salted customer passwords were also exfiltrated from an internal database. Heroku forced password resets for all users. Their handling of the incident was widely criticized: they revoked all GitHub integration tokens without warning, breaking deploys for everyone, and communication was slow and vague.
Then in August 2022, Heroku announced they would eliminate all free plans, blaming "fraud and abuse." By November, free dynos, free Postgres databases, and free Redis instances were all gone. Look, I understand this wasn't sustainable for the company. But they lost an entire generation of developers who had grown up with Heroku for their side projects and hobby apps. The same developers who would recommend Heroku at work. Going from free to a minimum of $5-7/month for a dyno plus $5/month for a database doesn't sound like much, but it adds up quickly when you have a few side projects, and it broke the frictionless experience that made Heroku special.
The platform also became unstable. On June 10, 2025, Heroku suffered a massive outage lasting over 15 hours. Dashboard, CLI, and many deployed applications were completely inoperable. Even their status page went down. Eight days later, another outage lasted 8.5 hours. Multiple smaller incidents followed throughout the rest of 2025, affecting SSL, login access, API performance, and logging. As one developer put it on Hacker News: "the last 5 years have been death by a thousand cuts."
And beyond the outages, Heroku simply stopped evolving. They never adopted Docker containers or Kubernetes. Yefim Natis of Gartner described it well: "I think they got frozen in time." Jason Warner, who led engineering at Heroku from 2014 to 2017, was even more blunt: "It started to calcify under Salesforce." No open source strategy, no modern container support, limited observability tooling. Heroku in 2026 runs essentially the same way it did in 2016.
Unsurprisingly, competitors sprung up to fill the void: Fly.io, Railway, Render, DigitalOcean App Platform, and self-hosted solutions like Coolify and Dokku. Developers were already leaving in droves.
The final nail
Yesterday, Heroku published a post titled An Update on Heroku, announcing they are transitioning to a "sustaining engineering model", "with an emphasis on maintaining quality and operational excellence rather than introducing new features." They also stopped offering enterprise contracts to new customers.
The reason? Salesforce (who acquired Heroku back in 2010) is "focusing its product and engineering investments on areas where it can deliver the greatest long-term customer value, including helping organizations build and deploy enterprise-grade AI." Translation: Heroku doesn't make enough money, and Salesforce would rather invest in AI hype.
Simon Willison called the announcement "ominous" and said he plans to migrate his blog off Heroku. A former Heroku product lead described years of underinvestment: "a starvation over a six-, seven-, eight-year period" where more apps and users were supported by fewer engineers.
This is the classic pattern: stop selling new contracts, honor existing ones, then eventually wind down. If you're still on Heroku, the writing is on the wall.
What leaving Heroku looks like
I want to share a concrete example. In 2023 I started working with Sound Radix, who had a SvelteKit app with a Django backend running on Heroku. They were paying $500 per month. Five hundred dollars for what is essentially an e-commerce website. And the performance was terrible: slow builds, sluggish website.
As one of my first tasks, I set up a Debian server on Hetzner and moved everything over. A single dedicated instance running the full stack. Cost: $75/month. Yes, setting up backups and auto-deploys on push took more work than Heroku's push-button experience. But we understood our stack from top to bottom, everything got significantly faster, and we were paying 85% less.
In 2025 we moved to Coolify, a self-hosted PaaS that gives you much of Heroku's developer experience without the lock-in or the price tag. We now run two Hetzner servers: a small $5/month instance for Coolify itself, and a $99/month server for the actual application (the $75/month instance was no longer offered by Hetzner). Setting up Coolify and getting a Django project running on it is really rather easy - I've written about it in detail: Running Django on Coolify and Django static and media files on Coolify.
Final thought
Heroku was genuinely great once. It pioneered the PaaS model and made deployment accessible to an entire generation of developers. But that was a long time ago. Between the security breaches, the death of the free tier, the outages, the technological stagnation, and now the explicit admission that no new features are coming - there's simply no reason to stay.
If you're still on Heroku, don't wait for the sunset announcement. Move now, while it's on your terms.
07 Feb 2026 6:22am GMT
06 Feb 2026
Django community aggregator: Community blog posts
Django News - Django security releases issued: 6.0.2, 5.2.11, and 4.2.28 - Feb 6th 2026
News
Django security releases issued: 6.0.2, 5.2.11, and 4.2.28
Django releases 6.0.2, 5.2.11, and 4.2.28 patch multiple security bugs, including PostGIS SQL injection, ASGI and Truncator denial of service, and timing and user enumeration.
Django Commons: We're recruiting new admins!
Django Commons is recruiting new admins to manage projects, membership, governance, and infrastructure; apply via the Admin Interest Form by March 16, 2026, AOE.
Recent trends in the work of the Django Security Team
Django Security Team sees many repeat vulnerability variations, leading to consistent patching and consideration of rearchitecting areas to reduce low-impact reports.
Releases
Django HealthCheck: Migration to v4.x
Update django-health-check to v4 by removing sub-apps and HEALTH_CHECK settings, reverting test model migration, and using HealthCheckView with explicit checks.
Python Insider: Python 3.14.3 and 3.13.12 are now available!
Python 3.14.3 (and 3.13.12) was released with deferred annotations, free-threaded support, improved async tooling, and other features that impact Django development and deployment.
Python Software Foundation
Your Python. Your Voice. Join the Python Developers Survey 2026!
This year marks the ninth iteration of the official Python Developers Survey.
Wagtail CMS News
An agent skill to upgrade your Wagtail site
Wagtail published an agent skill to plan and optionally perform safe, documentation-driven upgrades to new Wagtail releases while keeping a human in the loop.
Autosave is here in Wagtail 7.3 (and many other great things!)
Wagtail 7.3 adds StreamField block settings and ordering controls for cleaner custom block UIs, plus autosave, greener image defaults, accessibility rules, and docs in markdown.
Updates to Django
Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! π
Last week we had 17 pull requests merged into Django by 11 different contributors - including 2 first-time contributors! Congratulations to Jaffar Khan and Mark for having their first commits merged into Django - welcome on board!
- Fixed a regression in Django 6.0 where auto_now_add field values were not populated during INSERT operations, due to incorrect parameters passed to field.pre_save() (#36847).
- Fixed a visual regression in Django 6.0 that caused the admin filter sidebar to wrap below the changelist when filter elements contained long text (#36850).
- Triaging tickets documentation is updated with a new "Reviewing patches" section, and the "Triage workflow" section is updated to invite more people to start the review process.
Django Newsletter
Articles
Loopwerk: Django's test runner is underrated
Recommend Django's built-in test runner for predictable, minimal magic testing; use parameterized for inputs and switch to pytest only when required.
How to Switch to ty from Mypy
How to switch project type checking from mypy to Astral's ty, including installation, configuration via pyproject.toml, CI GitHub Actions, and pre-commit workarounds.
From Good Code to Reliable Software: A Practical Guide to Production-Ready Python Packages
Practical toolchain and workflows for making Python packages ready for production: reproducible installs, testing, linting, type checking, security scans, CI, and documentation.
Why light-weight websites may one day save your life
On the importance of light-weight websites on this bloated internet.
Why using [n] on a Django QuerySet can be unsafe?
Indexing a QuerySet can return nondeterministic rows because slicing does not add ordering, unlike first, which orders by primary key.
Docs or it's built differently - Priming AI with atomic docs
An opinionated approach to documentation so that it works for developers and AI alike.
Django Job Board
Three new backend gigs worth a click, from shipping REST APIs to going all-in on Django:
Python Developer REST APIs - Immediate Start at Worx-ai
Backend Software Developer at Chartwell Resource Group Ltd.
Senior Django Developer at SKYCATCHFIRE
Django Newsletter
Projects
nanorepublica/django-deadcode
A Django dead code analysis tool that tracks relationships between templates, URLs, and views to help identify and remove unused code.
FarhanAliRaza/django-hawkeye
Django BM25 full-text search using PostgreSQL - a lightweight Elasticsearch alternative.
Django (anti)patterns
A website and repo with 39 common antipatterns, listing them as well as suggested changes. Worth a look!
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
06 Feb 2026 5:00pm GMT
02 Feb 2026
Django community aggregator: Community blog posts
ModelRenderers, a possible new component to Django...
Towards the end of last year I was working with form renderers in my startup to provide a consistent interface for forms across the project. I also used template partials to override widgets, delivering consistent rendering all in one file, a nice win. This made me wonder what other common components in a project get rendered to HTML that could benefit from a single, reusable place to avoid repeating myself.
Carlton's Neapolitan package already has this to some degree. There are two template tag types: one for object detail and one for object list. We also have FormRenderers in Django, which already cascade from project down to an individual form, so perhaps we could apply the same logic to render models in a DRY, configurable way rather than duplicating templates and logic. This made me wonder: could we have a Python class whose role is to define how a model gets rendered? Let's be clear, we're not getting into serializers here and the validation or logic that comes with them; it's similar to the separation of Forms and FormRenderers.
I'm thinking that this idea allows the rendering of an object or list of objects in a template like so:
{{ object }}
{{ object_list }}
This can be controlled via a class described above:
class MyModelRenderer(ModelRenderer):
list_template = ''
detail_template = ''
form_renderer = MyFormRenderer
The above class controls how a list of objects and a single object are rendered, as well as which form renderer to use. The form side opens up the idea of chaining renderers together in order to find the correct template to use. This then links to the idea of having a template snippet for rendering related models. If you have a foreign key or a many-to-many relationship, your renderer could specify how to render itself as a related field. You could chain model renderers together so that, when rendering a related field, it looks up the appropriate snippet instead of rendering the entire detail or the entire list - something like the sketch below.
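Purely speculative, since none of this API exists yet - the related_template and renderers attributes here are invented for illustration - but chained renderers might look something like:

class AuthorRenderer(ModelRenderer):
    detail_template = "authors/detail.html"
    related_template = "authors/_chip.html"  # hypothetical: used when Author renders as a related field

class ArticleRenderer(ModelRenderer):
    detail_template = "articles/detail.html"
    list_template = "articles/list.html"
    renderers = {"author": AuthorRenderer}  # hypothetical: author fields render via AuthorRenderer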
This obviously would be an optional API, but a potentially interesting one. It would certainly alter the look of a Django project and of course nothing stops you from rendering by hand. To me this leans into a different approach to having shared components at the template level, pushing towards not repeating yourself where possible.
Does this pique your interest, or does this scream of nothing like Django at all? Let me know your thoughts!
02 Feb 2026 6:00am GMT
31 Jan 2026
Django community aggregator: Community blog posts
Django's test runner is underrated
Every podcast, blog post, Reddit thread, and every conference talk seems to agree: "just use pytest". Real Python says most developers prefer it. Brian Okken's popular book calls it "undeniably the best choice". It's treated like a rite of passage for Python developers: at some point you're supposed to graduate from the standard library to the "real" testing framework.
I never made that switch for my Django projects. And after years of building and maintaining Django applications, I still don't feel like I'm missing out.
What I actually want from tests
Before we get into frameworks, let me be clear about what I need from a test suite:
- Readable failures. When something breaks, I want to understand why in seconds, not minutes.
- Predictable setup. I want to know exactly what state my tests are running against.
- Minimal magic. The less indirection between my test code and what's actually happening, the better.
- Easy onboarding. New team members should be able to write tests on day one without learning a new paradigm.
Django's built-in test framework delivers all of this. And honestly? That's enough for most projects.
Django tests are just Python's unittest
Here's something that surprises a lot of developers: Django's test framework isn't some exotic Django-specific system. Under the hood, it's Python's standard unittest module with a thin integration layer on top.
TestCase extends unittest.TestCase. The assertEqual, assertRaises, and other assertion methods? Straight from the standard library. Test discovery, setup and teardown, skip decorators? All standard unittest behavior.
What Django adds is integration: Database setup and teardown, the HTTP client, mail outbox, settings overrides.
This means when you choose Django's test framework, you're choosing Python's defaults plus Django glue. When you choose pytest with pytest-django, you're replacing the assertion style, the runner, and the mental model, then re-adding Django integration on top.
Neither approach is wrong. But it's objectively more layers.
The self.assert* complaint
A common argument I hear against unittest-style tests is: "I can't remember all those assertion methods". But let's be honest. We're not writing tests in Notepad in 2026. Every editor has autocomplete. Type self.assert and pick from the list.
And in practice, how many assertion methods do you actually use? In my tests, it's mostly assertEqual and assertRaises. Maybe assertTrue, assertFalse, and assertIn once in a while. That's not a cognitive burden.
Here's the same test in both styles:
# Django / unittest
self.assertEqual(total, 42)
with self.assertRaises(ValidationError):
obj.full_clean()
# pytest
assert total == 42
with pytest.raises(ValidationError):
obj.full_clean()
Yes, pytest's assert is shorter. It's a bit easier on the eyes. And I'll be honest: pytest's failure messages are better too. When an assertion fails, pytest shows you exactly what values differed with nice diffs. That's genuinely useful.
But here's what makes that work: pytest rewrites your code. It hooks into Python's AST and transforms your test files before they run so it can produce those detailed failure messages from plain assert statements. That's not necessarily bad - it's been battle-tested for over a decade. But it is a layer of transformation between what you write and what executes, and I prefer to avoid magic when I can.
For me, unittest's failure messages are good enough. When assertEqual fails, it tells me what it expected and what it got. That's usually all I need. Better failure messages are nice, but they're not worth adding dependencies and an abstraction layer for.
The missing piece: parametrized tests
If there's one pytest feature people genuinely miss when using Django's test framework, it's parametrization. Writing the same test multiple times with different inputs feels wasteful.
But you really don't need to switch to pytest just for that. The parameterized package solves this cleanly:
from django.test import SimpleTestCase
from parameterized import parameterized
class SlugifyTests(SimpleTestCase):
@parameterized.expand([
("Hello world", "hello-world"),
("Django's test runner", "djangos-test-runner"),
(" trim ", "trim"),
])
def test_slugify(self, input_text, expected):
self.assertEqual(slugify(input_text), expected)
Compare that to pytest:
import pytest
@pytest.mark.parametrize("input_text,expected", [
("Hello world", "hello-world"),
("Django's test runner", "djangos-test-runner"),
(" trim ", "trim"),
])
def test_slugify(input_text, expected):
assert slugify(input_text) == expected
Both are readable. Both work well. The difference is that parameterized is a tiny, focused library that does one thing. It doesn't replace your test runner, introduce a new fixture system, or bring an ecosystem of plugins. It's a decorator, not a paradigm shift.
Once I added parameterized, I realized pytest no longer solved a problem I actually had.
Side by side: common test patterns
Let's look at how typical Django tests compare to pytest's approach.
Database tests
# Django
from django.test import TestCase
from myapp.models import Article
class ArticleTests(TestCase):
def test_article_str(self):
article = Article.objects.create(title="Hello")
self.assertEqual(str(article), "Hello")
# pytest + pytest-django
import pytest
from myapp.models import Article
@pytest.mark.django_db
def test_article_str():
article = Article.objects.create(title="Hello")
assert str(article) == "Hello"
With Django, database access simply works. TestCase wraps every test in a transaction and rolls it back afterward, giving you a clean slate without extra decorators. pytest-django takes the opposite approach: database access is opt-in. Different philosophies, but I find theirs annoying since most of my tests touch the database anyway, so I'd end up with @pytest.mark.django_db on almost every test.
View tests
# Django
from django.test import TestCase
from django.urls import reverse
class ViewTests(TestCase):
def test_home_page(self):
response = self.client.get(reverse("home"))
self.assertEqual(response.status_code, 200)
# pytest + pytest-django
from django.urls import reverse
def test_home_page(client):
response = client.get(reverse("home"))
assert response.status_code == 200
In Django, self.client is right there on the test class. If you want to know where it comes from, follow the inheritance tree to TestCase. In pytest, client appears because you named your parameter client. That's how fixtures work: injection happens by naming convention. If you didn't know that, the code would be puzzling. And if you want to find where a fixture is defined, you might be hunting through conftest.py files across multiple directory levels.
What about fixtures?
Pytest's fixture system is the other big feature people bring up. Fixtures compose, they handle setup and teardown automatically, and they can be scoped to function, class, module, or session.
But the mechanism is implicit. You've already seen the implicit injection in the view test example: name a parameter client and it appears, add db to your function signature and you get database access. Powerful, but also magic you need to learn.
For most Django tests, you need some objects in the database before your test runs. Django gives you two ways to do this:
- setUp() runs before each test method
- setUpTestData() runs once per test class, which is faster for read-only data
class ArticleTests(TestCase):
@classmethod
def setUpTestData(cls):
cls.author = User.objects.create(username="kevin")
def test_article_creation(self):
article = Article.objects.create(title="Hello", author=self.author)
self.assertEqual(article.author.username, "kevin")
If you need more sophisticated object creation, factory-boy works great with either framework.
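For example, a factory for the Article model from the earlier examples might look like this (a sketch using factory-boy's DjangoModelFactory):
import factory
from myapp.models import Article

class ArticleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Article

    title = factory.Sequence(lambda n: f"Article {n}")  # unique title per instance

# In a test: ArticleFactory() saves one Article; ArticleFactory.create_batch(5) saves five.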
The fixture system solves a real problem - complex cross-cutting setup that needs to be shared and composed. My projects just haven't needed that level of sophistication. And I'd rather not add the indirection until I do.
The hidden cost of flexibility
Pytest's flexibility is a feature. It's also a liability.
In small projects, pytest feels lightweight. But as projects grow, that flexibility can accumulate into complexity. Your conftest.py starts small, then grows into its own mini-framework. You add pytest-xdist for parallel tests (Django has --parallel built-in). You write custom fixtures for DRF's APIClient (DRF's own APITestCase just works with Django-style tests). You add a plugin for coverage, another for benchmarking. Each one makes sense in isolation.
Then a test fails in CI but not locally, and you're debugging the interaction between three plugins and a fixture that depends on two other fixtures.
Django's test framework doesn't have this problem because it doesn't have this flexibility. There's one way to set up test data. There's one test client. There's one way to run tests in parallel. Boring, but predictable.
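That one way to run in parallel is a single flag on the built-in runner (--parallel auto is available on modern Django versions):
# Run the suite across all available cores
python manage.py test --parallel auto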
When I'm debugging a test failure, I want to debug my code, not my test infrastructure.
When I would recommend pytest
I'm not anti-pytest. If your team already has deep pytest expertise and established patterns, switching to Django's runner would be a net negative. Switching costs are real. If I join a project that uses pytest? I use pytest. This is a preference for new projects, not a religion.
It's also worth noting that pytest can run unittest-style tests without modification. You don't have to rewrite everything if you want to try it. That's a genuinely nice feature.
But if you're starting fresh, or you're the one making the decision? Make it a conscious choice. "Everyone uses pytest" can be a valid consideration, but it shouldn't be the whole argument.
My rule of thumb
Start with Django's test runner. It's boring, it's stable, and it works.
Add parameterized when you need parametrized tests.
Switch to pytest only when you can name the specific problem Django's framework can't solve. Not because a podcast told you to, but because you've hit an actual wall.
I've been building Django applications for a long time. I've tried both approaches. And I keep choosing boring.
Boring is a feature in test infrastructure.
31 Jan 2026 2:21am GMT
30 Jan 2026
Django community aggregator: Community blog posts
Django News - Python Developers Survey 2026 - Jan 30th 2026
News
Python Developers Survey 2026
This is the ninth iteration of the official Python Developers Survey. It is run by the PSF (Python Software Foundation) to highlight the current state of the Python ecosystem and help with future goals.
Note that the official Django Developers Survey is currently being finalized and will come out hopefully in March or April.
The French government is building an entire productivity ecosystem using Django
In a general push for removing Microsoft, Google and any US or other non-EU dependency, the French government has been rapidly creating an open source set of productivity tools called "LaSuite", in collaboration with the Netherlands & Germany.
Django Packages: 🧑‍🎨 A Fresh, Mobile-Friendly Look with Tailwind CSS
As we announced last week, Django Packages released a new design, and Maksudul Haque, who led the effort, wrote about the changes.
Python Software Foundation
Dispatch from PyPI Land: A Year (and a Half!) as the Inaugural PyPI Support Specialist
A look back on the first year and a half as the inaugural PyPI Support Specialist.
Django Fellow Reports
Fellows Report - Natalia
By far, the bulk of my week went into integrating the checklist-generator into djangoproject.com, which required a fair amount of coordination and follow-through. Alongside that, security work ramped up again, with a noticeable increase in incoming reports that needed timely triage and prioritization. Everything else this week was largely in support of keeping those two tracks moving forward.
Fellows Report - Jacob
Engaged in a fair number of security reports this week. Release date and number of issues for 6.0.2 to be finalized and publicized tomorrow.
Wagtail CMS News
Wagtail's new Security Announcements Channel
Wagtail now publishes security release notifications via a dedicated GitHub Security Announcements discussion category, with early alerts, RSS feed, and advisory links.
40% smaller images, same quality
Wagtail 7.3 ships with smarter image compression defaults that deliver roughly 40% smaller images with no visible quality loss, improving page speed, SEO, and reducing energy use out of the box.
Updates to Django
Today, "Updates to Django" is presented by Hwayoung from Djangonaut Space!
Last week we had 11 pull requests merged into Django by 7 different contributors - including 2 first-time contributors! Congratulations to Sean Helvey and James Fysh for having their first commits merged into Django - welcome on board!
- Added support for rendering named groups of choices using <optgroup> elements in admin select widgets (#13883).
- Dropped support for MariaDB 10.6-10.10 (#36812).
Django Newsletter
Sponsored Link 1
Sponsor Django News
Reach 4,300 highly engaged Django developers!
Articles
Django: profile memory usage with Memray
Use Memray to profile Django startup, identify heavy imports like numpy, and reduce memory by deferring, lazy importing, or replacing dependencies.
Some notes on starting to use Django
Julia Evans explains why Django is well-suited to small projects, praising its explicit structure, built-in admin, ORM, automatic migrations, and batteries-included features.
Quirks in Django's template language part 3
Lily explores Django template edge cases: now tag format handling, numeric literal parsing, and the lorem tag with negative counts, proposing stricter validation and support for format variables.
Testing: exceptions and caches
Nicer ways to test exceptions and to test cached function results.
I run a server farm in my closet (and you can too!)
One woman's quest to answer the question: does JIT go brrr?
Speeding up Pillow's open and save
Not strictly Django but from Python 3.15 release manager Hugo playing around with Tachyon, the new "high-frequency statistical sampling profiler" coming in Python 3.15.
Events
DjangoCon Europe CFP Closes February 8
DjangoCon Europe 2026 opens CFP for April 15 to 19 in Athens; submit technical and community-focused Django and Python talks by February 8, 2026.
Opportunity Grants Application for DjangoCon US 2026
Opportunity Grants Application for DjangoCon US 2026 is open through March 16, 2026, at 11:00 am Central Daylight Time (America/Chicago). Decision notifications will be sent out by July 1, 2026.
Videos
django-bolt - Rust-powered API Framework for Django
From the BugBytes channel, an 18-minute look at django-bolt including how and why you might use it.
Podcasts
Django Chat #194: Inverting the Testing Pyramid - Brian Okken
Brian is a software engineer, podcaster, and author. We discuss recent tooling changes in Python, using AI effectively, inverting the traditional testing pyramid, and more.
Django Job Board
Two new Django roles this week, ranging from hands-on backend development to senior-level leadership, building scalable web applications.
Backend Software Developer at Chartwell Resource Group Ltd.
Senior Django Developer at SKYCATCHFIRE
Django Newsletter
Django Codebase
Django Features
This was first introduced last year, but it's worth bringing renewed attention to it: a new home for feature proposals for Django and the third-party ecosystem.
Projects
FarhanAliRaza/django-rapid
Msgspec based serialization for Django.
adamghill/dj-toml-settings
Load Django settings from a TOML file.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
30 Jan 2026 6:00pm GMT
Customizing error code for Cloudflare mTLS cert check
Summary
The Bitwarden mobile app wipes its local cache when receiving an HTTP 403 error. By default, a WAF rule in a Cloudflare free account can only return a 403. This guide shows how to use Cloudflare Workers to validate mTLS certificates and return an HTTP 404 instead.
Background
For accessing internal services remotely, I have been a big fan of Cloudflare Tunnels. This system provides an easy mechanism to provide access without opening up firewall ports, and the ability to take advantage of Cloudflare security controls like their Web Application Firewall (WAF) and Zero Trust access control.
Previous authorization model
Until recently, I secured my internal applications with a simple OAuth identity validation through Zero Trust. Depending on the identity provider used, this can provide a significant amount of protection, but it can cause issues with non-web-based applications like mobile apps.
One simple alternative I've used is Service Tokens, which can be provided via custom HTTP headers - if the app I'm trying to use supports that type of customization. However, this is quite rare, so I started looking for a more universal approach.
mTLS authorization
Historically, during client/server communication, the focus has been on ensuring the server endpoint is trustworthy, which is why SSL/TLS is a standard requirement. But there are many use cases where the server also needs to assess the trustworthiness of the device communicating with it. For a very sensitive system like a password manager, validating the device in addition to the user credentials provides an extra layer of security.
Mutual TLS (mTLS) is a process that provides the missing validation, by having the client provide its certificate to the server (after the client has validated the server's certificate).
Cloudflare mTLS support on free plans
I am currently using Cloudflare's free plan, as the pricing for their Pro and Business plans is quite lofty for a single household use case.
With the free plan, you can create certificates via the Cloudflare dashboard, which ends up creating a certificate signed by a Cloudflare-managed, account-level root CA. Then, the client certificate validation can be referenced in a WAF rule.
However, there are some important limitations here:
- You cannot bring your own CA, which means the certificates cannot be used in Zero Trust access controls.
- You cannot get the cert for your individual Cloudflare-managed CA, which means you cannot bundle it into the client certificate - a requirement in some situations (like Chrome on Android).
Another limitation, not strictly related to mTLS certs, is that the Cloudflare WAF rules will always return HTTP 403 when they block traffic. Customizing the response, including the response code, is limited to Pro plans (and above).
Need for customizing the response code
I have a password manager application (Bitwarden) that caches passwords locally, which is a helpful feature when I don't have an Internet connection. However, this app will always try to sync passwords if it can. As a security mechanism, if it is able to reach the server (Vaultwarden, in my case), and the server returns a 401 or 403 response code, the app will immediately clear the local cache.
I encountered this scenario when testing my mTLS configuration. I immediately thought of future situations where this could cause significant distress - let's say I'm outside my network and need access to a password. Something has happened on my device: maybe the mTLS cert was accidentally uninstalled, or maybe it expired and I haven't realized it yet. I try to access the password, everything vanishes, and I'm stuck.
I wanted to prevent this from being a possibility, which is why I wanted to alter the response code from 403 to 404. As mentioned in the previous section, this isn't possible on the free plan.
Enter Cloudflare Workers
However, there is another option within reach of the free plan, which is Cloudflare Workers. Workers allow you to deploy some custom code that can be mapped to a specific URL path. The free plan currently allows up to 100,000 Workers requests per day, which is plenty for household use.
To get this setup, I did the following:
Create and configure the mTLS certificate
Create an mTLS certificate and enable it for the targeted domain names.
Remove any WAF rules
I first made sure any WAF rules I had did not apply to the hostname in question. In the request lifecycle, these will be executed early on.
Add a Zero Trust bypass rule (optional)
Zero Trust access rules also run prior to Workers, so you need to make sure this won't block access to your application.
If you only have Zero Trust applications set up for specific subdomains, you may not need to do anything here. I happen to have a wildcard application setup, so that anything in my domain would route to the Zero Trust handling by default. This means I needed to specify an exception just for the subdomain that I wanted to protect with an mTLS cert.
To do this, I created a "bypass all" rule.
Create the worker
Create a new worker, using the provided "Hello World" template. Replace the code with the following:
export default {
  async fetch(request, env, ctx) {
    // Get the certificate status from the Cloudflare edge.
    const clientTrust = request.cf?.tlsClientAuth;
    // Check that the mTLS cert has been presented and verified by Cloudflare.
    if (!clientTrust
        || clientTrust.certPresented !== "1"
        || clientTrust.certVerified !== "SUCCESS") {
      return new Response("Not Found", {
        status: 404,
        headers: { "Content-Type": "text/plain" }
      });
    }
    // If valid, forward the fetch to the backend server.
    return fetch(request);
  },
};

Optional: If you would normally use a WAF rule to block other types of traffic, you can likely incorporate that logic into the Worker as well. For example, you could add a regional restriction like this:
if (request.cf?.country !== "US") {
  return new Response("Not Found", {
    status: 404,
    headers: { "Content-Type": "text/plain" }
  });
}

Set up route and test the new worker
After deploying the worker, you can then configure the appropriate route by following the Cloudflare documentation.
Then, try accessing the application you have protected, on devices with and without the mTLS certificate installed.
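A quick way to sanity-check the behaviour from a laptop is a small Python script (a sketch; the hostname and certificate paths are placeholders, not from this setup):

import requests

# With the client certificate: Cloudflare verifies it at the edge
# and the Worker forwards the request to the backend.
ok = requests.get("https://app.example.com/", cert=("client.pem", "client.key"))
print(ok.status_code)  # expect a normal response

# Without the certificate: the Worker masks the block as a 404,
# so Bitwarden-style clients won't wipe their local cache.
blocked = requests.get("https://app.example.com/")
print(blocked.status_code)  # expect 404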
Future wishes
Ideally, I would really like to be able to target mTLS certificates as part of the Zero Trust access policies. This would allow different combinations to be created, like supporting either mTLS auth or an OAuth identity provider. Depending on the application being protected, this type of configuration may provide adequate security, while providing greater flexibility - allowing web browser access with just OAuth, but falling back to mTLS auth for devices where it is the best option.
There is rich support for mTLS auth in Zero Trust, but this support is limited to Business accounts only! At hundreds of dollars per month, that's a very high price to pay given that I don't need most of what that plan provides.
If anyone from Cloudflare is listening, it'd be great to expand the availability of this feature - potentially they could offer individual feature plans, like what is currently done for Workers. I'd happily pay a reasonable amount just to get support to bring my own root CA and use it as part of my Zero Trust access policies.
Final thoughts
Securing a server shouldn't come at the cost of usability. By moving logic from the limited WAF rules in a Cloudflare free account into a Cloudflare Worker, I believe I've managed to keep the high security of mTLS while smoothing out the quirks of the password manager application I'm using. It's an easy-to-configure change that avoids a potential lockout.
Are you running a similar setup? Have suggestions on other ways to leverage Cloudflare's copious free functionality? Leave a comment below or send me an email.
30 Jan 2026 11:20am GMT
29 Jan 2026
Django community aggregator: Community blog posts
Django: profile memory usage with Memray
Memory usage can be hard to keep under control in Python projects. The language doesn't make it explicit where memory is allocated, module imports can have significant costs, and it's all too easy to create a global data structure that accidentally grows unbounded, leaking memory. Django projects can be particularly susceptible to memory bloat, as they may import many large dependencies like numpy, even if they're only used in a few places.
One tool to help understand your program's memory usage is Memray, a memory profiler for Python created by developers at Bloomberg. Memray tracks where memory is allocated and deallocated during program execution. It can then present that data in various ways, including spectacular flame graphs, collapsing many stack traces into a chart where bar width represents memory allocation size.
Profile a Django project
Memray can profile any Python command with its memray run command. For a Django project, I suggest you start by profiling the check management command, which loads your project and then runs system checks. This is a good approximation of the minimum work required to start up your Django app, a cost imposed on every server start and management command execution.
To profile check, run:
$ memray run manage.py check
Writing profile results into memray-manage.py.4579.bin
System check identified no issues (0 silenced).
[memray] Successfully generated profile results.
You can now generate reports from the stored allocation records.
Some example commands to generate reports:
/.../.venv/bin/python3 -m memray flamegraph memray-manage.py.4579.bin
The command completes as normal, outputting "System check identified no issues (0 silenced)." Around that, Memray outputs information about its profiling, saved in a .bin file featuring the process ID, and a suggestion to follow up by generating a flame graph.
The flame graph is great, so go ahead and make it:
$ memray flamegraph memray-manage.py.4579.bin
Wrote memray-flamegraph-manage.py.4579.html
The result is a .html file you can open in your browser, which will look something like this:
The header of the page contains some controls, along with a mini graph tracking resident and heap memory over time. Underneath it is the main flame graph, showing memory allocations over time.
By default, the graph is actually an "icicle" graph, with frames stacked downward like icicles, rather than upward like flames. This matches Python's stack trace representation, where the most recent call is at the bottom. Toggle between flame and icicle views with the buttons in the header.
The Stats button at the top opens a dialog with several details, including the peak memory usage.
Frames in the graph display the line of code running at the time, and their width is proportional to the amount of memory allocated at that point. Hover a frame to reveal its details: filename, line number, memory allocated, and number of allocations.
Make an improvement
In the above example, I already narrowed in on a potential issue. The line from numpy.random import ... allocates 5.7 MB of memory, about 23% of the peak usage of 25.2 MB. This import occurs in example/utils.py, on line 3. Let's look at that code now:
from colorsys import hls_to_rgb
from numpy.random import MT19937, Generator


def generate_icon_colours(number: int) -> list[str]:
    """
    Generate a list of distinct colours for the given number of icons.
    """
    colours = []
    for i in range(number):
        hue = i / number
        lightness = 0.5
        saturation = 0.7
        rgb = hls_to_rgb(hue, lightness, saturation)
        hex_colour = "#" + "".join(f"{int(c * 255):02x}" for c in rgb)
        colours.append(hex_colour)
    Generator(MT19937(42)).shuffle(colours)
    return colours
The code uses the Generator.shuffle() method from numpy.random to shuffle a list of generated colours. Since importing numpy is costly, and this colour generation is only used in a few code paths (imagine we'd checked), we have a few options:
1. Delete the code - this is always an option if the function isn't used or can be replaced with something simpler, like pregenerated lists of colours.

2. Defer the import until needed, by moving it within the function:

   def generate_icon_colours(number: int) -> list[str]:
       """
       Generate a list of distinct colours for the given number of icons.
       """
       from numpy.random import MT19937, Generator
       ...

   Doing so will avoid the import cost until the first time the function is called, or something else imports numpy.random.

3. Use a lazy import:

   lazy from numpy.random import MT19937, Generator

   def generate_icon_colours(number: int) -> list[str]:
       ...

   This syntax should become available from Python 3.15 (expected October 2026), following the implementation of PEP 810. It makes a given imported module or name only get imported on first usage.

   Until it's out, an alternative is available in wrapt.lazy_import(), which creates an import-on-use module proxy:

   from wrapt import lazy_import

   npr = lazy_import("numpy.random")

   def generate_icon_colours(number: int) -> list[str]:
       ...
       npr.Generator(npr.MT19937(42)).shuffle(colours)
       ...

4. Use a lighter-weight alternative, for example Python's built-in random.shuffle() function:

   import random
   ...
   def generate_icon_colours(number: int) -> list[str]:
       ...
       random.shuffle(colours)
       ...
In this case, I would go with option 4, as it avoids the heavy numpy dependency altogether, will provide almost equivalent results, and doesn't need any negotiation about changing functionality. We will see an improvement in startup memory usage as long as no other startup code path also imports numpy.random.
After making an edit, re-profile and look for changes:
In this case, it seems the change worked and memory usage has reduced. The flame graph looks like the right ~75% of the previous one, with "icicles" for regular parts of Django's startup process, such as importing django.db.models and running configure_logging(). And the Stats dialog shows a lower peak value.
A drop from 25.2 MB to 19.4 MB, or a 23% overall reduction!
(If the change hadn't worked, we would probably have revealed another module that is loaded at startup and imports numpy.random. Removing or deferring that import could then yield the saving.)
A Zsh one-liner to speed up checking results
If you use Zsh, you can chain memray run, memray flamegraph, and opening the HTML result file with:
$ memray run manage.py check && memray flamegraph memray-*.bin(om[1]) && open -a Firefox memray-flamegraph-*.html(om[1])
This can really speed up doing multiple iterations measuring potential improvements. I covered the (om[1]) globbing syntax in this previous Zsh-specific post.
29 Jan 2026 6:00am GMT
28 Jan 2026
Django community aggregator: Community blog posts
Inverting the Testing Pyramid - Brian Okken
Links
- PythonTest website
- pytest book and Lean TDD
- Python Bytes podcast
- Test and Code podcast
- Python People podcast
- Django Packages website
- django-msgspec-field
- Ruff rules
- How to Hide an Empire book
- Python Polars - The Definitive Guide
- Little Brother by Cory Doctorow + There There by Tommy Orange
YouTube
Sponsor
This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.
See what's possible at sixfeetup.com.
28 Jan 2026 6:00pm GMT
23 Jan 2026
Django community aggregator: Community blog posts
Django News - Djangonaut Space Session 6 Applications Open! - Jan 23rd 2026
News
uvx.sh by Astral
Astral, makers of uv, have a new "install Python tools with a single command" website.
Python Software Foundation
Announcing Python Software Foundation Fellow Members for Q4 2025!
The PSF announces new PSF Fellows for Q4 2025, recognizing community leaders who contribute projects, education, events, and mentorship worldwide.
Departing the Python Software Foundation (Staff)
Ee Durbin is stepping down as PSF Director of Infrastructure, transitioning PyPI and infrastructure responsibilities to staff while providing 20% support for six months.
Djangonaut Space News
Announcing Djangonaut Space Session 6 Applications Open!
Djangonaut Space Session 6 opens applications for an eight-week mentorship program to contribute to Django core, accessibility, third-party projects, and new BeeWare documentation.
New Admins and Advisors for Djangonaut Space
Djangonaut Space appoints Lilian Tran and Raffaella Suardini as admins and Priya Pahwa as advisor, strengthening Django community leadership and contributor support.
Wagtail CMS News
llms.txt - preparing Wagtail docs for AI tools
Wagtail publishes developer and user documentation in llms.txt to provide authoritative, AI-friendly source files for LLMs, improving accessibility and evaluation for smaller models.
Updates to Django
Today, "Updates to Django" is presented by Pradhvan from Djangonaut Space!
Last week we had 16 pull requests merged into Django by 11 different contributors - including 3 first-time contributors! Congratulations to Kundan Yadav, Parth Paradkar, and Rudraksha Dwivedi for having their first commits merged into Django - welcome on board!
This week's Django highlights:
- ModelIterable now checks if foreign key fields are deferred before attempting optimization, avoiding N+1 queries when using .only() on related managers. (#35442)
- The XML deserializer now raises errors for invalid nested elements instead of silently processing them, preventing potential performance issues from malformed fixtures. (#36769)
- Error messages now clearly indicate when annotated fields are excluded by earlier .values() calls in chained queries. (#36352)
- Improved performance in construct_change_message() by avoiding unnecessary translation_override() calculation when logging additions. (#36801)
Django Newsletter
Articles
Unconventional PostgreSQL Optimizations
Use PostgreSQL check constraints, function-based or virtual generated columns, and hash-based exclusion constraints to reduce scans, shrink indexes, and enforce uniqueness efficiently.
Django 6.0 Tasks: a framework without a worker
Django 6.0 adds a native tasks abstraction but only supports one-off tasks without scheduling, retries, persistence, or a worker backend, limiting real-world utility.
I Created a Game Engine for Django?
Multiplayer Snake implemented in Django using Django LiveView, 270 lines of Python, server side game state, WebSocket driven HTML updates, no custom JavaScript.
Django Icon packs with template partials
Reusable SVG icon pack using Django template partialdefs and dynamic includes to render configurable icons with classes, avoiding custom template tags.
Building Critical Infrastructure with htmx: Network Automation for the Paris 2024 Olympics
HTMX combined with Django, Celery, and procedural server-side views enabled rapid, maintainable network automation tools for Paris 2024, improving developer productivity and AI-assisted code generation.
Don't Let Old Migrations Haunt Your Codebase
Convert old data migrations that have already run into noop RunPython migrations to preserve the migration graph while preventing test slowdowns and legacy breakage.
Django Time-Based Lookups: A Performance Trap
Your "simple" __date filter might be turning a millisecond query into a 30-second table scan - here's the subtle Django ORM trap and the one-line fix that restores index-level performance.
Podcasts
Django Brew
DjangoCon US 2025 recap covering conference highlights, community discussions on a REST story, SQLite in production, background tasks, and frontend tools like HTMX.
Django Job Board
Two new senior roles just hit the Django Job Board: one focused on building Django apps at SKYCATCHFIRE and another centered on Python work with data-heavy systems at Dun & Bradstreet.
Senior Django Developer at SKYCATCHFIRE
Senior Python Developer at Cial Dun & Bradstreet
Django Newsletter
Projects
quertenmont/django-msgspec-field
Django JSONField with msgspec structs as a Schema.
radiac/django-nanopages
Generate Django pages from Markdown, HTML, and Django template files.
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
23 Jan 2026 5:00pm GMT
22 Jan 2026
Django community aggregator: Community blog posts
Python Leiden meetup: PostgreSQL + Python in 2026 -- Aleksandr Dinu
(One of my summaries of the Python Leiden meetup in Leiden, NL).
He's going to revisit common gotchas of Python ORM usage, plus some PostgreSQL-specific tricks.
ORMs (object-relational mappers) define tables, columns, etc. using Python concepts: classes, attributes, and methods. In your software, you work with objects instead of rows. They can also help with database schema management (migrations and so on). It looks like this:
class Question(models.Model):
    question = models.CharField(...)
    answer = models.CharField(...)
You often have Python "context managers" for database sessions.
ORMs are handy, but you must beware of what you're fetching:
# Bad: grabs all objects and then takes the length using Python:
questions_count = len(Question.objects.all())

# Good: let the database do it; the code does the
# equivalent of "SELECT COUNT(*)":
questions_count = Question.objects.all().count()
Relational databases allow 1:M and N:M relations, which you use with JOIN in SQL. If you use an ORM, make sure you use the database to follow the relations: if you first grab one set of objects and then fetch the related objects one by one in Python, your code will be much slower.
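A minimal Django sketch of the difference, assuming a hypothetical Choice model with a foreign key to Question:

# Bad: one query for the questions, then one extra query per question (N+1).
for question in Question.objects.all():
    print(question.choice_set.count())

# Good: let the database follow the relation; two queries in total,
# and len() reuses the prefetched cache.
for question in Question.objects.prefetch_related("choice_set"):
    print(len(question.choice_set.all()))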
"Migrations" generated by your ORM to move from one version of your schema to the next are real handy. But not all SQL concepts can be expressed in an ORM. Custom types, stored procedures. You have to handle them yourselves. You can get undesired behaviour as specific database versions can take a long time rebuilding after a change.
Migrations are nice, but they can lead to other problems from a database maintainer's point of view, like performance suddenly dropping. And optimising is hard, as you often don't know which client is connecting how much, nor what is being queried. Some solutions for PostgreSQL:
- log_line_prefix = '%a %u %d' to show who is connecting to which database.
- log_min_duration_statement = 1000 logs every query taking more than 1000ms.
- log_lock_waits = on for feedback on blocking operations (like migrations).
- Handy: feedback on the number of queries being done, as simple programming errors can translate into lots of small queries instead of one faster bigger one.
If you've found a slow query, run it with EXPLAIN (ANALYZE, BUFFERS) <the query>. BUFFERS tells you how many 8 kB pages the server uses for your query (and whether those were memory or disk pages). This is so useful that it was made the default in PostgreSQL 18.
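If you're using Django, QuerySet.explain() can pass these options straight through to PostgreSQL (a sketch reusing the Question model from above):

# EXPLAIN (ANALYZE, BUFFERS) for the SQL behind this queryset;
# extra keyword arguments become EXPLAIN options on PostgreSQL.
print(
    Question.objects.filter(question__icontains="django").explain(
        analyze=True, buffers=True
    )
)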
Some tools:
- RegreSQL: performance regression testing. You feed it a list of queries that you worry about. It stores how those queries are executed, compares that with the new version of your code, and warns you when one of those queries suddenly takes a lot more time.
- Squawk: tells you (in CI, like github actions) which migrations are backward-incompatible or that might take a long time.
- You can also look at one of the branching tools, aimed at getting access to production-like databases for testing, like running your migration against a "branch"/copy of production. There are several tricks in use, like filesystem layers. "pg_branch" and "pgcow" are examples. Several DB-as-a-service products also provide it (Databricks Lakebase, Neon, Heroku, Postgres.ai).
22 Jan 2026 5:00am GMT
Python Leiden meetup: PR vs ROC curves, which to use - Sultan K. Imangaliyev
(One of my summaries of the Python Leiden meetup in Leiden, NL).
Precision-recall (PR) versus Receiver Operating Characteristics (ROC) curves: which one to use if data is imbalanced?
Imbalanced data: for instance when you're investigating rare diseases. "Rare" means few people have them. So if you have data, most of the data will be of healthy people, there's a huge imbalance in the data.
Sensitivity versus specificity: sensitivity means you find most of the sick people (few false negatives); specificity means you rarely flag healthy people as sick (few false positives). Sensitivity/specificity looks a bit like precision/recall.
- Sensitivity: true positive rate.
- Specificity: true negative rate (i.e. 1 minus the false positive rate).
If you classify, you can classify immediately into healthy/sick, but you can also use a probabilistic classifier which returns a chance (percentage) that someone can be classified as sick. You can then tweak which threshold you want to use: how sensitive and/or specific do you want to be?
PR and ROC curves (curve = graph showing the sensitivity/specificity relation on two axes) are two ways of measuring/visualising that relation. He showed some data: if the data is imbalanced, PR is much better at evaluating your model. He compared balanced and imbalanced data with ROC and there was hardly a change in the curve.
He used scikit-learn for his data evaluations and demos.
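A minimal scikit-learn sketch of the comparison, on synthetic imbalanced data (my illustration, not the speaker's code):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Heavily imbalanced binary problem: roughly 1% positives.
X, y = make_classification(n_samples=20_000, weights=[0.99], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
probabilities = model.predict_proba(X_test)[:, 1]

# ROC AUC tends to look flattering on imbalanced data; average precision
# (the area under the PR curve) is usually the more honest summary.
print("ROC AUC:", roc_auc_score(y_test, probabilities))
print("Average precision:", average_precision_score(y_test, probabilities))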
22 Jan 2026 5:00am GMT