27 Feb 2026

Django community aggregator: Community blog posts

Using tox to Test a Django App Across Multiple Django Versions

Recently, I developed a reusable Django app, django-clearplaintext, for normalizing plain text in Django templates. To package and test it properly, I took a fresh look at tox.

Tox is the standard tool for this job: it creates isolated virtual environments, installs the exact dependencies you specify, and runs your test suite in each one - all from a single command.

This post walks through a complete, working setup using a minimal example app called django-shorturl.

The Example App: django-shorturl

django-shorturl is a self-contained Django app with one model and one view.

shorturl/models.py

from django.db import models
from django.utils.translation import gettext_lazy as _

class ShortLink(models.Model):
    slug = models.SlugField(_("slug"), unique=True)
    target_url = models.URLField(_("target URL"))
    created_at = models.DateTimeField(_("created at"), auto_now_add=True)

    class Meta:
        verbose_name = _("short link")
        verbose_name_plural = _("short links")

    def __str__(self):
        return self.slug

shorturl/views.py

from django.shortcuts import get_object_or_404, redirect

from .models import ShortLink

def redirect_link(request, slug):
    link = get_object_or_404(ShortLink, slug=slug)
    return redirect(link.target_url)

shorturl/urls.py

from django.urls import path

from . import views

urlpatterns = [
    path("<slug:slug>/", views.redirect_link, name="redirect_link"),
]

shorturl/admin.py

from django.contrib import admin
from .models import ShortLink

admin.site.register(ShortLink)

Project Layout

django-shorturl/
├── src/
│   └── shorturl/
│       ├── __init__.py
│       ├── admin.py
│       ├── models.py
│       ├── views.py
│       └── urls.py
├── tests/
│   ├── __init__.py
│   └── test_views.py
├── pyproject.toml
├── test_settings.py
└── tox.ini

The source lives under src/ and the tests are at the top level, separate from the package. This separation prevents the tests from accidentally being shipped inside the installed package.

Packaging: pyproject.toml

Tox needs a properly packaged app to install into each environment. With isolated_build = true (more on that below), Tox builds a wheel from your pyproject.toml before running any tests.

pyproject.toml

[project]
name = "django-shorturl"
version = "1.0.0"
requires-python = ">=3.8"
dependencies = [
    "Django>=4.2",
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[tool.setuptools.packages.find]
where = ["src"]

The dependencies list here declares the runtime minimum - your app needs Django, but you don't pin a specific version because that is Tox's job during testing.

For the [build-system] section, we can also use uv_build to gain some performance improvements:

[build-system]
requires = ["uv_build >= 0.10.0, <0.11.0"]
build-backend = "uv_build"

[tool.uv.build-backend]
module-name = "shorturl"

Here module-name tells uv_build which module to build, so it doesn't get confused between the distribution name django-shorturl and the module name shorturl.

Test Settings: test_settings.py

Django requires a settings module to run. Since a reusable app has no associated project, we create a minimal settings module dedicated to testing. It lives at the repo root so it's easy to point to from anywhere.

test_settings.py

SECRET_KEY = "test"

INSTALLED_APPS = [
    "shorturl",
]

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

ROOT_URLCONF = "shorturl.urls"

DEFAULT_AUTO_FIELD = "django.db.models.AutoField"

A few deliberate choices here: the in-memory SQLite database keeps tests fast and leaves nothing on disk; INSTALLED_APPS contains only the app under test; ROOT_URLCONF points straight at the app's own urls.py so reverse() works without a wrapper project; and DEFAULT_AUTO_FIELD is set explicitly so Django doesn't warn about an unconfigured primary-key type.

The Core: tox.ini

This is where Tox is configured.

tox.ini

[tox]
envlist =
    py{38,39,310,311,312}-django42,
    py{310,311,312}-django50,
    py{310,311,312,313}-django51,
    py{310,311,312,313,314}-django52,
    py{312,313,314}-django60

isolated_build = true

[testenv]
deps =
    django42: Django>=4.2,<4.3
    django50: Django>=5.0,<5.1
    django51: Django>=5.1,<5.2
    django52: Django>=5.2,<6.0
    django60: Django>=6.0,<6.1
commands =
    python -m django test
setenv =
    DJANGO_SETTINGS_MODULE = test_settings

envlist - the matrix

py{38,39,310,311,312}-django42 uses tox's generative environment name syntax.

The values inside {} are expanded automatically. Tox combines each Python version with django42, creating five environments: py38-django42, py39-django42, py310-django42, py311-django42, and py312-django42.

The full envlist simply lists all Python and Django combinations you want to test, so you can check that your project works in each setup.
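To make the expansion concrete, here is a rough Python illustration of how a generative name unfolds into concrete environments. This is not tox's actual parser, and the expand helper is hypothetical:

```python
# Hypothetical sketch of tox's generative-name expansion (not its real
# parser): split the pattern into literal chunks and {a,b,c} choice
# groups, then take the cartesian product of all the choices.
import re
from itertools import product


def expand(pattern):
    parts = re.split(r"\{([^}]*)\}", pattern)
    # Odd-indexed parts are comma-separated choice groups;
    # even-indexed parts are literal text with a single "choice".
    choices = [
        part.split(",") if i % 2 else [part]
        for i, part in enumerate(parts)
    ]
    return ["".join(combo) for combo in product(*choices)]


print(expand("py{38,39,310,311,312}-django42"))
# ['py38-django42', 'py39-django42', 'py310-django42',
#  'py311-django42', 'py312-django42']
```

The same expansion applies to every line of the envlist, which is how a few lines generate the whole 20-environment matrix.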

Each part separated by a dash in an environment name is called a "factor". You can have as many factors as you like, and they can be named anything. py* factors are a convention for Python versions, which tox maps to interpreters; other factors have no built-in meaning and only take effect where you reference them, such as in the [testenv] deps section.

isolated_build = true

This tells tox to build a proper wheel from your pyproject.toml before installing into each environment. Without it, tox would try to install your package with pip install -e ., which bypasses the build system and can hide packaging bugs. With it, each environment tests the package exactly as a user would receive it after pip install django-shorturl.

deps - conditional dependencies

The django42: prefix is a Tox factor condition: the dependency on that line is only installed when the environment name contains the django42 factor. This is how a single [testenv] block handles all Django versions without needing a separate section for each one.

Tox also installs your package itself into each environment (because of isolated_build), so you don't need to list it here.
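A simplified model of how a factor condition gates a dependency line can be sketched in a few lines of Python. The dep_applies helper is hypothetical, and real tox also supports negated (!django42) and combined conditions, which this ignores:

```python
# Hypothetical, simplified model of tox factor-conditional deps:
# "django42: Django>=4.2,<4.3" applies only when the environment
# name contains the "django42" factor. Unconditional lines and
# negated/combined conditions are not handled here.
def dep_applies(dep_line, env_name):
    condition, _, requirement = dep_line.partition(": ")
    factors = env_name.split("-")
    return condition in factors


print(dep_applies("django42: Django>=4.2,<4.3", "py312-django42"))  # True
print(dep_applies("django42: Django>=4.2,<4.3", "py312-django50"))  # False
```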

commands

commands =
    python -m django test

Running python -m django test invokes Django's built-in test runner. It discovers tests by looking for files matching test*.py under the current directory, which picks up everything in your tests/ folder automatically.
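The discovery pattern itself is plain unittest behavior. As a quick illustration with a hypothetical file list, only filenames matching test*.py are collected:

```python
# Illustration of the "test*.py" discovery pattern Django's runner
# inherits from unittest: only matching filenames are collected.
import fnmatch

files = ["tests/__init__.py", "tests/test_views.py",
         "conftest.py", "tests/helpers.py"]
matched = [f for f in files
           if fnmatch.fnmatch(f.rsplit("/", 1)[-1], "test*.py")]
print(matched)  # ['tests/test_views.py']
```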

setenv

setenv =
    DJANGO_SETTINGS_MODULE = test_settings

Django refuses to run without a settings module. This environment variable tells it where to find yours. Because test_settings.py is at the repo root and tox runs from the repo root, the module name test_settings resolves correctly without any path manipulation.

Writing the Tests

Create test cases for each (critical) component of your app. For example, if you have models, views, and template tags, create tests/test_models.py, tests/test_views.py, and tests/test_templatetags.py.

tests/test_views.py

from django.test import TestCase
from django.urls import reverse

from shorturl.models import ShortLink


class RedirectLinkViewTest(TestCase):
    def setUp(self):
        ShortLink.objects.create(
            slug="dt",
            target_url="https://www.djangotricks.com",
        )

    def test_redirects_to_target_url(self):
        response = self.client.get(
            reverse(
                "redirect_link", kwargs={"slug": "dt"}
            )
        )
        self.assertRedirects(
            response,
            "https://www.djangotricks.com",
            fetch_redirect_response=False,
        )

    def test_returns_404_for_unknown_slug(self):
        response = self.client.get(
            reverse(
                "redirect_link", kwargs={"slug": "nope"}
            )
        )
        self.assertEqual(response.status_code, 404)

Installing Python Versions with pyenv

Tox needs the actual Python binaries for every version in your envlist. If you try to run tox without them installed, it will fail immediately with an InterpreterNotFound error. pyenv is the standard way to install and manage multiple Python versions side by side.

Install pyenv

Use Homebrew on macOS (or follow the official instructions for Linux):

brew install pyenv

Add the following to your shell config (~/.zshrc, ~/.bashrc, etc.) and restart your shell:

export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"

Install each Python version

Install every version that appears in your envlist:

pyenv install 3.8
pyenv install 3.9
pyenv install 3.10
pyenv install 3.11
pyenv install 3.12
pyenv install 3.13
pyenv install 3.14

Make them all reachable at once

Tox resolves py312 by looking for a binary named python3.12 on PATH. The trick is pyenv global, which accepts multiple versions and places all of their binaries on your PATH simultaneously:

pyenv global 3.14 3.13 3.12 3.11 3.10 3.9 3.8

List the version you want python3 and python to resolve to first, then work downward. After running this, confirm every interpreter is visible:

python3.8 --version   # Python 3.8.x
python3.9 --version   # Python 3.9.x
python3.10 --version   # Python 3.10.x
python3.11 --version   # Python 3.11.x
python3.12 --version   # Python 3.12.x
python3.13 --version   # Python 3.13.x
python3.14 --version   # Python 3.14.x

Now tox can find all of them and the full matrix will run without InterpreterNotFound errors.
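You can script the same check. This sketch uses a hypothetical missing_interpreters helper built on shutil.which, which performs the same PATH lookup tox uses to resolve a pythonX.Y binary for each pyXY factor:

```python
# Report any interpreter from the envlist that tox would fail to find:
# shutil.which does the same PATH lookup tox performs when resolving
# a pythonX.Y binary for each pyXY factor.
import shutil


def missing_interpreters(versions):
    # A version is "missing" when no pythonX.Y binary is on PATH.
    return [v for v in versions if shutil.which(f"python{v}") is None]


print(missing_interpreters(["3.8", "3.9", "3.10", "3.11",
                            "3.12", "3.13", "3.14"]))
```

An empty list means the full matrix can run.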

Running tox

Run the full matrix:

tox

Or run a single environment:

tox -e py312-django52

tox will print a summary at the end showing which environments passed and which failed.

  py38-django42: OK (3.25=setup[2.32]+cmd[0.93] seconds)
  py39-django42: OK (2.88=setup[2.16]+cmd[0.72] seconds)
  py310-django42: OK (2.61=setup[2.02]+cmd[0.59] seconds)
  py311-django42: OK (2.70=setup[2.09]+cmd[0.61] seconds)
  py312-django42: OK (3.28=setup[2.46]+cmd[0.82] seconds)
  py310-django50: OK (2.67=setup[2.09]+cmd[0.58] seconds)
  py311-django50: OK (2.61=setup[2.02]+cmd[0.59] seconds)
  py312-django50: OK (2.85=setup[2.25]+cmd[0.60] seconds)
  py310-django51: OK (2.81=setup[2.27]+cmd[0.54] seconds)
  py311-django51: OK (2.85=setup[2.30]+cmd[0.55] seconds)
  py312-django51: OK (2.70=setup[2.09]+cmd[0.61] seconds)
  py313-django51: OK (2.97=setup[2.29]+cmd[0.68] seconds)
  py310-django52: OK (3.03=setup[2.31]+cmd[0.72] seconds)
  py311-django52: OK (2.88=setup[2.22]+cmd[0.66] seconds)
  py312-django52: OK (2.80=setup[2.13]+cmd[0.67] seconds)
  py313-django52: OK (4.70=setup[3.66]+cmd[1.04] seconds)
  py314-django52: OK (6.41=setup[5.18]+cmd[1.23] seconds)
  py312-django60: OK (5.13=setup[4.06]+cmd[1.07] seconds)
  py313-django60: OK (5.35=setup[4.15]+cmd[1.21] seconds)
  py314-django60: OK (6.01=setup[4.65]+cmd[1.37] seconds)
  congratulations :) (70.59 seconds)

Final Words

What makes this setup robust? The isolated wheel build tests the package exactly as users install it, the factor matrix covers every supported combination of Python and Django, and the minimal settings module keeps the tests independent of any real project.

This setup is not the only way to test a Django app with Tox, but it is a solid starting point that balances comprehensiveness with maintainability. With a little effort upfront, you can ensure your app works across a wide range of Python and Django versions - and catch packaging bugs before they hit real users.

27 Feb 2026 6:00pm GMT

Django News - Google Summer of Code 2026 with Django - Feb 27th 2026

News

Google Summer of Code 2026 with Django

All the information you need to apply for Django's 21st consecutive year in the program.

djangoproject.com

Django Software Foundation

DSF member of the month - Baptiste Mispelon

Baptiste is a long-time Django and Python contributor who co-created the Django Under the Hood conference series and serves on the Ops team maintaining its infrastructure. He has been a DSF member since November 2014. You can learn more about Baptiste by visiting Baptiste's website and his GitHub Profile.

djangoproject.com

Wagtail CMS News

The *1000 most popular* Django packages

Based on GitHub stars and PyPI download numbers.

wagtail.org

Updates to Django

Today, "Updates to Django" is presented by Johanan from Djangonaut Space! 🚀

Last week we had 11 pull requests merged into Django by 10 different contributors - including 4 first-time contributors! Congratulations to Saish Mungase, Marco Aurélio da Rosa Haubrich, 조형준 and Muhammad Usman for having their first commits merged into Django - welcome on board!

This week's Django highlights:

Django Newsletter

Django Fellow Reports

Django Fellow Report - Jacob

A short week with a US holiday and some travel to visit family, but still 4 tickets triaged, 12 reviewed, 3 authored, security report, and more.

djangoproject.com

Django Fellow Report - Natalia

Roughly 70% of my time this week went into security work, which continues being quite demanding. The remaining time was primarily dedicated to Mike's excellent write-up on the dictionary-based EMAIL_PROVIDERS implementation and migration, along with a smaller amount of ticket triage and PR review.

Also 2 tickets triaged, 9 reviewed, and other misc.

djangoproject.com

Sponsored Link 1

PyTV - Free Online Python Conference (March 4th)

1 Day, 15 Speakers, 6 hours of live talks including from Sarah Boyce, Sheena O'Connell, Carlton Gibson, and Will Vincent. Sign up and save the date!

jetbrains.com

Articles

⭐ Django ORM Standalone⁽¹⁾: Querying an existing database

A practical step-by-step guide to using Django ORM in standalone mode to connect to and query an existing database using inspectdb.

paulox.net

Using tox to Test a Django App Across Multiple Django Versions

A practical, production-ready guide to using tox to test your reusable Django app across multiple Python and Django versions, complete with packaging, minimal test settings, and a full version matrix.

djangotricks.com

How I Use django-simple-nav for Dashboards, Command Palettes, and More

Jeff shares how he uses django-simple-nav to define navigation once in Python and reuse it across dashboards and even a lightweight HTMX-powered command palette.

webology.dev

Serving Private Files with Django and S3

Django's FileField and ImageField are good at storing files, but on their own they don't let us control access. When …

lincolnloop.com

CLI subcommands with lazy imports

In case you didn't hear, PEP 810 was accepted, which means Python 3.15 is going to support lazy imports! One of the selling points of lazy imports is for code with a CLI: you only import code as necessary, making the app a bit snappier.

snarky.ca

Events

DjangoCon US Updated Dates

The conference is now August 24-28, 2026 in Chicago, Illinois. The Call for Proposals (CFP) is open until March 16. And Early Bird Tickets are now available!

djangocon.us

Sponsored Link 2

Sponsor Django News

Reach 4,300+ highly-engaged and experienced Django developers.

django-news.com

Podcasts

Django Chat #196: Freelancing & Community - Andrew Miller

Andrew is a prolific software developer based out of Cambridge, UK. He runs the solo agency Software Crafts, writes regularly, is a former Djangonaut, and co-founder of the AI banking startup Hamilton Rock.

djangochat.com

PyPodcats Episode 11 with Sheena O'Connell

Sheena O'Connell tells us about her journey, the importance of community and good practices for teachers and educators in Python, and organizational psychology. We talk about how to enable a 10x team and how to enable the community through a guild of educators.

pypodcats.live

Django Job Board

This week there is a very rare Infrastructure Engineer position for the PSF.

Infrastructure Engineer at Python Software Foundation 🆕

Lead Backend Engineer at TurnTable

Backend Software Developer at Chartwell Resource Group Ltd.

Django Newsletter

Projects

yassi/dj-control-room

The control room for your Django app.

github.com

adamchainz/icu4py

Python bindings to the ICU (International Components for Unicode) library (ICU4C).

github.com

matagus/awesome-django-articles

📚 Articles explaining topics about Django like admin, ORM, views, forms, scaling, performance, testing, deployments, APIs, and more!

github.com

Sponsorship

🚀 Reach 4,300+ Django Developers Every Week

Want to reach developers who actually read what they subscribe to?

Django News is opened by thousands of engaged Django and Python developers every week. A 52% open rate and 15% click rate means your message lands in front of people who pay attention.

Support the newsletter and promote your product, service, event, or job to builders who use Django daily.

👉 Explore sponsorship options: https://django-news.com/sponsorship

django-news.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

27 Feb 2026 5:00pm GMT

25 Feb 2026

Django community aggregator: Community blog posts

Freelancing & Community - Andrew Miller

🔗 Links

📦 Projects

📚 Books

🎥 YouTube

Sponsor

This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.

See what's possible at sixfeetup.com.

25 Feb 2026 6:00pm GMT

I Checked 5 Security Skills for Claude Code. Only One Is Worth Installing

I'm writing this in late February 2026. The skills ecosystem for Claude Code is moving fast, and the specific numbers and repos here will probably be outdated within a month. But the thinking still applies, so consider this a snapshot.

If you're using Claude Code, you've probably wondered: can …

Read now

25 Feb 2026 10:51am GMT

20 Feb 2026

Django community aggregator: Community blog posts

Django News - Contributor Covenant, Security Team Expansion, and Django 6.1 Updates - Feb 20th 2026

Introduction

📣 Sponsor Django News

Reach 4,305 engaged Django developers with a single weekly placement. High open rates. Real clicks. Only two sponsor spots per issue.

👉 Book your spot

django-news.com

Django Software Foundation

Plan to Adopt Contributor Covenant 3 as Django's New Code of Conduct

Django establishes a transparent community-driven process and advances the adoption of Contributor Covenant 3 as its Code of Conduct with staged policy updates.

djangoproject.com

Python Software Foundation

Join the Python Security Response Team!

Python core adds public governance and onboarding for the Python Security Response Team, enabling broader community nominations and coordinated CVE and OSV vulnerability remediation.

blogspot.com

Wagtail CMS News

Open source AI we use to work on Wagtail

Wagtail team recommends using open source AI models and inference providers like Scaleway, Neuralwatt, Ollama, and Mistral to power Wagtail AI integrations.

wagtail.org

Updates to Django

Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! 🚀

Last week we had 25 pull requests merged into Django by 13 different contributors - including 2 first-time contributors! Congratulations to 93578237 and Hossam Hassan for having their first commits merged into Django - welcome on board!

News in Django 6.1:

Also fixed for Django 5.2: a NameError when inspecting functions that make use of deferred annotations in Python 3.14 (#36903).

Deprecated in Django 6.0: passing a string to the delimiter argument of the (deprecated) PostgreSQL StringAgg class. Use a Value or expression instead to prepare for compatibility with the generally available StringAgg class.

Django Newsletter

Sponsored Link 1

PyTV - Free Online Python Conference (March 4th)

1 Day, 15 Speakers, 6 hours of live talks including from Sarah Boyce, Sheena O'Connell, Carlton Gibson, and Will Vincent. Sign up and save the date!

jetbrains.com

Articles

Checking Django Settings

Use Python type hints and runtime Django checks to validate core settings types and provide typed helpers for structured settings to catch misconfigurations early.

dev.to

Difference Between render() and HttpResponse() in Django (With Practical Examples)

render() loads and renders templates with context and returns an HttpResponse, while HttpResponse returns raw content directly, best for simple or API responses.

dev.to

A CLI to fight GitHub spam

gh triage provides gh CLI extensions to automate marking GitHub issues and PRs as spam or invalid and bulk unassigning reviewers and assignees.

hugovk.dev

Deploying a project to the world

Outlines IaC and deployment pipeline practices: state-aware deployments, environment separation, and bootstrap management to deploy applications reliably with Pulumi at scale.

softwarecrafts.co.uk

Tech Hiring Has a Fraud Problem

Fraudulent and AI deepfake candidates are increasingly infiltrating Python and Django hiring pipelines, requiring earlier screening, identity checks, and community verification.

foxleytalent.com

Events

DjangoCon Europe 2026 Opportunity Grants

Need financial support to attend DjangoCon Europe 2026?

Apply for an opportunity grant by March 1st, 2026.

djangocon.eu

PyCon US 2026: Maintainers Summit

The Maintainers Summit at PyCon US 2026 invites Python project leaders to gather in Long Beach on May 16 to share real-world insights on building sustainable projects and thriving communities.

pycon.org

Django Job Board

Infrastructure Engineer at Python Software Foundation 🆕

Software Engineer (Python / Django) at Mirvie 🆕

Python Developer REST APIs at Worx-ai 🆕

Lead Backend Engineer at TurnTable

Backend Software Developer at Chartwell Resource Group Ltd.

Django Newsletter

Projects

RealOrangeOne/django-tasks-db

An ORM-based backend for Django Tasks.

github.com

RealOrangeOne/django-tasks-rq

A Django Tasks backend which uses RQ as its underlying queue.

github.com

UnknownPlatypus/djangofmt

A fast, HTML aware, Django template formatter, written in Rust.

github.com

yassi/dj-urls-panel

Visualize Django URL routing inside the Django Admin, including patterns, views, namespaces, and conflicts.

github.com

Sponsorship

🚀 Reach 4,300+ Django Developers Every Week

Django News is read by thousands of engaged Django and Python developers each week. With a 52% open rate and 15% click-through rate, our audience doesn't just subscribe. They pay attention.

Put your product, service, event, or job in front of developers who build with Django every day.

👉 Explore sponsorship options

django-news.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

20 Feb 2026 5:00pm GMT

Django ORM Standalone⁽¹⁾: Querying an existing database

A practical step-by-step guide to using Django ORM in standalone mode to connect to and query an existing database using inspectdb.

20 Feb 2026 5:00am GMT

18 Feb 2026

Django community aggregator: Community blog posts

Deploying a project to the world

At the end of January, I was building out the deployment and infrastructure components for the startup project, so I figured it would be an appropriate time to document how I think about these concepts at a high level; perhaps it will help others. Generally I think about these processes in two ways. First, create an environment, such as a virtual machine, PaaS, or container, with a code-shaped hole in it for your application, then create a process that moves the code from source control into that hole. This represents the initial deployment at a high level. Second, I think of deployments as pipelines. With the rise of infrastructure as code over the past decade, traditional CI/CD pipelines have become cyclical: code is pushed, deployed to production, and the cycle repeats. Infrastructure code is similar to application code, but its cadence is much slower. While a typical application deployment aims for multiple pushes per day - or at least a few per week - Infrastructure as Code (IaC) is usually deployed far less frequently, often annually. Early in a project, or when creating environments for feature branches, infrastructure deployments may occur more often, but they remain cyclical: a code push triggers an action that updates the infrastructure.

Both application and infrastructure code require state management. Application code often involves database migrations, where the current state is known and migrations are applied directly. In contrast, infrastructure can drift over time, requiring tools to read the existing state and apply only the necessary changes. Managing this state is crucial; for example, you wouldn't redeploy an entire domain each time-some elements, like DNS records, must remain consistent to avoid breaking the system.

I like to think of IaC as building with Legos: components such as networking, load balancers, instances, databases, and caches are assembled into an application, which is then placed into an environment like staging or production. Some resources, like DNS records or mail settings, live outside these environments in a global environment to reduce the blast radius if something fails. This separation ensures that a failure in one environment doesn't affect an entire company. Finally, a bootstrap or management environment provides out-of-band control for emergency recovery, enforcing the principle of least privilege.

This high‑level view covers the initial deployment cycle; ongoing operation, monitoring, and maintenance are separate concerns. Ideally, I would like to see IaC repos that can be treated like a pipeline, allowing continuous deployment despite the need to read existing state rather than simply overwriting it - but I am not an expert in the internals of these systems and have no desire to become one at this stage in my career. Still, the concepts above allowed me to go from zero to deployed with Pulumi code (having never used it before) in a matter of days rather than weeks.

18 Feb 2026 6:00am GMT

Adding analytics to my blog

Hey everyone, quick heads up: I'm adding analytics to the blog.

Before you reach for your adblocker, hear me out. I'm using Umami, which is open source, privacy-respecting, and doesn't use cookies. It doesn't track you across sites, doesn't collect personal data, and is fully open source so you can verify that yourself.

On top of that, I'm self-hosting it on my own infrastructure, so the data never touches a third party. No Google Analytics, no Cloudflare analytics, no one else sees anything.

I mainly want to know which posts are actually useful to people and which ones are just me yelling into the void. That's it.

If you have any questions or concerns, you know where to find me on the Contact page.

18 Feb 2026 6:00am GMT

16 Feb 2026

Django community aggregator: Community blog posts

AI and readable APIs

In the AI age the importance of readable APIs goes up, as this can mean the difference between not reading the code because it's too much, and easily reading it to verify it is correct because it's tiny. It's been pretty clear that one of the superpowers of AI development is that it happily deals with enormous amounts of boilerplate and workarounds in a way that would drive a human insane. But we need to be careful of this, and notice that this is what is happening.

High level APIs with steep learning curves (like iommi) are now just as easy to use as simpler APIs, since the cost of initial learning is moved from the human to the AI. Since we also invested heavily in great error messages and validating as much as possible up front, the feedback to the AI models is great. We've been banging the drum of "no silent fixes!" for a decade, and nothing kills human or AI productivity like silent failures.

This is the time to focus our attention as humans on making APIs that are succinct and clear. It was vital before, but it's growing in importance every day.

16 Feb 2026 6:00am GMT


15 Feb 2026

Django community aggregator: Community blog posts

Using Claude for spellchecking and grammar

On the pytest discord channel Sviatoslav mentioned a pull request with a bunch of spelling and grammar fixes. We had a discussion about the morality of not disclosing that it was an AI driven pull request up front, but what was pretty clear was that the quality was surprisingly high.

Since I have a project with extensive documentation that I've spelled checked thoroughly this interested me. I write all the documentation with PyCharm which has built in spelling and grammar checks, so I was thinking it would be hard to find many errors.

I sent this prompt to Claude:

Go through the docs directory. Strings marked with # language: rst will be visible as normal text in the documentation. Suggest spelling, grammar, and language clarity improvements.

Claude fired up ~8 sub-agents and found a surprising number of things. Every single change was good.

A funny detail was that Claude ignored my request to only check the docs directory and found some issues in docstrings in the main source code. I can't be angry about that :P

The funniest mistake was that the docs had the word "underling" instead of "underlying" in one place ("feature set of the underling Query and Form classes"). Perfectly fine spelling and grammar, but Claude correctly spotted that it was a mistake.

If you have some documentation, you definitely should give this a shot.

15 Feb 2026 6:00am GMT

djust 0.3.0 — "Phoenix Rising" 🔥

The biggest djust release yet with 20+ major features. Authentication, server-push, multi-tenancy, PWA support, AI tooling, automatic change tracking, CSS framework support, and security hardening make 0.3 production-ready.

15 Feb 2026 2:06am GMT

13 Feb 2026

Django community aggregator: Community blog posts

Django News - The Post-Heroku Django World - Feb 13th 2026

News

Django Steering Council 2025 Year in Review

They've been busy! A new-features repo, Community Ecosystem page, administrative bits, and more.

djangoproject.com

Read the Docs: Making search faster for all projects

Read the Docs massively improved search latency by reindexing into multiple shards, tuning Elasticsearch queries and client, and fixing Django ORM N+1s and caching.

readthedocs.com

Releases

Python Insider: Python 3.15.0 alpha 6

Python 3.15.0a6 preview highlights a new low-overhead sampling profiler, UTF-8 default encoding, JIT performance gains, unpacking in comprehensions, and typing improvements.

blogspot.com

Python Software Foundation

Python is for Everyone

Georgi from the PSF Diversity and Inclusion Working Group talks about the history of these efforts and most importantly, why it matters for all of us.

georgiker.com

Django Fellow Reports

Fellow Report - Natalia

3 tickets triaged, 2 reviewed, 1 authored, security work, and other misc.

djangoproject.com

Fellow Report - Jacob

8 tickets triaged, 18 reviewed, 6 authored, 2 discussed, and other misc.

djangoproject.com

Wagtail CMS News

Wagtail nominated for TWO CMS Critic Awards! 🏆

Wagtail CMS is up for some trophies.

wagtail.org

Updates to Django

Today, "Updates to Django" is presented by Hwayoung from Djangonaut Space! 🚀

Last week we had 11 pull requests merged into Django by 8 different contributors - including 2 first-time contributors! Congratulations to Patryk Bratkowski and ar3ph for having their first commits merged into Django - welcome on board!

Fixed: horizontal form field alignment issues within <fieldset> in the admin (#36788).

Django Newsletter

Sponsored Link 1

PyTV - Free Online Python Conference (March 4th)

1 Day, 15 Speakers, 6 hours of live talks including from Sarah Boyce, Sheena O'Connell, Carlton Gibson, and Will Vincent. Sign up and save the date!

jetbrains.com

Articles

Django Developer Salary Report 2026

An annual report from Foxley Talent on what's actually happening in the market.

foxleytalent.com

Sorting Strategies for Optional Fields in Django

How to control NULL value placement when sorting Django QuerySets using F() expressions.

blog.maksudul.bd

How to dump Django ORM data to JSON while debugging?

Sometimes, I need to debug specific high-level tests by inspecting what gets created in the database as a side effect. I could use a debugger and poke around the Django ORM at a breakpoint - but quite often it's simply faster to dump the entire table to JSON, see what's there, and then apply fixes accordingly.

github.io

Introducing: Yapping, Yet Another Python Packaging (Manager)

Yapping automates adding dependencies to pyproject.toml and running pip-tools compile/install, providing a simple, non-lockfile Python dependency workflow for Django projects.

jovell.dev

Python: introducing icu4py, bindings to the Unicode ICU library

icu4py provides Pythonic bindings to ICU4C for locale-aware text boundary analysis and MessageFormat pluralization, enabling precise internationalization in Django apps.

adamj.eu

Loopwerk: It's time to leave Heroku

Heroku is winding down; migrate Django apps now to alternatives like Fly.io, Render, or self-hosted Coolify and Hetzner to regain control, reliability, and lower costs.

loopwerk.io

Heroku Is (Finally, Officially) Dead

Analyzing the official announcement and reviewing hosting alternatives in 2026.

wsvincent.com

Videos

django-bolt - Rust-powered API Framework for Django

An overview from BugBytes on the new django-bolt package, describing what it is and how to use it!

youtube.com

Sponsored Link 2

Sponsor This Newsletter!

Reach 4,300+ highly-engaged and experienced Django developers.

django-news.com

Podcasts

Django Chat #195: Improving Django with Adam Hill

Adam is the co-host of the Django Brew podcast and a prolific contributor to the Django ecosystem, author of a multitude of Django projects including django-unicorn, coltrane, dj-angles, and many more.

djangochat.com

Django Job Board

Lead Backend Engineer at TurnTable 🆕

Python Developer REST APIs - Immediate Start at Worx-ai

Backend Software Developer at Chartwell Resource Group Ltd.

Senior Django Developer at SKYCATCHFIRE

Django Newsletter

Projects

JohananOppongAmoateng/django-migration-audit

A forensic Django tool that verifies whether a live database schema is historically consistent with its applied migrations.

github.com

G4brym/django-cf

A set of tools to integrate Django with Cloudflare Developer platform.

github.com

DjangoAdminHackers/django-linkcheck

An app that will analyze and report on links in any model that you register with it. Links can be bare (urls or image and file fields) or embedded in HTML (linkcheck handles the parsing). It's fairly easy to override methods of the Linkcheck object should you need to do anything more complicated (like generate URLs from slug fields etc).

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

13 Feb 2026 5:00pm GMT

Use your Claude Max subscription as an API with CLIProxyAPI

So here's the thing: I'm paying $100/month for Claude Max. I use it a lot, it's worth it. But then I wanted to use my subscription with my Emacs packages - specifically forge-llm (which I wrote!) for generating PR descriptions in Forge, and magit-gptcommit for auto-generating commit messages in Magit. Both packages use the llm package, which supports OpenAI-compatible endpoints.

The problem? Anthropic blocks OAuth tokens from being used directly with third-party API clients. You have to pay for API access separately. 🤔

That felt wrong. I'm already paying for the subscription, why can't I use it however I want?

Turns out, there's a workaround. The Claude Code CLI can use OAuth tokens. So if you put a proxy in front of it that speaks the OpenAI API format, you can use your Max subscription with basically anything that supports OpenAI endpoints. And that's exactly what CLIProxyAPI does.

Your App (Emacs llm package, scripts, whatever)
↓
HTTP Request (OpenAI format)
↓
CLIProxyAPI
↓
OAuth Token (from your Max subscription)
↓
Anthropic API
↓
Response → OpenAI format → Your App

No extra API costs. Just your existing subscription. Sweet!
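Conceptually, the proxy's core job is translating between the two wire formats. Here's a minimal, illustrative sketch of the response-side mapping (field names come from the public Anthropic Messages and OpenAI chat-completions schemas; this is not CLIProxyAPI's actual code):

```python
def anthropic_to_openai(resp: dict) -> dict:
    """Map an Anthropic Messages API response onto the OpenAI chat shape."""
    # Anthropic returns a list of content blocks; concatenate the text ones.
    text = "".join(b["text"] for b in resp["content"] if b["type"] == "text")
    return {
        "object": "chat.completion",
        "model": resp["model"],
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            # Anthropic's "end_turn" corresponds to OpenAI's "stop".
            "finish_reason": "stop" if resp["stop_reason"] == "end_turn"
                             else resp["stop_reason"],
        }],
    }
```

The request direction is the mirror image, plus the OAuth token handling that makes the whole thing work.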

Why CLIProxyAPI and not something else?

I actually tried claude-max-api-proxy first. It worked! But the model list was outdated (no Opus 4.5, no Sonnet 4.5), it's a Node.js project that wraps the CLI as a subprocess, and it felt a bit… abandoned.

CLIProxyAPI is a completely different story:

What you'll need

Installation

Linux

There's a community installer that does everything for you: downloads the latest binary to ~/cliproxyapi/, generates API keys, creates a systemd service:

curl -fsSL https://raw.githubusercontent.com/brokechubb/cliproxyapi-installer/refs/heads/master/cliproxyapi-installer | bash

If you're on Arch (btw):

yay -S cli-proxy-api-bin

macOS

Homebrew. Easy:

brew install cliproxyapi

Authenticating with Claude

Before the proxy can use your subscription, you need to log in:

# Linux
cd ~/cliproxyapi
./cli-proxy-api --claude-login

# macOS (Homebrew)
cliproxyapi --claude-login

This opens your browser for the OAuth flow. Log in with your Claude account, authorize it, done. The token gets saved to ~/.cli-proxy-api/.

If you're on a headless machine, add --no-browser and it'll print the URL for you to open elsewhere:

./cli-proxy-api --claude-login --no-browser

Configuration

The installer generates a config.yaml with random API keys. These are keys that clients use to authenticate to your proxy, not Anthropic keys.

Here's what I'm running:

# Bind to localhost only since I'm using it locally
host: "127.0.0.1"

# Server port
port: 8317

# Authentication directory
auth-dir: "~/.cli-proxy-api"

# No client auth needed for local-only use
api-keys: []

# Keep it quiet
debug: false

The important bit is api-keys: []. Setting it to an empty list disables client authentication, which means any app on your machine can hit the proxy without needing a key. This is fine if you're only using it locally.

If you're exposing the proxy to your network (e.g., you want to hit it from your phone or another machine), keep the generated API keys and also set host: "" so it binds to all interfaces. You don't want random people on your network burning through your subscription.
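For that networked case, the config might look like this (a sketch; the key value is a placeholder — use whatever the installer generated for you):

```yaml
host: ""                      # empty string binds to all interfaces
port: 8317
auth-dir: "~/.cli-proxy-api"
api-keys:
  - "replace-with-your-generated-key"
debug: false
```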

Starting the service

Linux (systemd)

The installer creates a systemd user service for you:

systemctl --user enable --now cliproxyapi.service
systemctl --user status cliproxyapi.service

Or just run it manually to test first:

cd ~/cliproxyapi
./cli-proxy-api

macOS (Homebrew)

brew services start cliproxyapi

Testing it

Let's make sure everything works:

# List available models
curl http://localhost:8317/v1/models

# Chat completion
curl -X POST http://localhost:8317/v1/chat/completions \
 -H "Content-Type: application/json" \
 -d '{
 "model": "claude-sonnet-4-20250514",
 "messages": [{"role": "user", "content": "Say hello in one sentence."}]
 }'

# Streaming (note the -N flag to disable curl buffering)
curl -N -X POST http://localhost:8317/v1/chat/completions \
 -H "Content-Type: application/json" \
 -d '{
 "model": "claude-sonnet-4-20250514",
 "messages": [{"role": "user", "content": "Say hello in one sentence."}],
 "stream": true
 }'

If you get a response from Claude, you're golden. 🎉

Using it with Emacs

This is the fun part. Both forge-llm and magit-gptcommit use the llm package for their LLM backend. The llm package has an OpenAI-compatible provider, so we just need to point it at our proxy.

Setting up the llm provider

First, make sure you have the llm package installed. Then configure an OpenAI provider that points to CLIProxyAPI:

(require 'llm-openai)

(setq my/claude-via-proxy
 (make-llm-openai-compatible
 :key "not-needed"
 :chat-model "claude-sonnet-4-20250514"
 :url "http://localhost:8317/v1"))

That's it. That's the whole LLM setup. Now we can use it everywhere.

forge-llm (PR descriptions)

I wrote forge-llm to generate PR descriptions in Forge using LLMs. It analyzes the git diff, picks up your repository's PR template, and generates a structured description. To use it with CLIProxyAPI:

(use-package forge-llm
 :after forge
 :config
 (forge-llm-setup)
 (setq forge-llm-llm-provider my/claude-via-proxy))

Now when you're creating a PR in Forge, you can hit SPC m g (Doom) or run forge-llm-generate-pr-description and Claude will write the description based on your diff. Using your subscription. No API key needed.

magit-gptcommit (commit messages)

magit-gptcommit does the same thing but for commit messages. It looks at your staged changes and generates a conventional commit message. Setup:

(use-package magit-gptcommit
 :after magit
 :config
 (setq magit-gptcommit-llm-provider my/claude-via-proxy)
 (magit-gptcommit-mode 1)
 (magit-gptcommit-status-buffer-setup))

Now in the Magit commit buffer, you can generate a commit message with Claude. Again, no separate API costs.

Any other llm-based package

The beauty of the llm package is that any Emacs package that uses it can benefit from this setup. Just pass my/claude-via-proxy as the provider. Some other packages that use llm: ellama, ekg, llm-refactoring. They'll all work with your Max subscription through the proxy.

Using it with other tools

Since CLIProxyAPI speaks the OpenAI API format, it works with anything that supports custom OpenAI endpoints. The magic three settings are always the same:

Here's a Python example using the OpenAI SDK:

from openai import OpenAI

client = OpenAI(
 base_url="http://localhost:8317/v1",
 api_key="not-needed"
)

response = client.chat.completions.create(
 model="claude-sonnet-4-20250514",
 messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Available models

CLIProxyAPI exposes all models available through your subscription. The names use the full dated format. You can always check the list with:

curl -s http://localhost:8317/v1/models | jq '.data[].id'

At the time of writing, you'll get Claude Opus 4, Sonnet 4, Sonnet 4.5, Haiku 4.5, and whatever else Anthropic has made available to Max subscribers.
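If you'd rather do the extraction from Python than jq, it's a one-liner over the /v1/models payload (the payload below is a hand-made sample for illustration, not live output):

```python
# Sample /v1/models response body (OpenAI list format).
models_payload = {
    "object": "list",
    "data": [
        {"id": "claude-opus-4-20250514", "object": "model"},
        {"id": "claude-sonnet-4-20250514", "object": "model"},
    ],
}

# Equivalent of: jq '.data[].id'
model_ids = [m["id"] for m in models_payload["data"]]
print(model_ids)  # → ['claude-opus-4-20250514', 'claude-sonnet-4-20250514']
```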

How much does this save?

If you're already paying for Claude Max, this is basically free API access. For context:

Usage                      API cost   With CLIProxyAPI
1M input tokens/month      ~$15       $0 (included)
500K output tokens/month   ~$37.50    $0 (included)
Monthly total              ~$52.50    $0 extra

And those numbers add up quick when you're generating PR descriptions and commit messages all day. I was getting to the point where my API costs were approaching the subscription price, which is silly when you think about it.
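The arithmetic behind that table is simple. A quick sketch using the per-million-token rates implied by the table (~$15/M input, ~$75/M output):

```python
def monthly_api_cost(input_tokens, output_tokens,
                     input_rate=15.0, output_rate=75.0):
    """API cost in dollars at per-million-token rates."""
    return (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * output_rate

print(monthly_api_cost(1_000_000, 500_000))  # 52.5
```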

Conclusion

The whole setup took me about 10 minutes. Download binary, authenticate, edit config, start service, point my Emacs llm provider at it. That's it.

What I love about CLIProxyAPI is that it's exactly the kind of tool I appreciate: a single binary, a YAML config, does one thing well, and gets out of your way. No magic, no framework, no runtime dependencies. And since it's OpenAI-compatible, it plays nicely with the entire llm package ecosystem in Emacs.

The project is at https://github.com/router-for-me/CLIProxyAPI and the community is very active. If you run into issues, their GitHub issues are responsive.

See you in the next one!

13 Feb 2026 6:00am GMT

11 Feb 2026

Improving Django - Adam Hill

🔗 Links

📦 Projects

📚 Books

🎥 YouTube

Sponsor

This episode was brought to you by Buttondown, the easiest way to start, send, and grow your email newsletter. New customers can save 50% off their first year with Buttondown using the coupon code DJANGO.

11 Feb 2026 5:00pm GMT

09 Feb 2026

Claude Code from the beach: My remote coding setup with mosh, tmux and ntfy

I recently read this awesome post by Granda about running Claude Code from a phone, and I thought: I need this in my life. The idea is simple: kick off a Claude Code task, pocket the phone, go do something fun, and get a notification when Claude needs your help or finishes working. Async development from anywhere.

But my setup is a bit different from his. I'm not using Tailscale or a cloud VM. I already have a WireGuard VPN connecting my devices, a home server, and a self-hosted ntfy instance. So I built my own version, tailored to my infrastructure.

Here's the high-level architecture:

┌──────────┐ mosh ┌─────────────┐ ssh ┌─────────────┐
│ Phone │───────────────▶ │ Home Server │───────────────▶ │ Work PC │
│ (Termux) │ WireGuard │ (Jump Box) │ LAN │(Claude Code)│
└──────────┘ └─────────────┘ └──────┬──────┘
▲ │
│ ntfy (HTTPS) │
└─────────────────────────────────────────────────────────────┘

The loop is: I'm at the beach, I type cc on my phone, I land in a tmux session with Claude Code. I give it a task, pocket the phone, and go back to whatever I was doing. When Claude has a question or finishes, my phone buzzes. I pull it out, respond, pocket it again. Development fits into the gaps of the day.

And here's what the async development loop looks like in practice:

 📱 Phone 💻 Work PC 🔔 ntfy
│ │ │
│──── type 'cc' ────────────▶│ │
│──── give Claude a task ───▶│ │
│ │ │
│ ┌─────────────────┐ │ │
│ │ pocket phone │ │ │
│ └─────────────────┘ │ │
│ │ │
│ │── hook fires ────────────▶│
│◀── "Claude needs input" ───────────────────────────────│
│ │ │
│──── respond ──────────────▶│ │
│ │ │
│ ┌─────────────────┐ │ │
│ │ pocket phone │ │ │
│ └─────────────────┘ │ │
│ │ │
│ │── hook fires ────────────▶│
│◀── "Task complete" ────────────────────────────────────│
│ │ │
│──── review, approve PR ───▶│ │
│ │ │

Why not just use the blog post's setup?

Granda's setup uses Tailscale for VPN, a Vultr cloud VM, Termius as the mobile terminal, and Poke for notifications. It's clean and it works. But I had different constraints:

If you don't have this kind of infrastructure already, Granda's approach is probably simpler. But if you're the kind of person who already has a WireGuard mesh and self-hosted services, this guide is for you.

The pieces

Component     Purpose                               Alternatives
WireGuard     VPN to reach home network             Tailscale, ZeroTier, Nebula
mosh          Network-resilient shell (phone leg)   Eternal Terminal (et), plain SSH
SSH           Secure connection (LAN leg)           mosh (if you want it end-to-end)
tmux          Session persistence                   screen, zellij
Claude Code   The actual work                       -
ntfy          Push notifications                    Pushover, Gotify, Poke, Telegram
Termux        Android terminal emulator             Termius, JuiceSSH, ConnectBot
fish shell    Shell on all machines                 zsh, bash

The key insight is that you need two different types of resilience: mosh handles the flaky mobile connection (WiFi to cellular transitions, dead zones, phone sleeping), while tmux handles session persistence (close the app, reopen hours later, everything's still there). Together they make mobile development actually viable.

Why the double SSH? Why not make the work PC a WireGuard peer?

You might be wondering: if I already have a WireGuard network, why not just add the work PC as a peer and mosh straight into it from my phone?

The short answer: it's my employer's machine. It has monitoring software installed: screen grabbing, endpoint policies, the works. Installing WireGuard on it would mean running a VPN client that tunnels traffic through my personal infrastructure, which is the kind of thing that raises flags with IT security. I don't want to deal with that conversation.

SSH, on the other hand, is standard dev tooling. An openssh-server on a Linux machine is about as unremarkable as it gets.

So instead, my home server acts as a jump box. My phone connects to the home server over WireGuard (that's all personal infrastructure, no employer involvement), and then the home server SSHs into the work PC over the local network. The work PC only needs an SSH server, no VPN client, no weird tunnels, nothing that would make the monitoring software blink.

 ┌──────────────────────────────────────────────────┐
│ My Infrastructure │
│ │
│ ┌───────────┐ WireGuard ┌──────────────┐ │
│ │ Phone │◀──────────────▶│ WG Server │ │
│ │ (peer) │ tunnel │ │ │
│ └─────┬─────┘ └──────┬───────┘ │
│ │ │ │
│ │ mosh WireGuard │ │
│ │ (through tunnel) tunnel │ │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────────┐ │
│ │ Home Server │◀───────────────────────────────│
│ │ (peer) │ │
│ └──────┬───────┘ │
│ │ │
└─────────┼────────────────────────────────────────┘
│
│ ssh (LAN)
│
┌─────────┼────────────────────────────────────────┐
│ ▼ │
│ ┌────────────┐ │
│ │ Work PC │ │
│ │ (SSH only) │ Employer Infrastructure │
│ └────────────┘ │
└──────────────────────────────────────────────────┘

As a bonus, this means the work PC has zero exposure to the public internet. It only accepts SSH from machines on my local network. Defense in depth.

Phase 1: SSH server on the work PC

My work PC is running Ubuntu 24.04. First thing: install and harden the SSH server.

sudo apt update && sudo apt install -y openssh-server
sudo systemctl enable ssh

Note: on Ubuntu 24.04 the service is called ssh, not sshd. This tripped me up.

Then harden the config. I edited /etc/ssh/sshd_config with:

PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowAgentForwarding no
X11Forwarding no
UsePAM yes
MaxAuthTries 3
ClientAliveInterval 60
ClientAliveCountMax 3

Key-only auth, no root login, no password auth. Since the machine is only accessible through my local network, this is plenty secure.

Setting up SSH keys for the home server → work PC connection

On the home server, generate a key pair if you don't already have one:

ssh-keygen -t ed25519 -C "homeserver->workpc"

Accept the default path (~/.ssh/id_ed25519). Then copy the public key to the work PC:

ssh-copy-id roger@<work-pc-ip>

Now restart the SSH service:

sudo systemctl restart ssh

Important: Test the SSH connection from your home server before closing your current session. Don't lock yourself out.

# From the home server
ssh roger@<work-pc-ip>

If it drops you into a shell without asking for a password, you're golden.

Alternative: Tailscale

If you don't have a WireGuard setup, Tailscale is the easiest way to get a private network going. Install it on your phone and your work PC, and they can see each other directly. No jump host needed, no port forwarding, no firewall rules. It's honestly magic for this kind of thing. The only reason I don't use it is because I already had WireGuard running before Tailscale existed.

Phase 2: tmux + auto-attach

The idea here is simple: every time I SSH into the work PC, I want to land directly in a tmux session. If the session already exists, attach to it. If not, create one.

First, ~/.tmux.conf:

# mouse support (essential for thumbing it on the phone)
set -g mouse on

# start window numbering at 1 (easier to reach on phone keyboard)
set -g base-index 1
setw -g pane-base-index 1

# status bar
set -g status-style 'bg=colour235 fg=colour136'
set -g status-left '#[fg=colour46][#S] '
set -g status-right '#[fg=colour166]%H:%M'
set -g status-left-length 30

# longer scrollback
set -g history-limit 50000

# reduce escape delay (makes editors snappier over SSH)
set -sg escape-time 10

# keep sessions alive
set -g destroy-unattached off

Mouse support is essential when you're using your phone. Being able to tap to select panes, scroll with your finger, and resize things makes a massive difference.

Then in ~/.config/fish/config.fish on the work PC:

if set -q SSH_CONNECTION; and not set -q TMUX
 tmux attach -t claude 2>/dev/null; or tmux new -s claude -c ~/projects/my-app
end

This checks for SSH_CONNECTION so it only auto-attaches when I'm remoting in. When I'm physically at the machine, I use the terminal normally without tmux. This distinction becomes important later for notifications.
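If you're on bash rather than fish, the equivalent in ~/.bashrc would be something like this (untested sketch, same logic):

```shell
# Auto-attach to the "claude" tmux session on remote logins only.
if [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
  tmux attach -t claude 2>/dev/null || tmux new -s claude -c "$HOME/projects/my-app"
fi
```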

Phase 3: Claude Code hooks + ntfy

This is the fun part. Claude Code has a hook system that lets you run commands when certain events happen. We're going to hook into three events:

The notification script

First, the script that sends notifications. I created ~/.claude/hooks/notify.sh:

#!/usr/bin/env bash

# Only notify if we're in an SSH-originated tmux session
if ! tmux show-environment SSH_CONNECTION 2>/dev/null | grep -q SSH_CONNECTION=; then
 exit 0
fi

EVENT_TYPE="${1:-unknown}"
NTFY_URL="https://ntfy.example.com/claude-code"
NTFY_TOKEN="tk_your_token_here"

EVENT_DATA=$(cat)

case "$EVENT_TYPE" in
 question)
 TITLE="🤔 Claude needs input"
 PRIORITY="high"
 MESSAGE=$(echo "$EVENT_DATA" | jq -r '.tool_input.question // .tool_input.questions[0].question // "Claude has a question for you"' 2>/dev/null)
 ;;
 stop)
 TITLE="✅ Claude finished"
 PRIORITY="default"
 MESSAGE="Task complete"
 ;;
 error)
 TITLE="❌ Claude hit an error"
 PRIORITY="high"
 MESSAGE=$(echo "$EVENT_DATA" | jq -r '.error // "Something went wrong"' 2>/dev/null)
 ;;
 *)
 TITLE="Claude Code"
 PRIORITY="default"
 MESSAGE="Event: $EVENT_TYPE"
 ;;
esac

PROJECT=$(basename "$PWD")

curl -s \
 -H "Authorization: Bearer $NTFY_TOKEN" \
 -H "Title: $TITLE" \
 -H "Priority: $PRIORITY" \
 -H "Tags: computer" \
 -d "[$PROJECT] $MESSAGE" \
 "$NTFY_URL" > /dev/null 2>&1

Make the script executable:

chmod +x ~/.claude/hooks/notify.sh

The SSH_CONNECTION check at the top is crucial: it prevents notifications from firing when I'm sitting at the machine. Since I only use tmux when SSHing in remotely, the tmux environment will only have SSH_CONNECTION set when I'm remote. Neat trick.

Claude Code settings

Then in ~/.claude/settings.json:

{
 "hooks": {
 "PreToolUse": [
 {
 "matcher": "AskUserQuestion",
 "hooks": [
 {
 "type": "command",
 "command": "~/.claude/hooks/notify.sh question"
 }
 ]
 }
 ],
 "Stop": [
 {
 "hooks": [
 {
 "type": "command",
 "command": "~/.claude/hooks/notify.sh stop"
 }
 ]
 }
 ]
 }
}

This is the global settings file. If your project also has a .claude/settings.json, they'll be merged. No conflicts.

ntfy setup

I'm self-hosting ntfy, so I created a topic and an access token:

# Inside your ntfy server/container
ntfy token add --expires=30d your-username
ntfy access your-username claude-code rw
ntfy access everyone claude-code deny

ntfy topics are created on demand, so just subscribing to one creates it. On the Android ntfy app, I pointed it at my self-hosted instance and subscribed to the claude-code topic.

You can test the whole thing works with:

echo '{"tool_input":{"question":"Should I refactor this?"}}' | ~/.claude/hooks/notify.sh question
echo '{}' | ~/.claude/hooks/notify.sh stop
echo '{"error":"ModuleNotFoundError: No module named foo"}' | ~/.claude/hooks/notify.sh error

Three notifications, three different priorities. Very satisfying.

Alternative notification systems

If you don't want to self-host ntfy, here are some options:

Phase 4: Termux setup

Termux is the terminal emulator on my Android phone. Here's how I set it up.

pkg update && pkg install -y mosh openssh fish

SSH into your phone (for easier setup)

Configuring all of this on a phone keyboard is painful. I set up sshd on Termux so I could configure it from my PC.

In ~/.config/fish/config.fish:

sshd 2>/dev/null

This starts sshd every time you open Termux. If it's already running, it silently fails. Termux runs sshd on port 8022 by default.

First, set a password on Termux (you'll need it for the initial key copy):

passwd

Then from your PC, copy your key and test the connection:

ssh-copy-id -p 8022 <phone-ip>
ssh -p 8022 <phone-ip>

Now you can configure Termux comfortably from your PC keyboard.

Generating SSH keys on the phone

On Termux, generate a key pair:

ssh-keygen -t ed25519 -C "phone"

Then copy it to your home server:

ssh-copy-id <your-user>@<home-server-wireguard-ip>

This gives you passwordless phone → home server. Since we already set up home server → work PC keys in Phase 1, the full chain is now passwordless.

SSH config

The SSH config is where the magic happens. On Termux:

Host home
    HostName <home-server-wireguard-ip>
    User <your-user>

Host work
    HostName <work-pc-ip>
    User roger
    ProxyJump home

ProxyJump is the key: ssh work automatically hops through the home server (it's equivalent to the one-off ssh -J home work). No manual double-SSHing.

Fish aliases

These are the aliases that make everything a one-command operation:

# Connect to work PC, land in tmux with Claude Code ready
alias cc="mosh home -- ssh -t work"

# New tmux window in the claude session
alias cn="mosh home -- ssh -t work 'tmux new-window -t claude -c \$HOME/projects/my-app'"

# List tmux windows
alias cl="ssh work 'tmux list-windows -t claude'"

cc is all I need to type. Mosh handles the phone-to-home-server connection (surviving WiFi/cellular transitions), SSH handles the home-server-to-work-PC hop over the LAN, and the fish config on the work PC auto-attaches to tmux.

Alternative: Termius

If you're on iOS (or just prefer a polished app), Termius is what Granda uses. It supports mosh natively and has a nice UI. The downside is it's a subscription for the full features. Termux is free and gives you a full Linux environment, but it's Android-only and definitely more rough around the edges.

Other options: JuiceSSH (Android, no mosh), ConnectBot (Android, no mosh). Mosh support is really the killer feature here, so Termux or Termius are the best choices.

Phase 5: The full flow

Here's what my actual workflow looks like:

  1. I'm at the beach/coffee shop/couch/wherever 🏖️
  2. Open Termux, type cc
  3. I'm in my tmux session on my work PC
  4. Start Claude Code, give it a task: "add pagination to the user dashboard API and update the tests"
  5. Pocket the phone
  6. Phone buzzes: "🤔 Claude needs input - Should I use cursor-based or offset-based pagination?"
  7. Pull out phone, Termux is still connected (thanks mosh), type "cursor-based, use the created_at field"
  8. Pocket the phone again
  9. Phone buzzes: "✅ Claude finished - Task complete"
  10. Review the changes, approve the PR, go back to the beach

The key thing that makes this work is the combination of mosh (connection survives me pocketing the phone) + tmux (session survives even if mosh dies) + ntfy (I don't have to keep checking the screen). Without any one of these three, the experience breaks down.

Security considerations

A few things to keep in mind:

Conclusion

The whole setup took me about an hour to put together. The actual configuration is pretty minimal: an SSH server, a tmux config, a notification script, and some fish aliases.

What I love about this setup is that it's all stuff I already had. WireGuard was already running, ntfy was already self-hosted, Termux was already on my phone. I just wired them together with a few scripts and some Claude Code hooks.

If you have a similar homelab setup, you can probably get this running in 30 minutes. If you're starting from scratch, Granda's cloud VM approach is probably easier. Either way, async coding from your phone is genuinely a game changer.

See you in the next one!

09 Feb 2026 6:00am GMT