27 Feb 2026
Django community aggregator: Community blog posts
Using tox to Test a Django App Across Multiple Django Versions

Recently, I developed a reusable Django app, django-clearplaintext, for normalizing plain text in Django templates. To package and test it properly, I took a fresh look at Tox.
Tox is the standard testing tool that creates isolated virtual environments, installs the exact dependencies you specify, and runs your test suite in each one - all from a single command.
This post walks through a complete, working setup using a minimal example app called django-shorturl.
The Example App: django-shorturl
django-shorturl is a self-contained Django app with one model and one view.
shorturl/models.py
from django.db import models
from django.utils.translation import gettext_lazy as _

class ShortLink(models.Model):
    slug = models.SlugField(_("slug"), unique=True)
    target_url = models.URLField(_("target URL"))
    created_at = models.DateTimeField(_("created at"), auto_now_add=True)

    class Meta:
        verbose_name = _("short link")
        verbose_name_plural = _("short links")

    def __str__(self):
        return self.slug
shorturl/views.py
from django.shortcuts import get_object_or_404, redirect
from .models import ShortLink

def redirect_link(request, slug):
    link = get_object_or_404(ShortLink, slug=slug)
    return redirect(link.target_url)
shorturl/urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("<slug:slug>/", views.redirect_link, name="redirect_link"),
]
shorturl/admin.py
from django.contrib import admin
from .models import ShortLink
admin.site.register(ShortLink)
Project Layout
django-shorturl/
├── src/
│   └── shorturl/
│       ├── __init__.py
│       ├── admin.py
│       ├── models.py
│       ├── views.py
│       └── urls.py
├── tests/
│   ├── __init__.py
│   └── test_views.py
├── pyproject.toml
├── test_settings.py
└── tox.ini
The source lives under src/ and the tests are at the top level, separate from the package. This separation prevents the tests from accidentally being shipped inside the installed package.
Packaging: pyproject.toml
Tox needs a properly packaged app to install into each environment. With isolated_build = true (more on that below), Tox builds a wheel from your pyproject.toml before running any tests.
pyproject.toml
[project]
name = "django-shorturl"
version = "1.0.0"
requires-python = ">=3.8"
dependencies = [
    "Django>=4.2",
]
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
where = ["src"]
The dependencies list here declares the runtime minimum - your app needs Django, but you don't pin a specific version because that is Tox's job during testing.
For the [build-system] section, we can also use uv_build to gain some performance improvements:
[build-system]
requires = ["uv_build >= 0.10.0, <0.11.0"]
build-backend = "uv_build"
[tool.uv.build-backend]
module-name = "shorturl"
Here module-name tells uv_build which module to package, so it doesn't get confused between the distribution name django-shorturl and the importable package shorturl.
Test Settings: test_settings.py
Django requires a settings module to run. Since a reusable app has no project associated with it, we create a minimal settings module dedicated to testing. It lives at the repo root so it's easy to point to from anywhere.
test_settings.py
SECRET_KEY = "test"

INSTALLED_APPS = [
    "shorturl",
]

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

ROOT_URLCONF = "shorturl.urls"

DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
A few deliberate choices here:
- SECRET_KEY = "test" - A fixed value, fine for tests, but never use this in production.
- INSTALLED_APPS - Only include apps that your tests actually need. No django.contrib.admin, no auth, nothing extra.
- SQLite in-memory database - ":memory:" means the database is created fresh for every test run and disappears when the process exits. No files left behind, no teardown needed, and it is fast.
- ROOT_URLCONF - The test client resolves URLs through this setting. Without it, reverse() raises NoReverseMatch and the test client has no URL configuration to dispatch against. Point it at your app's urls.py.
- DEFAULT_AUTO_FIELD - Suppresses Django's system check warning about the implicit primary key type. Setting it explicitly keeps the test output clean and makes the expectation clear.
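None of this is Django-specific magic; the plain sqlite3 module shows what ":memory:" means in practice. The shortlink table below is a hypothetical stand-in for the app's model, used purely for illustration:

```python
import sqlite3

# ":memory:" at the sqlite3 level: each connection gets a fresh,
# disk-free database that vanishes when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shortlink (slug TEXT UNIQUE, target_url TEXT)")
conn.execute(
    "INSERT INTO shortlink VALUES ('dt', 'https://www.djangotricks.com')"
)
row = conn.execute(
    "SELECT target_url FROM shortlink WHERE slug = 'dt'"
).fetchone()
print(row[0])  # https://www.djangotricks.com

# A second ":memory:" connection sees a brand-new, empty database:
other = sqlite3.connect(":memory:")
tables = other.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # []
```

This is exactly why no teardown is needed: the database is scoped to the connection, not to a file on disk.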
The Core: tox.ini
This is where Tox is configured.
tox.ini
[tox]
envlist =
    py{38,39,310,311,312}-django42,
    py{310,311,312}-django50,
    py{310,311,312,313}-django51,
    py{310,311,312,313,314}-django52,
    py{312,313,314}-django60
isolated_build = true

[testenv]
deps =
    django42: Django>=4.2,<4.3
    django50: Django>=5.0,<5.1
    django51: Django>=5.1,<5.2
    django52: Django>=5.2,<6.0
    django60: Django>=6.0,<6.1
commands =
    python -m django test
setenv =
    DJANGO_SETTINGS_MODULE = test_settings
envlist - the matrix
py{38,39,310,311,312}-django42 is a shortcut used in Tox.
The numbers inside {} are expanded automatically. Tox combines each Python version with django42, creating 5 environments:
- py38-django42
- py39-django42
- py310-django42
- py311-django42
- py312-django42
The full envlist simply lists all Python and Django combinations you want to test, so you can check that your project works in each setup.
Each part separated by a dash in an environment name is called a "factor". You can have as many factors as you like, and they can be named anything. py* factors are a convention for Python versions; other factors only take effect where you reference them, for example in the [testenv] deps section.
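The brace expansion itself is easy to picture. Here is a small, illustrative Python sketch of how a pattern like py{38,39,310}-django42 expands into environment names (this mimics the behavior for intuition only; it is not Tox's actual implementation):

```python
import itertools
import re

def expand(env: str) -> list[str]:
    """Expand tox-style {a,b,c} brace groups in an environment name."""
    # Split on brace groups; odd-indexed pieces are the comma lists inside {}.
    parts = re.split(r"\{([^}]*)\}", env)
    choices = [p.split(",") if i % 2 else [p] for i, p in enumerate(parts)]
    return ["".join(combo) for combo in itertools.product(*choices)]

print(expand("py{38,39,310}-django42"))
# ['py38-django42', 'py39-django42', 'py310-django42']
```

Every brace group multiplies out, which is why one line like py{38,39,310,311,312}-django42 yields five environments.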
isolated_build = true
This tells tox to build a proper wheel from your pyproject.toml before installing into each environment. Without it, tox would try to install your package with pip install -e ., which bypasses the build system and can hide packaging bugs. With it, each environment tests the package exactly as a user would receive it after pip install django-shorturl.
deps - conditional dependencies
The django42: prefix is a Tox factor condition: the dependency on that line is only installed when the environment name contains the django42 factor. This is how a single [testenv] block handles all Django versions without needing a separate section for each one.
Tox also installs your package itself into each environment (because of isolated_build), so you don't need to list it here.
commands
commands =
python -m django test
python -m django test is Django's built-in test runner. It discovers tests by looking for files matching test*.py under the current directory, which picks up everything in your tests/ folder automatically.
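As a rough illustration, the default test*.py discovery pattern behaves like an fnmatch filter over file names (a simplified sketch, not the runner's actual code; the file names are made up for the example):

```python
import fnmatch

# Django's default discovery pattern is "test*.py"; fnmatch shows
# which files in a directory it would pick up.
files = ["test_views.py", "views.py", "conftest.py", "test_models.py"]
matches = [f for f in files if fnmatch.fnmatch(f, "test*.py")]
print(matches)  # ['test_views.py', 'test_models.py']
```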
setenv
setenv =
DJANGO_SETTINGS_MODULE = test_settings
Django refuses to run without a settings module. This environment variable tells it where to find yours. Because test_settings.py is at the repo root and tox runs from the repo root, the module name test_settings resolves correctly without any path manipulation.
Writing the Tests
Create test cases for each (critical) component of your app. For example, if you have models, views, and template tags, create tests/test_models.py, tests/test_views.py, and tests/test_templatetags.py.
tests/test_views.py
from django.test import TestCase
from django.urls import reverse
from shorturl.models import ShortLink

class RedirectLinkViewTest(TestCase):
    def setUp(self):
        ShortLink.objects.create(
            slug="dt",
            target_url="https://www.djangotricks.com",
        )

    def test_redirects_to_target_url(self):
        response = self.client.get(
            reverse("redirect_link", kwargs={"slug": "dt"})
        )
        self.assertRedirects(
            response,
            "https://www.djangotricks.com",
            fetch_redirect_response=False,
        )

    def test_returns_404_for_unknown_slug(self):
        response = self.client.get(
            reverse("redirect_link", kwargs={"slug": "nope"})
        )
        self.assertEqual(response.status_code, 404)
Installing Python Versions with pyenv
Tox needs the actual Python binaries for every version in your envlist. If you try to run tox without them installed, it will fail immediately with an InterpreterNotFound error. pyenv is the standard way to install and manage multiple Python versions side by side.
Install pyenv
Use Homebrew on macOS (or follow the official instructions for Linux):
brew install pyenv
Add the following to your shell config (~/.zshrc, ~/.bashrc, etc.) and restart your shell:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
Install each Python version
Install every version that appears in your envlist:
pyenv install 3.8
pyenv install 3.9
pyenv install 3.10
pyenv install 3.11
pyenv install 3.12
pyenv install 3.13
pyenv install 3.14
Make them all reachable at once
Tox resolves py312 by looking for a binary named python3.12 on PATH. The trick is pyenv global, which accepts multiple versions and places all of their binaries on your PATH simultaneously:
pyenv global 3.14 3.13 3.12 3.11 3.10 3.9 3.8
List the primary version first (the one python3 and python will resolve to) and work downward. After running this, confirm every interpreter is visible:
python3.8 --version # Python 3.8.x
python3.9 --version # Python 3.9.x
python3.10 --version # Python 3.10.x
python3.11 --version # Python 3.11.x
python3.12 --version # Python 3.12.x
python3.13 --version # Python 3.13.x
python3.14 --version # Python 3.14.x
Now tox can find all of them and the full matrix will run without InterpreterNotFound errors.
Running tox
Run the full matrix:
tox
Or run a single environment:
tox -e py312-django52
tox will print a summary at the end showing which environments passed and which failed.
py38-django42: OK (3.25=setup[2.32]+cmd[0.93] seconds)
py39-django42: OK (2.88=setup[2.16]+cmd[0.72] seconds)
py310-django42: OK (2.61=setup[2.02]+cmd[0.59] seconds)
py311-django42: OK (2.70=setup[2.09]+cmd[0.61] seconds)
py312-django42: OK (3.28=setup[2.46]+cmd[0.82] seconds)
py310-django50: OK (2.67=setup[2.09]+cmd[0.58] seconds)
py311-django50: OK (2.61=setup[2.02]+cmd[0.59] seconds)
py312-django50: OK (2.85=setup[2.25]+cmd[0.60] seconds)
py310-django51: OK (2.81=setup[2.27]+cmd[0.54] seconds)
py311-django51: OK (2.85=setup[2.30]+cmd[0.55] seconds)
py312-django51: OK (2.70=setup[2.09]+cmd[0.61] seconds)
py313-django51: OK (2.97=setup[2.29]+cmd[0.68] seconds)
py310-django52: OK (3.03=setup[2.31]+cmd[0.72] seconds)
py311-django52: OK (2.88=setup[2.22]+cmd[0.66] seconds)
py312-django52: OK (2.80=setup[2.13]+cmd[0.67] seconds)
py313-django52: OK (4.70=setup[3.66]+cmd[1.04] seconds)
py314-django52: OK (6.41=setup[5.18]+cmd[1.23] seconds)
py312-django60: OK (5.13=setup[4.06]+cmd[1.07] seconds)
py313-django60: OK (5.35=setup[4.15]+cmd[1.21] seconds)
py314-django60: OK (6.01=setup[4.65]+cmd[1.37] seconds)
congratulations :) (70.59 seconds)
Final Words
What makes this setup robust?
- No shared state between environments. Each Tox environment is its own virtualenv with its own Django installation.
- The package is built, not symlinked. isolated_build = true catches packaging mistakes before they reach users.
- The database never persists between runs. SQLite in-memory means no stale data, no cleanup scripts, no CI-specific teardown.
- The test settings are minimal by design. Fewer installed apps means faster startup, fewer implicit dependencies, and tests that fail for clear, local reasons rather than configuration noise from elsewhere in the project.
This setup is not the only way to test a Django app with Tox, but it is a solid starting point that balances comprehensiveness with maintainability. With a little effort upfront, you can ensure your app works across a wide range of Python and Django versions - and catch packaging bugs before they hit real users.
27 Feb 2026 6:00pm GMT
Planet Python
Real Python: The Real Python Podcast – Episode #286: Overcoming Testing Obstacles With Python's Mock Object Library
Do you have complex logic and unpredictable dependencies that make it hard to write reliable tests? How can you use Python's mock object library to improve your tests? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
27 Feb 2026 12:00pm GMT
Real Python: Quiz: Dependency Management With Python Poetry
Test your understanding of Dependency Management With Python Poetry.
You'll revisit how to install Poetry the right way, create new projects, manage virtual environments, declare and group dependencies, work with lock files, and keep your dependencies up to date.
27 Feb 2026 12:00pm GMT
Ned Batchelder: Pytest parameter functions
Pytest's parametrize is a great feature for writing tests without repeating yourself needlessly. (If you haven't seen it before, read Starting with pytest's parametrize first). When the data gets complex, it can help to use functions to build the data parameters.
I've been working on a project involving multi-line data, and the parameterized test data was getting awkward to create and maintain. I created helper functions to make it nicer. The actual project is a bit gnarly, so I'll use a simpler example to demonstrate.
Here's a function that takes a multi-line string and returns two numbers, the lengths of the shortest and longest non-blank lines:
def non_blanks(text: str) -> tuple[int, int]:
    """Stats of non-blank lines: shortest and longest lengths."""
    lengths = [len(ln) for ln in text.splitlines() if ln]
    return min(lengths), max(lengths)
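A quick sanity check makes the expected tuples concrete (the function is repeated here so the snippet is self-contained):

```python
def non_blanks(text: str) -> tuple[int, int]:
    """Stats of non-blank lines: shortest and longest lengths."""
    lengths = [len(ln) for ln in text.splitlines() if ln]
    return min(lengths), max(lengths)

# "abcde" is 5 chars, "a" is 1; the blank line is ignored entirely.
print(non_blanks("abcde\na\nabc\n"))  # (1, 5)
print(non_blanks("x\n\nyyy\n"))       # (1, 3)
```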
We can test it with a simple parameterized test with two test cases:
import pytest
from non_blanks import non_blanks
@pytest.mark.parametrize(
    "text, short, long",
    [
        ("abcde\na\nabc\n", 1, 5),
        ("""\
A long line
The next line is blank:

Short.
Much much longer line, more than anyone thought.
""", 6, 48),
    ]
)
def test_non_blanks(text, short, long):
    assert non_blanks(text) == (short, long)
I really dislike how the multi-line string breaks the indentation flow, so I wrap strings like that in textwrap.dedent:
@pytest.mark.parametrize(
    "text, short, long",
    [
        ("abcde\na\nabc\n", 1, 5),
        (textwrap.dedent("""\
            A long line
            The next line is blank:

            Short.
            Much much longer line, more than anyone thought.
            """),
         6, 48),
    ]
)
(For brevity, this and following examples only show the parametrize decorator, the test function itself stays the same.)
This looks nicer, but I have to remember to use dedent, which adds a little bit of visual clutter. I also need to remember that first backslash so that the string won't start with a newline.
As the test data gets more elaborate, I might not want to have it all inline in the decorator. I'd like to have some of the large data in its own file:
@pytest.mark.parametrize(
    "text, short, long",
    [
        ("abcde\na\nabc\n", 1, 5),
        (textwrap.dedent("""\
            A long line
            The next line is blank:

            Short.
            Much much longer line, more than anyone thought.
            """),
         6, 48),
        (Path("gettysburg.txt").read_text(), 18, 80),
    ]
)
Now things are getting complicated. Here's where a function can help us. Each test case needs a string and three numbers. The string is sometimes provided explicitly, sometimes read from a file.
We can use a function to create the correct data for each case from its most convenient form. We'll take a string and use it as either a file name or literal data. We'll deal with the initial newline, and dedent the multi-line strings:
def nb_case(text, short, long):
    """Create data for test_non_blanks."""
    if "\n" in text:
        # Multi-line string: it's actual data.
        if text[0] == "\n":  # Remove a first newline
            text = text[1:]
        text = textwrap.dedent(text)
    else:
        # One-line string: it's a file name.
        text = Path(text).read_text()
    return (text, short, long)
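To make the multi-line branch concrete, here is a trimmed, self-contained sketch of nb_case() covering only that branch (the file-reading branch is omitted so the snippet has no filesystem dependency):

```python
import textwrap

def nb_case(text, short, long):
    """Build one parametrize tuple (multi-line branch only)."""
    if "\n" in text:
        if text[0] == "\n":  # drop the newline right after the opening quotes
            text = text[1:]
        text = textwrap.dedent(text)
    return (text, short, long)

case = nb_case("""
    A long line
    Short.
""", 6, 11)
print(repr(case[0]))  # 'A long line\nShort.\n'
```

The leading newline and the common indentation are both gone, so the test data reads naturally in the source while the function under test sees the clean string.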
Now the test data is more direct:
@pytest.mark.parametrize(
    "text, short, long",
    [
        nb_case("abcde\na\nabc\n", 1, 5),
        nb_case("""
            A long line
            The next line is blank:

            Short.
            Much much longer line, more than anyone thought.
            """,
            6, 48),
        nb_case("gettysburg.txt", 18, 80),
    ]
)
One nice thing about parameterized tests is that pytest creates a distinct id for each one. This helps with reporting failures and with selecting tests to run. But the id is made from the test data. Here, our last test case has an id using the entire Gettysburg Address, over 1500 characters. It was very short for a speech, but it's very long for an id!
This is what the pytest output looks like with our current ids:
test_non_blank.py::test_non_blanks[abcde\na\nabc\n-1-5] PASSED
test_non_blank.py::test_non_blanks[A long line\nThe next line is blank:\n\nShort.\nMuch much longer line, more than anyone thought.\n-6-48] PASSED
test_non_blank.py::test_non_blanks[Four score and seven years ago our fathers brought forth on this continent, a\nnew nation, conceived in Liberty, and dedicated to the proposition that all men\nare created equal.\n\nNow we are engaged in a great civil war, testing whether that nation, or any\nnation so conceived and so dedicated, can long endure. We are met on a great\nbattle-field of that war. We have come to dedicate a portion of that field, as a\nfinal resting place for those who here gave their lives that that nation might\nlive. It is altogether fitting and proper that we should do this.\n\nBut, in a larger sense, we can not dedicate \u2013 we can not consecrate we can not\nhallow \u2013 this ground. The brave men, living and dead, who struggled here, have\nconsecrated it far above our poor power to add or detract. The world will little\nnote, nor long remember what we say here, but it can never forget what they did\nhere. It is for us the living, rather, to be dedicated here to the unfinished\nwork which they who fought here have thus far so nobly advanced. It is rather\nfor us to be here dedicated to the great task remaining before us that from\nthese honored dead we take increased devotion to that cause for which they gave\nthe last full measure of devotion \u2013 that we here highly resolve that these dead\nshall not have died in vain that this nation, under God, shall have a new birth\nof freedom \u2013 and that government of the people, by the people, for the people,\nshall not perish from the earth.\n-18-80] PASSED
Even that first shortest test has an awkward and hard to use test name.
For more control over the test data, instead of creating tuples to use as test cases, you can use pytest.param to create the internal parameters object that pytest needs. Each of these can have an explicit id assigned. Pytest will still assign an id if you don't provide one.
Here's an updated nb_case() function using pytest.param:
def nb_case(text, short, long, id=None):
    if "\n" in text:
        # Multi-line string: it's actual data.
        if text[0] == "\n":  # Remove a first newline
            text = text[1:]
        text = textwrap.dedent(text)
    else:
        # One-line string: it's a file name.
        id = id or text
        text = Path(text).read_text()
    return pytest.param(text, short, long, id=id)
Now we can provide ids for test cases. The ones reading from a file will use the file name as the id:
@pytest.mark.parametrize(
    "text, short, long",
    [
        nb_case("abcde\na\nabc\n", 1, 5, id="little"),
        nb_case("""
            A long line
            The next line is blank:

            Short.
            Much much longer line, more than anyone thought.
            """,
            6, 48, id="four"),
        nb_case("gettysburg.txt", 18, 80),
    ]
)
Now our tests have useful ids:
test_non_blank.py::test_non_blanks[little] PASSED
test_non_blank.py::test_non_blanks[four] PASSED
test_non_blank.py::test_non_blanks[gettysburg.txt] PASSED
The exact details of my nb_case() function aren't important here. Your tests will need different helpers, and you might make different decisions about what to do for these tests. But a function like this lets you write your complex test cases in the way you like best, making your tests as concise, expressive, and readable as you want.
27 Feb 2026 11:53am GMT
25 Feb 2026
Django community aggregator: Community blog posts
Freelancing & Community - Andrew Miller
🔗 Links
- Personal website
- GitHub, Mastodon, and LinkedIn
- In Progress podcast
- Hamilton Rock
- Comprehension Debt
- Builder Methods
📦 Projects
📚 Books
- How to Build a LLM From Scratch by Sebastian Raschka
- World of Astrom
- Rob Walling books
- Jonathan Stark books
- David Kadavy books
- Manifesto of winning without pitching
🎥 YouTube
Sponsor
This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.
See what's possible at sixfeetup.com.
25 Feb 2026 6:00pm GMT
I Checked 5 Security Skills for Claude Code. Only One Is Worth Installing
I'm writing this in late February 2026. The skills ecosystem for Claude Code is moving fast, and the specific numbers and repos here will probably be outdated within a month. But the thinking still applies, so consider this a snapshot.
If you're using Claude Code, you've probably wondered: can …
25 Feb 2026 10:51am GMT
19 Feb 2026
Planet Twisted
Donovan Preston: Wello Horld.
Onovanday Restonpay is going to logbay here again. It's time to take back the rss-source-rss-reader web of links
19 Feb 2026 2:36am GMT
22 Jan 2026
Planet Plone - Where Developers And Integrators Write
Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, and Docker, and as game changer uv making the installation of Python packages much faster.
With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to setup the server and deploy to it. Our sysadmins already had some other scripts. So we needed to integrate that.
First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.
Maik Derstappen showed me copier, yet another templating language. Our idea: create a cookieplone project, and then use copier to modify it.
What about the deployment? We are on GitLab. We host our own runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This triggers a pipeline to check, test, and build. When it is merged, we bump the version using release-it.
Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.
For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.
Future improvements:
- Start the docker containers and curl/wget the /ok endpoint.
- Lock files for the backend, with pip/uv.
22 Jan 2026 9:43am GMT
Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.
There are several challenges when doing Plone migrations:
- Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
- Complex data structures. For example a Folder with a Link as default page, which pointed to some other content that had meanwhile been moved.
- Migrating Classic UI to Volto
- Also, you might be migrating from a completely different CMS to Plone.
How do we do migrations in Plone in general?
- In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
- Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.
Let's look at export/import, which has three parts:
- Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
- Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
- Load: Transmogrifier, collective.exportimport, plone.exportimport.
Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.
collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.
Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.
Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.
collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.
22 Jan 2026 9:43am GMT
Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.
I am team lead of the Plone Admin team, and work at kitconcept.
The current state: see the keynotes, lots happening on the frontend. Good.
The current state of our IT: very troubling and daunting.
This is not a 'blame game'. But focussing on resources and people this conference should be a first priority. We are a real volunteer organisation, nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.
The Admin team consists of 4 senior Plonistas as all-round admins, 2 release managers, and 2 CI/CD experts, including 3 former board members, everyone overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.
We are a volunteer organisation, and don't have a big company behind us that can throw money at the problems. Strength and weakness. Across society as a whole, the number of volunteers is decreasing.
Root causes:
- We failed to scale down in time in our IT landscape and usage.
- We have no clean role descriptions, team descriptions, we can't ask a minimum effort per week or month.
- The trend is more communication channels, platforms to join and promote yourself, apps to use.
Overview of what we have to keep running as admin team:
- Support main development process: GitHub, CI/CD, Jenkins main and runners, dist.plone.org.
- Main communication, documentation: plone.org, docs.plone.org, training.plone.org, conf and country sites, Matomo.
- Community office automation: Google Docs, Workspace, Quaive, Signal, Slack
- Broader: Discourse and Discord
The first two are really needed, the second we already have some problems with.
Some services are self hosted, but also a lot of SAAS services/platforms. In all, it is quite a bit.
The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.
There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.
On GitHub we have a sponsored OSS plan. So we have extra features for free, but it is not nearly enough. User management: hard to get people out. You can't contact your members directly; e-mail addresses have been removed, for privacy. Features get added on GitHub, and there is no complete changelog.
Challenge on GitHub: we have public repositories, but we also have our deployments in there. Only private repositories would be really secure; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Auditing is available for only 6 months. A simple question like "who has been active for the last 2 years?" - no, can't do.
Some actionable items on GitHub:
- We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
- Cleanup users, use Contributors team, Developers
- Active members: check who has contributed the last years.
- There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
- More fine grained teams to control repository access.
- Use of GitHub Discussions for some central communication of changes.
- Use project management better.
- The elephant in the room that we have practice on this year, and ongoing: the Collective organisation. This was free for all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock down some things there.
- Keep deployments and the secrets all out of GitHub, so no secrets can be stolen.
Google Workspace:
- We are dependent on this.
- No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
- Spam and moderation issues
- We could move to Google docs for all kinds of things. Use Google workspace drives for all things. But the Drive UI is a mess, so docs can be in your personal account without you realizing it.
User management:
- We need separate standalone user management, but implementation is not clear.
- We cannot contact our members one on one.
Oh yes, Plone websites:
- upgrade plone.org
- self preservation: I know what needs to be done, and can do it, but have no time, focusing on the previous points instead.
22 Jan 2026 9:43am GMT
05 Jan 2026
Planet Twisted
Glyph Lefkowitz: How To Argue With Me About AI, If You Must
As you already know if you've read any of this blog in the last few years, I am a somewhat reluctant - but nevertheless quite staunch - critic of LLMs. This means that I have enthusiasts of varying degrees sometimes taking issue with my stance.
It seems that I am not going to get away from discussions, and, let's be honest, pretty intense arguments about "AI" any time soon. These arguments are starting to make me quite upset. So it might be time to set some rules of engagement.
I've written about all of these before at greater length, but this is a short post because it's not about the technology or making a broader point, it's about me. These are rules for engaging with me, personally, on this topic. Others are welcome to adopt these rules if they so wish but I am not encouraging anyone to do so.
Thus, I've made this post as short as I can so everyone interested in engaging can read the whole thing. If you can't make it through to the end, then please just follow Rule Zero.
Rule Zero: Maybe Don't
You are welcome to ignore me. You can think my take is stupid and I can think yours is. We don't have to get into an Internet Fight about it; we can even remain friends. You do not need to instigate an argument with me at all, if you think that my analysis is so bad that it doesn't require rebutting.
Rule One: No 'Just'
As I explained in a post with perhaps the least-predictive title I've ever written, "I Think I'm Done Thinking About genAI For Now", I've already heard a bunch of bad arguments. Don't tell me to 'just' use a better model, use an agentic tool, use a more recent version, or use some prompting trick that you personally believe works better. If you skim my work and think that I must not have deeply researched anything or read about it because you don't like my conclusion, that is wrong.
Rule Two: No 'Look At This Cool Thing'
Purely as a productivity tool, I have had a terrible experience with genAI. Perhaps you have had a great one. Neat. That's great for you. As I explained at great length in "The Futzing Fraction", my concern with generative AI is that I believe it probably has a net negative impact on productivity, based on both my experience and plenty of citations. Go check out the copious footnotes if you're interested in more detail.
Therefore, I have already acknowledged that you can get an LLM to do various impressive, cool things, sometimes. If I tell you that you will, on average, lose money betting on a slot machine, a picture of a slot machine hitting a jackpot is not evidence against my position.
Rule Two And A Half: Engage In Metacognition
I specifically didn't title the previous rule "no anecdotes" because data beyond anecdotes may be extremely expensive to produce. I don't want to say you can never talk to me unless you're doing a randomized controlled trial. However, if you are going to tell me an anecdote about the way that you're using an LLM, I am interested in hearing how you are compensating for the well-documented biases that LLM use tends to induce. Try to measure what you can.
Rule Three: Do Not Cite The Deep Magic To Me
As I explained in "A Grand Unified Theory of the AI Hype Cycle", I already know quite a bit of history of the "AI" label. If you are tempted to tell me something about how "AI" is really such a broad field, and it doesn't just mean LLMs, especially if you are trying to launder the reputation of LLMs under the banner of jumbling them together with other things that have been called "AI", I assure you that this will not be convincing to me.
Rule Four: Ethics Are Not Optional
I have made several arguments in my previous writing: there are ethical arguments, efficacy arguments, structuralist arguments, efficiency arguments and aesthetic arguments.
I am happy to, for the purposes of a good-faith discussion, focus on a specific set of concerns or an individual point that you want to make where you think I got something wrong. If you convince me that I am entirely incorrect about the effectiveness or predictability of LLMs in general or of a specific LLM product, you don't need to make a comprehensive argument about whether one should use the technology overall. I will even assume that you have your own ethical arguments.
However, if you scoff at the idea that one should have any ethical boundaries at all, and think that there's no reason to care about the overall utilitarian impact of this technology, that it's worth using no matter what else it does as long as it makes you 5% better at your job, that's sociopath behavior.
This includes extreme whataboutism regarding things like the water use of datacenters, other elements of the surveillance technology stack, and so on.
Consequences
These are rules, once again, just for engaging with me. I have no particular power to enact broader sanctions upon you, nor would I be inclined to do so if I could. However, if you can't stay within these basic parameters and you insist upon continuing to direct messages to me about this topic, I will summarily block you with no warning, on mastodon, email, GitHub, IRC, or wherever else you're choosing to do that. This is for your benefit as well: such a discussion will not be a productive use of either of our time.
05 Jan 2026 5:22am GMT
02 Jan 2026
Planet Twisted
Glyph Lefkowitz: The Next Thing Will Not Be Big
The dawning of a new year is an opportune moment to contemplate what has transpired in the old year, and consider what is likely to happen in the new one.
Today, I'd like to contemplate that contemplation itself.
The 20th century was an era characterized by rapidly accelerating change in technology and industry, creating shorter and shorter cultural cycles of changes in lifestyles. Thus far, the 21st century seems to be following that trend, at least in its recently concluded first quarter.
The early half of the twentieth century saw the massive disruption caused by electrification, radio, motion pictures, and then television.
In 1971, Intel poured gasoline on that fire by releasing the 4004, a microchip generally recognized as the first general-purpose microprocessor. Popular innovations rapidly followed: the computerized cash register, the personal computer, credit cards, cellular phones, text messaging, the Internet, the web, online games, mass surveillance, app stores, social media.
These innovations have arrived faster than those of previous generations, but they have also crossed a crucial threshold: that of the human lifespan.
While the entire second millennium A.D. has been characterized by a gradually accelerating rate of technological and social change - the printing press and the industrial revolution were no slouches, in terms of changing society, and those predate the 20th century - most of those changes had the benefit of unfolding throughout the course of a generation or so.
Which means that any individual person in any given century up to the 20th might remember one major world-altering social shift within their lifetime, not five to ten of them. The diversity of human experience is vast, but most people would not expect that the defining technology of their lifetime was merely the latest in a progression of predictable civilization-shattering marvels.
Along with each of these successive generations of technology, we minted a new generation of industry titans. Westinghouse, Carnegie, Sarnoff, Edison, Ford, Hughes, Gates, Jobs, Zuckerberg, Musk. Not just individual rich people, but entire new classes of rich people that did not exist before. "Radio DJ", "Movie Star", "Rock Star", "Dot Com Founder", were all new paths to wealth opened (and closed) by specific technologies. While most of these people did come from at least some level of generational wealth, they no longer came from a literal hereditary aristocracy.
To describe this new feeling of constant acceleration, a new phrase was coined: "The Next Big Thing". In addition to denoting that some Thing was coming and that it would be Big (i.e.: that it would change a lot about our lives), this phrase also carries the strong implication that such a Thing would be a product. Not a development in social relationships or a shift in cultural values, but some new and amazing form of conveying salted meat paste or what-have-you, that would make whatever lucky tinkerer who stumbled into it into a billionaire - along with any friends and family lucky enough to believe in their vision and get in on the ground floor with an investment.
In the latter part of the 20th century, our entire model of capital allocation shifted to account for this widespread belief. No longer were mega-businesses built by bank loans, stock issuances, and reinvestment of profit; the new model was "Venture Capital". Venture capital is a model of capital allocation explicitly predicated on the idea that carefully considering each bet on a likely-to-succeed business and reducing one's risk was a waste of time, because the return on the equity from the Next Big Thing would be so disproportionately huge - 10x, 100x, 1000x - that one could afford to make at least 10 bad bets for each good one, and still come out ahead.
The biggest risk was in missing the deal, not in giving a bunch of money to a scam. Thus, value investing and focus on fundamentals have been broadly disregarded in favor of the pursuit of the Next Big Thing.
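To make that arithmetic concrete, here is a toy calculation. The numbers are invented for illustration - the 100x multiple and the ten-losers-per-winner ratio are simply the figures from the paragraph above, not real fund data - but they show why the model tolerates so many losses: even if ten of eleven equal-sized bets go to zero, a single 100x winner leaves the portfolio far ahead.

```python
# Toy venture-portfolio arithmetic: invented numbers, not real fund data.
# Eleven equal bets: ten go to zero, one returns 100x its investment.
bets = [0] * 10 + [100]      # multiple-of-investment outcome for each bet
invested = float(len(bets))  # 11 units of capital deployed
returned = float(sum(bets))  # 100 units of capital returned

portfolio_multiple = returned / invested
print(f"portfolio multiple: {portfolio_multiple:.2f}x")  # ~9.09x overall
```

Under these assumptions the fund roughly 9x's its money despite a 91% failure rate, which is why the dominant fear becomes missing the winner rather than backing a loser.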
If Americans of the twentieth century were temporarily embarrassed millionaires, those of the twenty-first are all temporarily embarrassed FAANG CEOs.
The predicament that this tendency leaves us in today is that the world is increasingly run by generations - GenX and Millennials - with the shared experience that the computer industry, either hardware or software, would produce some radical innovation every few years. We assume that to be true.
But all things change, even change itself, and that industry is beginning to slow down. Physically, transistor density is starting to brush up against physical limits. Economically, most people are drowning in more compute power than they know what to do with anyway. Users already have most of what they need from the Internet.
The big new feature in every operating system is a bunch of useless junk that nobody really wants, and it is seeing remarkably little uptake. Social media and smartphones changed the world, true, but… those are both innovations from 2008. They're just not new any more.
So we are all - collectively, culturally - looking for the Next Big Thing, and we keep not finding it.
It wasn't 3D printing. It wasn't crowdfunding. It wasn't smart watches. It wasn't VR. It wasn't the Metaverse, it wasn't Bitcoin, it wasn't NFTs[1].
It's also not AI, but this is why so many people assume that it will be AI. Because it's got to be something, right? If it's got to be something, then AI is as good a guess as anything else right now.
The fact is, our lifetimes have been an extreme anomaly. Things like the Internet used to come along every thousand years or so, and while we might expect that the pace will stay a bit higher than that, it is not reasonable to expect that something new like "personal computers" or "the Internet"[3] will arrive again.
We are not going to get rich by getting in on the ground floor of the next Apple or the next Google because the next Apple and the next Google are Apple and Google. The industry is maturing. Software technology, computer technology, and internet technology are all maturing.
There Will Be Next Things
Research and development is happening in all fields all the time. Amazing new developments quietly and regularly occur in pharmaceuticals and in materials science. But these are not predictable. They do not inhabit the public consciousness until they've already happened, and they are rarely so profound and transformative that they change everybody's life.
There will even be new things in the computer industry, both software and hardware. Foldable phones do address a real problem (I wish the screen were even bigger but I don't want to carry around such a big device), and would probably be more popular if they got the costs under control. One day somebody's going to crack the problem of volumetric displays, probably. Some VR product will probably, eventually, hit a more realistic price/performance ratio where the niche will expand at least a little more.
Maybe there will even be something genuinely useful, which is recognizably adjacent to the current "AI" fad, but if it is, it will be some new development that we haven't seen yet. If current AI technology were sufficient to drive some interesting product, it would already be doing it, not using marketing disguised as science to conceal diminishing returns on current investments.
But They Will Not Be Big
The impulse to find the One Big Thing that will dominate the next five years is a fool's errand. Incremental gains are diminishing across the board. The markets for time and attention[2] are largely saturated. There's no need for another streaming service if 100% of your leisure time is already committed to TikTok, YouTube and Netflix; famously, Netflix has already considered sleep its primary competitor for close to a decade - years before the pandemic.
Those rare tech markets which aren't saturated are suffering from pedestrian economic problems like wealth inequality, not technological bottlenecks.
For example, the thing preventing the development of a robot that can do your laundry and your dishes without your input is not necessarily that we couldn't build something like that, but that most households just can't afford it without wage growth catching up to productivity growth. It doesn't make sense for anyone to commit to the substantial R&D investment that such a thing would take, if the market doesn't exist because the average worker isn't paid enough to afford it on top of all the other tech which is already required to exist in society.
The projected income from the tiny, wealthy sliver of the population who could pay for the hardware cannot justify an investment in the software past a fake version remotely operated by workers in the global south, only made possible by Internet wage arbitrage, i.e. a more palatable, modern version of indentured servitude.
Even if we were to accept the premise of an actually-"AI" version of this, that is still just a wish that ChatGPT could somehow improve enough behind the scenes to replace that worker, not any substantive investment in a novel, proprietary-to-the-chores-robot software system which could reliably perform specific functions.
What, Then?
The expectation for, and lack of, a "big thing" is a big problem. There are others who could describe its economic, political, and financial dimensions better than I can. So then let me speak to my expertise and my audience: open source software developers.
When I began my own involvement with open source, a big part of the draw for me was participating in a low-cost (to the corporate developer) but high-value (to society at large) positive externality. None of my employers would ever have cared about many of the applications for which Twisted forms a core bit of infrastructure; nor would I have been able to predict those applications' existence. Yet, it is nice to have contributed to their development, even a little bit.
However, it's not actually a positive externality if the public at large can't directly benefit from it.
When real world-changing, disruptive developments are occurring, the bean-counters are not watching positive externalities too closely. As we discovered with many of the other benefits that temporarily accrued to labor in the tech economy, Open Source that is usable by individuals and small companies may have been a ZIRP. If you know you're gonna make a billion dollars you're not going to worry about giving away a few hundred thousand here and there.
When gains are smaller and harder to realize, and margins are starting to get squeezed, it's harder to justify the investment in vaguely good vibes.
But this, itself, is not a call to action. I doubt very much that anyone reading this can do anything about the macroeconomic reality of higher interest rates. The technological reality of "development is happening slower" is inherently something that you can't change on purpose.
However, what we can do is to be aware of this trend in our own work.
Fight Scale Creep
It seems to me that more and more open source infrastructure projects are tools for hyper-scale application development, only relevant to massive cloud companies. This is just a subjective assessment on my part - I'm not sure what tools even exist today to measure this empirically - but I remember a big part of the open source community when I was younger being things like Inkscape, Themes.Org and Slashdot, not React, Docker Hub and Hacker News.
This is not to say that the hobbyist world no longer exists. There is of course a ton of stuff going on with Raspberry Pi, Home Assistant, OwnCloud, and so on. If anything there's a bit of a resurgence of self-hosting. But the interests of self-hosters and corporate developers are growing apart; there seems to be far less of a beneficial overflow from corporate infrastructure projects into these enthusiast or prosumer communities.
This is the concrete call to action: if you are employed in any capacity as an open source maintainer, dedicate more energy to medium- or small-scale open source projects.
If your assumption is that you will eventually reach a hyper-scale inflection point, then mimicking Facebook and Netflix is likely to be a good idea. However, if we can all admit to ourselves that we're not going to achieve a trillion-dollar valuation and a hundred thousand engineer headcount, we can begin to consider ways to make our Next Thing a bit smaller, and to accommodate the world as it is rather than as we wish it would be.
Be Prepared to Scale Down
Here are some design guidelines you might consider, for just about any open source project, particularly infrastructure ones:
- Don't assume that your software can sustain an arbitrarily large fixed overhead because "you just pay that cost once" and you're going to be running a billion instances so it will always amortize; maybe you're only going to be running ten.
- Remember that such fixed overhead includes not just CPU, RAM, and filesystem storage, but also the learning curve for developers. Front-loading a massive amount of conceptual complexity to accommodate the problems of hyper-scalers is a common mistake. Try to smooth out these complexities and introduce them only when necessary.
- Test your code on edge devices. This means supporting Windows and macOS, and even Android and iOS. If you want your tool to help empower individual users, you will need to meet them where they are, which is not on an EC2 instance.
- This includes considering Desktop Linux as a platform, as opposed to Server Linux as a platform; while the two certainly have plenty in common, they are also distinct in some details. Consider the highly specific example of secret storage: if you are writing something that intends to live in a cloud environment, and you need to configure it with a secret, you will probably want to provide it via a text file or an environment variable. By contrast, if you want this same code to run on a desktop system, your users will expect you to support the Secret Service. This will likely only require a few lines of code to accommodate, but it is a massive difference to the user experience.
- Don't rely on LLMs remaining cheap or free. If you have LLM-related features[4], make sure that they are sufficiently severable from the rest of your offering that if ChatGPT starts costing $1000 a month, your tool doesn't break completely. Similarly, do not require that your users have easy access to half a terabyte of VRAM and a rack full of 5090s in order to run a local model.
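The secret-storage point really can be a few lines. The sketch below is illustrative rather than any particular project's API: the function name, the parameter names, and the "myapp" service name are all placeholders, and it assumes the third-party `keyring` package (which, on desktop Linux, speaks to the freedesktop Secret Service) as the desktop backend.

```python
import os
from typing import Optional


def load_secret(name: str, env_var: str) -> Optional[str]:
    """Illustrative lookup: server convention first, desktop convention second.

    `name`, `env_var`, and the "myapp" service string below are hypothetical
    placeholders for this sketch, not taken from any real project.
    """
    # Cloud/server convention: an environment variable wins if it is set.
    value = os.environ.get(env_var)
    if value:
        return value

    # Desktop convention: ask the platform secret store via the third-party
    # `keyring` package, if it happens to be installed. On desktop Linux,
    # keyring talks to the freedesktop Secret Service over D-Bus.
    try:
        import keyring  # optional dependency; not part of the stdlib
    except ImportError:
        return None
    try:
        return keyring.get_password("myapp", name)
    except Exception:
        # No usable backend (e.g. a headless server): treat as "not found".
        return None
```

The point is how small the accommodation is: one optional import and one fallback branch cover both deployment styles, and a server deployment that never installs `keyring` pays nothing for the desktop path.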
Even if you were going to scale up to infinity, the ability to scale down and consider smaller deployments means that you can run more comfortably on, for example, a developer's laptop. So even if you can't convince your employer that this is where the economy and the future of technology in our lifetimes is going, it can be easy enough to justify this sort of design shift, particularly as individual choices. Make your onboarding cheaper, your development feedback loops tighter, and your systems generally more resilient to economic headwinds.
So, please design your open source libraries, applications, and services to run on smaller devices, with less complexity. It will be worth your time as well as your users'.
But if you can fix the whole wealth inequality thing, do that first.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
1. These sorts of lists are pretty funny reads, in retrospect. ↩
2. Which is to say, "distraction". ↩
3. ... or even their lesser-but-still-profound aftershocks like "Social Media", "Smartphones", or "On-Demand Streaming Video" ... secondary manifestations of the underlying innovation of a packet-switched global digital network ... ↩
4. My preference would of course be that you just didn't have such features at all, but perhaps even if you agree with me, you are part of an organization with some mandate to implement LLM stuff. Just try not to wrap the chain of this anchor all the way around your code's neck. ↩
02 Jan 2026 1:59am GMT
