22 Mar 2026
Planet Python
Tryton News: Release 1.7.0 of python-sql
We are proud to announce the release of version 1.7.0 of python-sql.
python-sql is a library to write SQL queries in a pythonic way. It is mainly developed for Tryton but it has no external dependencies and is agnostic to any framework or SQL database.
In addition to bug-fixes, this release contains the following improvements:
- Upgrade to pyproject
- Add support for array operators
- Remove the parentheses around the unary and binary operators
- Use the ordinal number as aliases for GROUP BY
- Check the coherence of the aliases of GROUP BY and ORDER BY expressions
- Do not use parameter for EXTRACT field
- Remove support for Python older than 3.9
python-sql is available on PyPI: python-sql 1.7.0.
22 Mar 2026 9:18am GMT
Antonio Cuni: My first OSS commit turns 20 today
Some time ago I realized that it was 20 years since I started to contribute to Open Source. It's easy to remember, because I started to work on PyPy as part of my master's thesis and I graduated in 2006.
So, I did a bit of archeology to find the first commit:
$ cd ~/pypy/pypy && git show 1a086d45d9 --no-patch
commit 1a086d45d9
Author: Antonio Cuni <anto.cuni@gmail.com>
Date:   Wed Mar 22 14:01:42 2006 +0000

    Initial commit of the CLI backend

Note: svn, hg, git

Funny thing, the original commit was not in `git`, which was just a few months old at the time. In 2006 PyPy was using `subversion`, then a few years later [migrated to mercurial](/2010/12/14/pypy-migrates-to-mercurial/), and many years later [migrated to git](https://pypy.org/posts/2023/12/pypy-moved-to-git-github.html). I managed to find traces of the original `svn` commit in the archives of the [pypy-svn](https://marc.info/?l=pypy-svn&m=118495688023240) mailing list.
22 Mar 2026 12:22am GMT
21 Mar 2026
Django community aggregator: Community blog posts
Human.json
I have seen more and more people talk about human.json lately and I think it is a pretty neat idea. From what I can tell it checks all the boxes I would expect from a protocol like this.
The fact that it relies on browser extensions right now makes sense, but it might become a limiting factor in the future. Or the number of extensions needs to grow beyond the two easy ones and come to mobile as well. I am not sure this will go anywhere beyond a few enthusiastic people, but you never know.
Implementing the protocol was not much work, which is expected considering it only consists of two required values and an optional list of two more values. If you want to add it to your Django based site, I packaged everything up and you can find it on PyPI.
Should you use the package? Eh, that is not an easy question. From a supply chain perspective I would say "no". It is only a few lines of code. But you never know how the protocol will evolve, so things might look more complicated in a month. I will do my best to keep up with the protocol and not ship crypto miners.
I am still not a fan of Python packaging, but I have to admit uv makes it kind of bearable, though it is still not without its little gotchas.
21 Mar 2026 5:05pm GMT
Wagtail Routable Pages and Layout Configuration

If you are familiar with Wagtail CMS for Django, you know that you can create Wagtail pages and control their content and layout with blocks inside stream fields. But what if you have entries coming from normal Django models through a routable page? In this article, I will explore how to control the dynamic layout of a detail view in a routable page.
Routable pages in Wagtail are dynamic pages of your CMS page tree that can have their own URL subpaths and views. You can use them for filtered list and detail views, multi-step forms, multiple formats for the same data, and so on. Here I will show you a routable ArticleIndexPage with list and detail views for Article instances, rendering the detail view based on the block layout in a detail_layout stream field.

1. Project Setup
Create a Wagtail project myproject and articles app:
pip install wagtail
wagtail start myproject
cd myproject
python manage.py startapp articles
Add to INSTALLED_APPS in your Django project settings:
INSTALLED_APPS = [
    ...
    "wagtail.contrib.routable_page",  # required for RoutablePage
    "myproject.apps.articles",
]
2. File Structure
The articles app:
myproject/apps/articles/
├── __init__.py
├── apps.py
├── models.py # Article, Category, ArticleIndexPage
├── blocks.py # All StreamField block definitions
└── admin.py # Register Article and Category in Django admin
The articles templates:
myproject/templates/articles/
├── article_list.html # List view
├── article_detail.html # Detail view
└── blocks/
├── cover_image_block.html
├── description_block.html
└── related_articles_block.html
3. Models
myproject/apps/articles/models.py
Create the Category and Article Django models, and the ArticleIndexPage routable Wagtail page with article list and detail views:
from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.db import models
from django.shortcuts import get_object_or_404
from django.utils.translation import gettext_lazy as _
from wagtail.admin.panels import FieldPanel, ObjectList, TabbedInterface
from wagtail.contrib.routable_page.models import RoutablePageMixin, path
from wagtail.fields import StreamField
from wagtail.models import Page

from .blocks import article_detail_layout_blocks


class Category(models.Model):
    name = models.CharField(max_length=100, verbose_name=_("name"))
    slug = models.SlugField(unique=True, verbose_name=_("slug"))

    class Meta:
        verbose_name = _("category")
        verbose_name_plural = _("categories")

    def __str__(self):
        return self.name


class Article(models.Model):
    title = models.CharField(max_length=255, verbose_name=_("title"))
    slug = models.SlugField(unique=True, verbose_name=_("slug"))
    category = models.ForeignKey(
        Category,
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="articles",
        verbose_name=_("category"),
    )
    cover_image = models.ForeignKey(
        "wagtailimages.Image",
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="+",
        verbose_name=_("cover image"),
    )
    description = models.TextField(blank=True, verbose_name=_("description"))
    created_at = models.DateTimeField(auto_now_add=True, verbose_name=_("created at"))

    class Meta:
        verbose_name = _("article")
        verbose_name_plural = _("articles")

    def __str__(self):
        return self.title


class ArticleIndexPage(RoutablePageMixin, Page):
    """
    A single Wagtail page that owns:
    - /articles/ → paginated list of all Articles
    - /articles/<slug>/ → detail view for one Article
    The StreamField is edited once in the Wagtail admin and
    defines the layout for every detail view.
    """
    articles_per_page = models.IntegerField(default=10, verbose_name=_("articles per page"))
    detail_layout = StreamField(
        article_detail_layout_blocks(),
        blank=True,
        use_json_field=True,
        verbose_name=_("detail layout"),
        help_text=_(
            "Configure the layout for all article detail pages. "
            "Add, remove, and reorder blocks to change what appears "
            "on every article detail view."
        ),
    )

    # TabbedInterface gives List View and Detail View their own tabs.
    # promote_panels and settings_panels must be added explicitly here
    # because edit_handler takes full ownership of the admin UI structure.
    edit_handler = TabbedInterface([
        ObjectList(Page.content_panels + [FieldPanel("articles_per_page")], heading=_("List View")),
        ObjectList([FieldPanel("detail_layout")], heading=_("Detail View")),
        ObjectList(Page.promote_panels, heading=_("SEO / Promote")),
        ObjectList(Page.settings_panels, heading=_("Settings")),
    ])

    class Meta:
        verbose_name = _("article index page")
        verbose_name_plural = _("article index pages")

    @path("")
    def article_list(self, request):
        all_articles = Article.objects.select_related("category", "cover_image").order_by("-created_at")
        paginator = Paginator(all_articles, self.articles_per_page)
        page_number = request.GET.get("page")
        try:
            articles = paginator.page(page_number)
        except PageNotAnInteger:
            articles = paginator.page(1)
        except EmptyPage:
            articles = paginator.page(paginator.num_pages)
        return self.render(
            request,
            context_overrides={"articles": articles, "paginator": paginator},
            template="articles/article_list.html",
        )

    @path("<slug:article_slug>/")
    def article_detail(self, request, article_slug):
        article = get_object_or_404(
            Article.objects.select_related("category", "cover_image"),
            slug=article_slug,
        )
        return self.render(
            request,
            context_overrides={"article": article},
            template="articles/article_detail.html",
        )
4. StreamField Blocks
myproject/apps/articles/blocks.py
Create Wagtail stream-field blocks for the cover image, description, and related articles of a given article. Each block carries settings that control how its content is presented.
from django.utils.translation import gettext_lazy as _
from wagtail import blocks


class CoverImageBlock(blocks.StructBlock):
    aspect_ratio = blocks.ChoiceBlock(
        choices=[
            ("16-9", _("16:9 Widescreen")),
            ("4-3", _("4:3 Standard")),
            ("1-1", _("1:1 Square")),
            ("3-1", _("3:1 Banner")),
        ],
        default="16-9",
        label=_("Aspect ratio"),
        help_text=_("Controls the cropping of the cover image."),
    )

    class Meta:
        template = "articles/blocks/cover_image_block.html"
        icon = "image"
        label = _("Cover Image")


class DescriptionBlock(blocks.StructBlock):
    max_lines = blocks.IntegerBlock(
        min_value=0,
        default=0,
        label=_("Maximum lines"),
        help_text=_("Clamp the description to this many lines. Set to 0 to show all."),
        required=False,
    )

    class Meta:
        template = "articles/blocks/description_block.html"
        icon = "pilcrow"
        label = _("Description")


class RelatedArticlesBlock(blocks.StructBlock):
    sort_order = blocks.ChoiceBlock(
        choices=[
            ("newest", _("Newest first")),
            ("oldest", _("Oldest first")),
            ("title_asc", _("Title A → Z")),
            ("title_desc", _("Title Z → A")),
        ],
        default="newest",
        label=_("Sort order"),
        help_text=_("Order in which related articles are listed."),
    )

    def get_context(self, value, parent_context=None):
        context = super().get_context(value, parent_context=parent_context)
        article = (parent_context or {}).get("article")
        if not article or not article.category_id:
            context["related_articles"] = []
            return context
        from .models import Article
        sort_map = {
            "newest": "-created_at",
            "oldest": "created_at",
            "title_asc": "title",
            "title_desc": "-title",
        }
        context["related_articles"] = (
            Article.objects.select_related("category", "cover_image")
            .filter(category=article.category)
            .exclude(pk=article.pk)
            .order_by(sort_map.get(value["sort_order"], "-created_at"))[:3]
        )
        return context

    class Meta:
        template = "articles/blocks/related_articles_block.html"
        icon = "list-ul"
        label = _("Related Articles")


def article_detail_layout_blocks():
    """
    Returns the list of (name, block) tuples used in ArticleIndexPage.detail_layout.
    Defined as a function so models.py can import it without circular issues.
    """
    return [
        ("cover_image", CoverImageBlock()),
        ("description", DescriptionBlock()),
        ("related_articles", RelatedArticlesBlock()),
    ]
The RelatedArticlesBlock also customizes its context: it passes a related_articles variable containing up to three other articles of the same category, sorted by the sort order defined in the block.
5. Templates
articles/article_list.html
This will be the template for the paginated article list. Later you could augment it with a search form and filters.
{% extends "base.html" %}
{% load wagtailcore_tags wagtailimages_tags i18n wagtailroutablepage_tags %}

{% block content %}
  <main class="article-index">
    <h1>{{ page.title }}</h1>
    <ul class="article-list">
      {% for article in articles %}
        <li class="article-card">
          {% if article.cover_image %}
            {% image article.cover_image width-400 as img %}
            <img src="{{ img.url }}" alt="{{ article.title }}">
          {% endif %}
          <h2>
            <a href="{% routablepageurl page "article_detail" article.slug %}">{{ article.title }}</a>
          </h2>
          {% if article.category %}<span class="badge">{{ article.category.name }}</span>{% endif %}
          <p>{{ article.description|truncatewords:30 }}</p>
        </li>
      {% empty %}
        <li>{% trans "No articles yet." %}</li>
      {% endfor %}
    </ul>
    {% if articles.has_other_pages %}
      <nav class="pagination" aria-label="{% trans 'Article pagination' %}">
        {% if articles.has_previous %}
          <a href="?page={{ articles.previous_page_number }}">{% trans "← Previous" %}</a>
        {% endif %}
        <span>{% blocktrans with num=articles.number total=articles.paginator.num_pages %}Page {{ num }} of {{ total }}{% endblocktrans %}</span>
        {% if articles.has_next %}
          <a href="?page={{ articles.next_page_number }}">{% trans "Next →" %}</a>
        {% endif %}
      </nav>
    {% endif %}
  </main>
{% endblock %}
articles/article_detail.html
The detail page uses {% include_block page.detail_layout with article=article page=page %} to pass the article and page into the context of each block:
{% extends "base.html" %}
{% load i18n wagtailcore_tags wagtailroutablepage_tags %}

{% block content %}
  <article class="article-detail">
    <header>
      <h1>{{ article.title }}</h1>
      {% if article.category %}<span class="badge">{{ article.category.name }}</span>{% endif %}
    </header>

    {% include_block page.detail_layout with article=article page=page %}

    <p>
      <a href="{% routablepageurl page "article_list" %}">{% trans "← Back to all articles" %}</a>
    </p>
  </article>
{% endblock %}
articles/blocks/cover_image_block.html
The cover image block shows the article cover image cropped to the aspect ratio set in the block:
{% load wagtailimages_tags %}

{% if article.cover_image %}
  <div class="cover-image cover-image--{{ value.aspect_ratio }}">
    {% image article.cover_image width-1200 as img %}
    <img src="{{ img.url }}" alt="{{ article.title }}">
  </div>
{% endif %}
articles/blocks/description_block.html
The description block clamps the overflowing description text to the maximum number of lines set in the block (this assumes a line-clamp CSS class that also sets display: -webkit-box, -webkit-box-orient: vertical, and overflow: hidden):
<section class="article-description">
  <p{% if value.max_lines > 0 %} class="line-clamp" style="-webkit-line-clamp: {{ value.max_lines }};"{% endif %}>
    {{ article.description }}
  </p>
</section>
articles/blocks/related_articles_block.html
The related articles block lists the related articles provided by the block's extra context:
{% load i18n wagtailimages_tags wagtailroutablepage_tags %}

{% if related_articles %}
  <section class="related-articles">
    <h2>{% trans "Related Articles" %}</h2>
    <ul class="related-articles__list">
      {% for rel in related_articles %}
        <li class="related-card">
          {% if rel.cover_image %}
            {% image rel.cover_image width-400 as img %}
            <img src="{{ img.url }}" alt="{{ rel.title }}">
          {% endif %}
          <div class="related-card__body">
            {% if rel.category %}<span class="badge">{{ rel.category.name }}</span>{% endif %}
            <h3>
              <a href="{% routablepageurl page "article_detail" rel.slug %}">{{ rel.title }}</a>
            </h3>
            <p>{{ rel.description|truncatewords:20 }}</p>
          </div>
        </li>
      {% endfor %}
    </ul>
  </section>
{% endif %}
6. Django Admin Registration
articles/admin.py
Let's not forget to register admin views for the categories and articles so that we can add some data there:
from django.contrib import admin

from .models import Article, Category


@admin.register(Category)
class CategoryAdmin(admin.ModelAdmin):
    list_display = ("name", "slug")
    prepopulated_fields = {"slug": ("name",)}


@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ("title", "category", "created_at")
    list_filter = ("category",)
    search_fields = ("title", "description")
    prepopulated_fields = {"slug": ("title",)}
7. Migrations and Initial Data
python manage.py makemigrations articles
python manage.py migrate
python manage.py createsuperuser
python manage.py runserver
8. Wagtail Admin Setup
- Open http://localhost:8000/cms/ and log in.
- In the Pages explorer, create an Article Index Page as a child of the root page.
- Set the Slug to articles.
- On the List View tab, set Articles per page (e.g. 24).
- On the Detail View tab, open the Detail Layout StreamField and add blocks in your preferred order:
  - Cover Image - choose an aspect ratio.
  - Description - optionally set a maximum line count to clamp long descriptions.
  - Related Articles - choose the sort order for the three related articles shown.
- Publish the page.
- In the Django admin (/django-admin/), create some Categories and Articles with cover images and descriptions.
- Visit http://localhost:8000/articles/ for the paginated list.
- Click any article to see the detail view rendered using the StreamField layout you configured in step 4.
Final words
Using stream fields we can render not only editorial content, for example, images or rich-text descriptions, but also dynamic content based on values from other models and/or the context of the given template.
The approach illustrated in this article allows us to create Wagtail pages where content editors have freedom to adjust the layouts of the pages or insert blocks, such as ads or info texts, into specific places based on real-time events.
21 Mar 2026 5:00pm GMT
20 Mar 2026
Django community aggregator: Community blog posts
How to Show a Waitlist Until Your Wagtail Site Is Ready

This year, I want to bring my centralized gamified donation platform www.make-impact.org to life (at least technically). Earlier, the version I was developing was separate from the waiting list, but I decided to merge them and add a switch between the waitlist and an early preview.
This avoids data duplication, lets me create user accounts immediately, and saves hosting and maintenance costs.
This guide walks through a pattern that lets you ship a temporary waitlist page while your Wagtail site is still being built, with the ability to show your progress to chosen people. If you are building a Software as a Service (SaaS) or a web platform with Django, this article is for you.
The Concept
A custom start page view will check for a specific cookie value. If it is unset, the visitor will be redirected to a waitlist form at /waitlist/. If it is set, the visitor will be served the Wagtail home page.
All views under development will have a decorator that checks the cookie value and redirects to the start page if it is unset.
There will be a special view at /preview-access/ with a passphrase form that allows the visitor to gain preview access by setting the mentioned cookie. This view will also allow preview access to be deactivated.
These are the steps to implement this:
1. Generate and store two secrets
You will need two secret values, either set manually or generated with a cryptographically secure random generator (e.g. Python's secrets module):
- PREVIEW_ACCESS_PASSPHRASE - the human-readable passphrase typed into the form. Share this with the people who need site access.
- PREVIEW_ACCESS_TOKEN - the opaque random value stored in the cookie. It is never exposed to users; only the server compares against it.
>>> import secrets
>>> print(secrets.token_urlsafe(16)) # passphrase
dI5nGNftZOBx8m-r0m6glg
>>> print(secrets.token_hex(32)) # cookie token
c1b7a76e3ad5cbfb1657fa4e9885a3c8baa6a5a869f49a136abd0e873a9be9ee
Add both to environment variables or a secrets file untracked by Git, and load them in the Django project settings:
# myproject/settings/_base.py
PREVIEW_ACCESS_PASSPHRASE = get_secret("PREVIEW_ACCESS_PASSPHRASE")
PREVIEW_ACCESS_TOKEN = get_secret("PREVIEW_ACCESS_TOKEN")
The get_secret() here is my custom function to retrieve a secret from the secrets source.
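For reference, here is a minimal sketch of what such a helper could look like when reading from environment variables; the name get_secret and its exact behavior are assumptions for illustration, not the author's actual implementation:

```python
import os

def get_secret(name, default=None):
    """Read a secret from the environment; fail loudly when it is missing."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Missing required secret: {name}")
    return value
```

Failing loudly at import time is deliberate: a misconfigured deployment should crash on startup rather than silently run with an empty passphrase or token.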
2. Create the access-control decorator
Create myproject/apps/misc/decorators.py. Every protected view will import from here.
# myproject/apps/misc/decorators.py
from functools import wraps

from django.conf import settings
from django.shortcuts import redirect


def preview_access_required(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        if request.COOKIES.get("preview_access") == settings.PREVIEW_ACCESS_TOKEN:
            return view_func(request, *args, **kwargs)
        return redirect("misc:home_page")
    return wrapper
The decorator compares the cookie against the opaque unguessable token from settings, so unless the token value is known, a random attacker cannot gain access by setting the cookie manually in DevTools.
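If you want to harden this comparison further, the standard library's secrets.compare_digest runs in constant time, which avoids leaking information about the token through string-comparison timing. A hedged sketch of the same check (the article's version uses a plain ==, which is usually fine for an opaque random token):

```python
import secrets

def tokens_match(provided, expected):
    # Constant-time comparison; a missing cookie is treated as an empty string.
    return secrets.compare_digest(provided or "", expected)
```

In the decorator, this would replace the == check on the cookie value.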
3. Create the passphrase form
Create myproject/apps/misc/forms.py. The form will have a single required password field. Validation will reject any value that does not match the setting.
# myproject/apps/misc/forms.py
from django import forms
from django.conf import settings
from django.utils.translation import gettext_lazy as _


class PreviewAccessForm(forms.Form):
    passphrase = forms.CharField(
        label=_("Passphrase"),
        widget=forms.PasswordInput(
            attrs={"autocomplete": "current-password"}
        ),
        required=True,
    )

    def clean_passphrase(self):
        value = self.cleaned_data["passphrase"]
        if value != settings.PREVIEW_ACCESS_PASSPHRASE:
            raise forms.ValidationError(
                _("Incorrect passphrase.")
            )
        return value
4. Build the cookie toggle view
Point your browser to /preview-access/. When access is off it shows a passphrase form; when access is on it shows a disable button.
# myproject/apps/misc/views.py
from django.conf import settings
from django.shortcuts import redirect, render

from .forms import PreviewAccessForm


def preview_access(request):
    has_access = request.COOKIES.get("preview_access") == settings.PREVIEW_ACCESS_TOKEN
    if request.method == "POST":
        if has_access:
            response = redirect("misc:home_page")
            response.delete_cookie("preview_access")
            return response
        form = PreviewAccessForm(request.POST)
        if form.is_valid():
            response = redirect("misc:home_page")
            response.set_cookie(
                "preview_access",
                settings.PREVIEW_ACCESS_TOKEN,
                httponly=True,
                samesite="Strict",
            )
            return response
    else:
        form = PreviewAccessForm()
    return render(
        request,
        "preview_access/preview_access.html",
        {"has_access": has_access, "form": form},
    )
Key points:
- Disabling never requires the passphrase - the cookie is already proof of prior access.
- The cookie is set with httponly=True (not readable by JavaScript) and samesite="Strict" (not sent on cross-site requests).
- The cookie value is the opaque token, not "1", so it cannot be guessed.
The template renders the passphrase input only when not has_access, and shows field-level errors from the form if the passphrase is wrong.
5. Wrap the Wagtail catch-all with the decorator
Replace the default Wagtail catch-all route handler with a thin wrapper that enforces the same cookie check.
# myproject/apps/misc/views.py
from wagtail.views import serve as wagtail_serve

from myproject.apps.misc.decorators import preview_access_required


@preview_access_required
def serve_wagtail_page(request, path=""):
    return wagtail_serve(request, path)
Without this, a visitor who knows any Wagtail page URL could bypass the gate by typing it directly into the browser.
6. Build the proxy home page view
This view is the only entry point to the site. It decides what every visitor sees first.
# myproject/apps/misc/views.py
from django.conf import settings
from django.shortcuts import redirect
from wagtail.views import serve as wagtail_serve


def home_page(request):
    if request.COOKIES.get("preview_access") == settings.PREVIEW_ACCESS_TOKEN:
        # Serve the Wagtail home page directly.
        return wagtail_serve(request, "")
    # Otherwise, redirect to the waiting list.
    return redirect("waiting_list")
Key point: the waiting_list view and a Wagtail Site and page must exist and be matched to the request domain before wagtail_serve is called.
7. Wire up the URLs
Django project URL rules:
# myproject/urls.py
from django.conf.urls.i18n import i18n_patterns
from django.urls import re_path
from wagtail.coreutils import WAGTAIL_APPEND_SLASH

from myproject.apps.misc import views as misc_views

if WAGTAIL_APPEND_SLASH:
    wagtail_serve_pattern = r"^((?:[\w\-]+/)*)$"
else:
    wagtail_serve_pattern = r"^([\w\-/]*)$"

urlpatterns += i18n_patterns(
    # ... all your other app URLs above ...
    # Catch-all - must be last
    re_path(
        wagtail_serve_pattern,
        misc_views.serve_wagtail_page,
        name="wagtail_serve",
    ),
)
The misc app URLs:
# myproject/apps/misc/urls.py
from django.urls import path

from . import views

app_name = "misc"

urlpatterns = [
    path("", views.home_page, name="home_page"),
    path("preview-access/", views.preview_access, name="preview_access"),
]
The waiting_list app URLs:
# myproject/apps/waiting_list/urls.py
from django.urls import path

from . import views

urlpatterns = [
    path("waitlist/", views.show_waiting_list_form, name="waiting_list"),
]
8. Protect every other app view
Import and apply @preview_access_required to every view that belongs to the real site. Class-based views can be wrapped at assignment time:
from myproject.apps.misc.decorators import preview_access_required


# Function-based view
@preview_access_required
def event_list(request):
    ...


# Class-based view
event_list = preview_access_required(
    EventListView.as_view()
)
Waiting-list views, API views, social authentication views, and static/legal pages (/imprint/, /privacy/, etc.) must not receive this decorator - they need to remain publicly accessible.
Final words
You get a lot of benefits from this setup. The waitlist measures demand for your website while you are still building. Invited test users can evaluate your progress at any time. While you are developing the website, you do not necessarily need multiple servers. Launching later is also easier - no hassle or delays with domain IP updates and SSL certificates.
20 Mar 2026 5:00pm GMT
Planet Python
Real Python: Quiz: Python Decorators 101
In this quiz, you'll test your understanding of Python Decorators 101.
Work through this quiz to review first-class functions, inner functions, and decorators, and learn how to create, reuse, and apply them to extend behavior cleanly in Python.
20 Mar 2026 12:00pm GMT
16 Mar 2026
Planet Twisted
Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control
I have been using macOS Voice Control for about three years. At first it was a way to reduce pain from excessive computer use. It has been a real struggle: decades of typing and mouse habits are hard to overcome!

Text selection and manipulation commands work quite well in macOS-native apps, like apps written in Swift, or in Safari on an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo" (where foo is a word in the text box) may not work at all, and cursor positioning or selection extension can be off by one.

I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text selection method! This is really going to improve my speed.

In the long run, I believe computer voice control in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing and the number of times it misses or misunderstands commands still really hold it back. I have been learning the macOS Voice Control command set for years now and I still reach for the keyboard and mouse way too often.
16 Mar 2026 11:04am GMT
04 Mar 2026
Planet Twisted
Glyph Lefkowitz: What Is Code Review For?
Humans Are Bad At Perceiving
Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.
We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.
Each of these has implications for the fundamental limitations of code review as an engineering practice:
- Inattentional Blindness: you won't be able to reliably find bugs that you're not looking for.
- Repetition Blindness: you won't be able to reliably find bugs that you are looking for, if they keep occurring.
- Vigilance Fatigue: you won't be able to reliably find either kind of bug, if you have to keep being alert to the presence of bugs all the time.
- and, of course, the distinct but related Alert Fatigue: you won't even be able to reliably evaluate reports of possible bugs, if there are too many false positives.
Never Send A Human To Do A Machine's Job
When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate - and, thanks to our old friend "alert fatigue" above - ideally, to also remedy that type of error. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but:
- to catch logical errors, use automated tests.
- to catch formatting errors, use autoformatters.
- to catch common mistakes, use linters.
- to catch common security problems, use a security scanner.
Don't blame reviewers for missing these things.
Code review should not be how you catch bugs.
What Is Code Review For, Then?
Code review is for three things.
First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.
You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.
Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".
Oops, Surprise, This Post Is Actually About LLMs Again
Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. Thus, an important corollary of this understanding that code review is a social activity, is that LLMs are not social actors, thus you cannot rely on code review to inspect their output.
My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.
When you relate to a human colleague, you will expect that:
- you can decide what to focus on based on their level of experience and areas of expertise; from a late-career colleague you might be looking for bad habits held over from legacy programming languages, while from an earlier-career colleague you might be focused more on logical test-coverage gaps,
- and they will learn from repeated interactions, so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it.
With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.
You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.
The LLM also can't really learn. An intuitive response to this problem is to simply continue adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem, known as "context rot", is somewhat fundamental to the nature of the technology.
Thus, code generators must be treated more adversarially than you would treat a human code-review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that will evaluate the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window in the way that a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.
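The deterministic harness described above can be as simple as a table of regression cases that grows every time review catches a mistake. A minimal sketch in Python; the `slugify` function and its cases are hypothetical stand-ins for whatever code the LLM generated:

```python
# Minimal sketch of a deterministic regression harness for generated code.
# `slugify` is a hypothetical function under review, not from the post.

import re

def slugify(title: str) -> str:
    """Hypothetical LLM-generated function being reviewed."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Each time review uncovers a mistake, encode it as a permanent case,
# so the same class of error fails mechanically on the next generation.
REGRESSION_CASES = [
    ("Hello, World!", "hello-world"),        # punctuation handling
    ("  leading spaces", "leading-spaces"),  # whitespace edge case
    ("", ""),                                # empty input, easily missed
]

def run_harness() -> None:
    for given, expected in REGRESSION_CASES:
        actual = slugify(given)
        assert actual == expected, f"slugify({given!r}) = {actual!r}, want {expected!r}"

run_harness()
```

The point is that the table only ever grows: feedback to the model evaporates with its context window, but a case added here fails forever until the generated code actually handles it.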
To Sum Up
Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.
If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure that its agentic loop will fail on its own, immediately, the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.
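One way to make the agentic loop fail on its own is a gate script that runs each quality check and propagates the first nonzero exit code. The specific checks you would plug in (a linter, a type checker, the test suite) are assumptions for illustration, not something the post prescribes. A sketch:

```python
# Sketch of a fail-fast quality gate: run each check in order and stop at
# the first nonzero exit code, so an agentic loop halts mechanically.

import subprocess
import sys

def gate(checks: list[list[str]]) -> int:
    """Run each check; return the first nonzero exit code, else 0."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"quality gate failed: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    return 0

# In a real loop these would be e.g. a linter, a type checker, and the
# deterministic regression harness (tool names are assumptions). Here we
# demonstrate with trivially passing and failing commands:
ok = gate([[sys.executable, "-c", "pass"]])
bad = gate([[sys.executable, "-c", "raise SystemExit(1)"]])
```

Because the gate communicates only through exit codes, there is nothing for the LLM to argue with: the loop either produces code that passes or it stops.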
But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
04 Mar 2026 5:24am GMT
19 Feb 2026
Planet Twisted
Donovan Preston: Wello Horld.
Onovanday Restonpay is going to logbay here again. It's time to take back the rss-source-rss-reader web of links
19 Feb 2026 2:36am GMT
22 Jan 2026
Planet Plone - Where Developers And Integrators Write
Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, and Docker, and, as a game changer, uv, which makes the installation of Python packages much faster.
With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to set up the server and deploy to it. Our sysadmins already had some other scripts, so we needed to integrate those.
First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.
Maik Derstappen showed me copier, yet another templating tool. Our idea: create a cookieplone project, and then use copier to modify it.
What about the deployment? We are on GitLab. We host our own runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This triggers a pipeline to check, test, and build. When it is merged, we bump the version, using release-it.
Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.
For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.
Future improvements:
- Start the docker containers and curl/wget the `/ok` endpoint.
- Lock files for the backend, with pip/uv.
22 Jan 2026 9:43am GMT
Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.
There are several challenges when doing Plone migrations:
- Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
- Complex data structures. For example a Folder with a Link as default page, which pointed to some other content that had meanwhile been moved.
- Migrating Classic UI to Volto
- Also, you might be migrating from a completely different CMS to Plone.
How do we do migrations in Plone in general?
- In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
- Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.
Let's look at export/import, which has three parts:
- Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
- Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
- Load: Transmogrifier, collective.exportimport, plone.exportimport.
Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.
collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.
Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.
Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.
collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.
22 Jan 2026 9:43am GMT
Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.
I am team lead of the Plone Admin team, and work at kitconcept.
The current state: see the keynotes, lots happening on the frontend. Good.
The current state of our IT: very troubling and daunting.
This is not a 'blame game'. But focusing on resources and people should be a first priority at this conference. We are a real volunteer organisation; nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.
The Admin team is 4 senior Plonistas as all-round admins, 2 release managers, 2 CI/CD experts. 3 former board members, everyone overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.
We are a volunteer organisation, and don't have a big company behind us that can throw money at the problems. Strength and weakness. Across society as a whole, it is a problem that volunteer numbers are decreasing.
Root causes:
- We failed to scale down in time in our IT landscape and usage.
- We have no clear role descriptions or team descriptions; we can't ask a minimum effort per week or month.
- The trend is more communication channels, platforms to join and promote yourself, apps to use.
Overview of what we have to keep running as admin team:
- Support main development process: github, CI/CD, Jenkins main and runners, dist.plone.org.
- Main communication, documentation: plone.org, docs.plone.org, training.plone.org, conference and country sites, Matomo.
- Community office automation: Google Docs, Workspace, Quaive, Signal, Slack
- Broader: Discourse and Discord
The first two are really needed; with the second we already have some problems.
Some services are self-hosted, but there are also a lot of SaaS services/platforms. In all, it is quite a bit.
The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.
There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.
On GitHub we have a sponsored OSS plan. So we have extra features for free, but it is not nearly enough. User management: hard to get people out. You can't contact your members directly. E-mail has been removed, for privacy. Features get added on GitHub, with no complete changelog.
Challenge on GitHub: we have public repositories, but we also have our deployments in there. Only really secure would be private repositories; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Auditing is available for only 6 months. A simple question like: who has been active for the last 2 years? No, can't do.
Some actionable items on GitHub:
- We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
- Clean up users; use the Contributors and Developers teams.
- Active members: check who has contributed the last years.
- There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
- More fine grained teams to control repository access.
- Use of GitHub Discussions for some central communication of changes.
- Use project management better.
- The elephant in the room that we have practice on this year, and ongoing: the Collective organisation. This was free for all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock down some things there.
- Keep deployments and the secrets all out of GitHub, so no secrets can be stolen.
Google Workspace:
- We are dependent on this.
- No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
- Spam and moderation issues
- We could move to Google Docs for all kinds of things. Use Google Workspace drives for all things. But the Drive UI is a mess, so docs can be in your personal account without you realizing it.
User management:
- We need separate standalone user management, but implementation is not clear.
- We cannot contact our members one on one.
Oh yes, Plone websites:
- upgrade plone.org
- self-preservation: I know what needs to be done, and can do it, but have no time, focusing on the previous points instead.
22 Jan 2026 9:43am GMT
