22 Apr 2026
Planet Python
Kay Hayen: Nuitka Release 4.0
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, "download now".
This release is a major release with many new features and the long-wanted improvements for scalability of the Python compilation.
Bug Fixes
- Accelerated: The enhanced detection for uninstalled Anaconda and WinPython was not fully working. (Fixed in 2.8.1 already.)
- Onefile: Fixed an issue in DLL mode where signal handlers were not being registered, which could prevent proper program termination on signals like CTRL-C. (Fixed in 2.8.1 already.)
- Windows: Fixed incorrect handling of forward slashes in cache directory paths, which caused issues with Nuitka-Action. (Fixed in 2.8.1 already.)
- UI: The --output-dir option was not being honored in accelerated mode when --output-filename was also provided. (Fixed in 2.8.2 already.)
- UI: The --output-filename option help said it wouldn't work for standalone mode, when in fact it had worked for a while already. (Fixed in 2.8.2 already.)
- Onefile: On Windows, fixed a crash when using --output-dir where it was checking for the wrong folder to exist. (Fixed in 2.8.2 already.)
- macOS: Fixed a crash that could occur when many package-specific directories were used, which could lead to the otool command line being too long. (Fixed in 2.8.2 already.)
- Standalone: For the "Python Build Standalone" flavor, ensured that debug builds correctly recognize all their specific built-in modules, preventing potential errors. (Fixed in 2.8.4 already.)
- macOS: Fixed an issue where $ORIGIN r-paths were set but ended up unused, which in some cases caused errors by exhausting the header space and preventing the build entirely. (Fixed in 2.8.5 already.)
- macOS: Fixed an issue to ensure the system xattr binary is used. Otherwise, using arch -x86_64 python for compilation could fail when some packages are installed that provide xattr as well, because that might be an arm64-only binary and would not work. (Fixed in 2.8.5 already.)
- UI: Fixed a misleading typo in the rejection message for unsupported Python 3.13.4. (Fixed in 2.8.5 already.)
- Accelerated: The runner scripts .cmd or .sh are now also placed respecting the --output-filename and --output-dir options. (Fixed in 2.8.5 already.)
- Plugins: Ensured that plugins detected by namespace usage are also activated in module mode. (Fixed in 2.8.5 already.)
- Standalone: Fixed an issue where non-existent packages listed in top_level.txt files could cause errors during metadata collection. (Fixed in 2.8.6 already.)
- Standalone: Corrected the classification of the site module, which was previously treated as a standard library module in some cases. (Fixed in 2.8.6 already.)
- Windows: Ensured that temporary link libraries and export files created during compilation are properly deleted, preventing them from being included in the standalone distribution. (Fixed in 2.8.6 already.)
- Python 3.14: Adapted to core changes by no longer inlining hacl code for this version. (Fixed in 2.8.6 already.)
- Python 3.14: Follow allocator changes and immortal flags changes.
- Python 3.14: Follow GC changes for compiled frames as well.
- Python 3.14: Catch attempts to clear a compiled suspended frame object.
- Fixed a potential mis-optimization for uses of locals() when transforming the variable name reference call. (Fixed in 2.8.6 already.)
- Module: Fixed pkgutil.iter_modules not working when loading a module into a namespace. (Fixed in 2.8.7 already.)
- Reports: Fixed a crash when creating the compilation report before the source directory is created. (Fixed in 2.8.7 already.)
- Standalone: Fixed ignoring of non-existent packages from top_level.txt for metadata. (Fixed in 2.8.7 already.)
- UI: The --no-progress-bar option was not disabling the Scons progress bars. (Fixed in 2.8.7 already.)
- UI: Fixed an exception in the tqdm progress bar during process shutdown. (Fixed in 2.8.7 already.)
- Windows: Fixed incorrect sys.executable value in onefile DLL mode. (Fixed in 2.8.9 already.)
- Python 3.14: Added missing implicit dependency for _ctypes on Windows. (Fixed in 2.8.9 already.)
- Python 3.13+: Fixed missing export of the PyInterpreter_* API.
- Python 3.14: Adapted to the change in evaluation order of __exit__ and __enter__.
- Multiprocessing: Fixed an issue where sys.argv was not yet corrected when argparse was used early in spawned processes.
- Scons: Fixed an issue where Zig was not used as a fallback when MinGW64 was present but unusable.
- Windows: Made the onefile binary work on systems without runtime DLLs installed as well.
- Scons: Made tracing robust against threaded outputs.
- Python 3.12+: Enhanced the workaround for loading of extension modules with sub-packages to cover more cases.
- Scons: Fixed missing Zig version output.
- Scons: Fixed Zig detection to enforce PATH or CC usage on macOS instead of download, since downloads are not available there.
- UI: Fixed normalization of user paths, improving macOS support for reporting.
- Linux: Fixed the workaround for the memset zero-length warning, which was wrongly applied to Clang. Only GCC requires it, and Clang complained about it.
- Linux: More robust fallback to g++ when gcc is too old for C11 support.
- Compatibility: Fixed a bug where del of a subscript could cause wrong runtime behavior due to missing control flow escape annotations for the subscript value itself and the index.
- macOS: Fixed an issue where Info.plist user-facing entitlements keys mapping to multiple internal entitlements were not handled correctly.
- UI: Ensured tracing uses at least 80 characters for very narrow terminals to maintain readability.
- Compatibility: Fixed an issue where nested loops could have incorrect traces, potentially leading to mis-optimizations.
- Linux: Fixed an issue where _XOPEN_SOURCE was mistakenly appended for Clang, causing warnings.
- Scons: Improved handling of passed variables by detecting None or invalid types earlier.
- Fixed a bug where propagating class dictionaries needed extra micro passes to ensure proper optimization of their traces for the new variables.
- Scons: Fixed an issue with process spawning when using rusage capture.
- Scons: Followed the file-closing behavior of the standard library's communicate more closely to avoid potential hangs.
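For context on that last item: the standard library's Popen.communicate writes any input, closes the child's stdin, and then reads output to EOF, which is the behavior that avoids pipe-related hangs. A minimal illustration (not Nuitka's actual Scons code):

```python
import subprocess
import sys

# communicate() writes the input, closes stdin, and reads stdout to EOF,
# which prevents deadlocks from a child blocked on reads or full buffers.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate("hello")
print(out.strip())  # HELLO
```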
Package Support
- Anti-Bloat: Avoided a warning during program shutdown when using a compiled xgboost package. (Fixed in 2.8.1 already.)
- Standalone: Added support for the oracledb package. (Fixed in 2.8.2 already.)
- macOS: Added support for newer PySide6 versions. (Fixed in 2.8.4 already.)
- Standalone: Added support for including more metadata for the transformers package. (Fixed in 2.8.5 already.)
- Standalone: Metadata from Nuitka Package Configuration is now only included if the corresponding package is part of the compilation. (Fixed in 2.8.5 already.)
- Standalone: Added support for the win32ctypes package. (Fixed in 2.8.6 already.)
- Standalone: Added support for newer versions of the dask package. (Fixed in 2.8.6 already.)
- Standalone: Added support for the dataparser package. (Added in 2.8.7 already.)
- Standalone: Added support for puremagic, pygments.lexers and tomli in standalone mode.
- Standalone: Added automatic detection of mypyc runtime dependencies, so there is no need to configure that manually anymore. Our configuration was also often correct only for a single OS and a single upstream version, which is now fixed for packages that had it before.
- Standalone: Added support for the newer av (PyAV) package version.
- Standalone: Added support for the sentry_sdk, jedi, parso, and line_profiler packages.
- Standalone: Added support for newer pandas versions.
New Features
- UI: Added support for the --project parameter to build using configuration from pyproject.toml (e.g. Poetry, Setuptools). With this, you can simply run python -m nuitka --project --mode=onefile and it will use the pyproject.toml or setup.py/setup.cfg files to get the configuration and build the Nuitka binary. Previously Nuitka could only be used for building wheels with the build package, and for building wheels that is still the best way. The --project option is currently compatible with build and poetry and detects the used build system automatically.
- Zig: Added experimental support for using the Zig project's zig cc as a C compiler backend for Nuitka. This can be enabled by setting the CC environment variable to point to the zig or zig.exe executable.
- Reports: Started capturing rusage for OSes that support it.
  - Only POSIX-compliant OSes do this (Linux, macOS, and all BSD variants); Android does not.
  - Not yet part of the actual report, as we need to figure out how to use and present the information.
- Scons: Added experimental support for enabling Thin LTO with the Clang compiler.
- Standalone: Honor --nofollow-import-to for stdlib modules as well. This allows users to manually reduce standard library usage, but it can also cause crashes from extension modules not prepared for the absence of standard library modules.
- Onefile: Allowed disabling the onefile timeout and hard killing on CTRL-C entirely by providing --onefile-child-grace-time=infinity.
- Scons: Added a newer inline copy of Scons which supports Visual Studio 2026. (Added in 2.8.7 already.)
- Scons: Allowed using Python versions only partially supported by Nuitka with Scons. (Added in 2.8.7 already.)
- UI: Added the option --devel-profile-compilation for compile time profiling. Also renamed the old runtime profiling option --profile to --debug-profile-runtime; that is however still broken.
- Reports: Included CPU instruction and cycle counters in timing on native Linux.
  - With appropriate configuration on Linux, this gives very precise timing information, so that even small compile time improvements can be judged correctly. Many runs to average out noise from other effects are then not needed.
  - Process time rather than wall-clock time is used for steps that do no I/O, like module optimization, for more accurate values; it is however still not very accurate.
- Python 3.12+: Added support for function type syntax (generics).
- Python 3.14: Added groundwork for deferred evaluation of function annotations.
- Python 3.14: Added support for uncompiled generator integration, which is crucial for asyncio correctness and general usability with modern frameworks.
- Debugging: Added --debug-self-forking to debug fork bombs.
- Windows: Added the --include-windows-runtime-dlls option to control inclusion of Windows C runtime DLLs. Defaults to auto.
- Python 3.14: Added experimental support for deferred annotations.
- Plugins: Added the option --qt-debug-plugins for debugging Qt plugin loading.
- DLLs: Added support for DLL tags to potentially control inclusion with more granularity.
- macOS: Added support for many more protected resource entitlements (Siri, Bluetooth, HomeKit, etc.) in the bundle details.
- Python: Added support for the @nuitka_ignore decorator to exclude functions from compilation.

  @nuitka_ignore
  def my_cpython_func():
      # This function is not compiled, but stays bytecode
      ...

- UI: Added support for merging user and standard YAML Nuitka package configurations, currently only including proper merging of implicit imports.
Optimization
- Avoid making duplicate hard imports by dropping assignments if the variable was already assigned the same value.
- Found previous assignment traces faster.
  - The assignment and del nodes were using functions to find what they already knew from the last micro pass. The self.variable_trace already kept track of the previous value trace situation.
  - For matching unescaped traces we will do similar, but it's not really used right now, so this is only a TODO, as that will eventually be very similar.
  - Also speeds up the first micro pass even more, because it doesn't have to search and do other things. If no previous trace exists, none is attempted to be used.
  - Also, the common check whether no by-name uses or merges of a value occurred was always used inverted; it should now be slightly faster to use and allow short-circuiting.
  - While this accelerated the first micro pass by a lot for per-assignment work, it mainly cleans up the design such that traces are easier to re-recognize. This is a first step with immediate impact.
- Much faster Python passes.
  - The "Escape" and "Unknown" traces now have their own number spaces. This allows doing some quick checks for a trace without using the actual object, but just its number.
  - Narrow the scope of variables to the outline scope that uses them, so that they don't need to be dealt with in merging later code where they never change anymore and are not used at all.
  - When checking for unused variables, do not ask the trace collection to filter its traces. Instead, work off the ones attached to the variable already. This avoids a lot of searching work. It also uses a method to decide if a trace constitutes usage, rather than a long elif chain.
- Faster variable trace maintenance.
  - We now trace variables in trace collection as a dictionary per variable with a dictionary of the versions; this is closer to our frequent usage per variable.
  - That makes it a lot easier to update variables after the tracing is finished, to know their users and writers.
  - This requires a lot less work, but it also makes the work less memory-local, such that the performance gain is relatively small despite less work being done.
  - It also avoids having to maintain a per-variable set of its using scopes.
  - Decide the presence of writing traces for parameter variables faster.
- Avoid unnecessary micro passes.
  - Detect discarded variable references sooner for better micro-pass efficiency. We were spending an extra pass on the whole module to stabilize the variable usage, which can end up being a lot of work.
  - After a module optimization pass found no changes, we no longer make an extra micro pass to avoid stabilization bugs, but only check against it not happening in debug mode. Depending on the number of micro passes, this can be a relatively high performance gain. For the telethon.tl.types module this was a 13% performance gain on top.
- For "PASS 1" of telethon.tl.types, which has been one of the known troublemakers with many classes and type annotations, all changes combined improve the compilation time by 1500%.
- Faster code generation.
  - Indentation in generated C code is no longer performed, to speed up code generation. To restore readability, use the new option --devel-generate-readable-code, which will use clang-format to format the C code.
- Recognized module variable usages inside outlined functions that are in a loop, which improves the effectiveness of caching at run-time. (Added in 2.8.6 already.)
- Standalone: Partially solved a TODO of minimizing intermediate directories in r-paths on ELF platforms, by only putting them there if the directory they point to will contain DLLs or binaries. This removes unused elements and reduces r-path size.
- Windows: Made the caching of external paths effective, which significantly speeds up DLL resolution in subsequent compilations. (Fixed in 2.8.6 already.)
- macOS: Removed extended attributes from data files as well, improving performance. (Fixed in 2.8.7 already.)
- Scons: Stopped detecting installed MinGW to avoid overhead, as it is not supported. (Fixed in 2.8.9 already.)
- Scons: Added caching for MSVC information to reduce compilation time, and if already available, use that to detect the Windows SDK location rather than using vswhere.exe each time.
- Avoid computing large % string interpolations at compile time. These could cause constants to be included in the binary as a result.
- Avoid including importlib._bootstrap and importlib._bootstrap_external, as they are available as frozen modules.
- Fixed un-hashable dictionary keys not being properly optimized, forcing runtime handling.
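As background on that last item, an un-hashable dictionary key is something like a list: mutable objects cannot be hashed, so a dictionary literal using one must raise TypeError at run time. An illustrative example (not from the release notes):

```python
# Lists are mutable and therefore un-hashable, so using one as a
# dictionary key raises TypeError at run time.
try:
    d = {[1, 2]: "value"}
except TypeError as exc:
    print(exc)  # unhashable type: 'list'
```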
Anti-Bloat
- Avoid including tzdata on non-Windows platforms. (Fixed in 2.8.7 already.)
- Avoid including pyparsing.testing in the pyparsing package.
- Added configuration to avoid compilation via C for large generated files of the sqlfluff package.
Organizational
- UI: Don't say --include-data-files-external doesn't work in standalone mode. It has actually worked for a while, and we have since renamed that option, but the help still said it wouldn't work in standalone mode.
- Debugging: Added assertions for code object creation. We were getting assertions from Python when built with Zig, and these are supposed to provide those as well.
- Debugging: In case of tool commands failing, output the too-long command line if that was the error given.
- Anti-Bloat: Don't allow custom nofollow modes; point the user to the correct option instead. This was never needed, and two ways of providing this user decision make no sense.
- UI: The help text for --include-data-files-external was updated to reflect that it works in standalone mode. (Fixed in 2.8.5 already.)
- Release: Use lowercase names for source archives in PyPI uploads. (Fixed in 2.8.7 already.)
- Quality: Fixed an issue where "assume yes" was not being passed for downloads in the commit hook.
- UI: Improved wording of the missing C compiler message.
- Debugging: Clearer verbose trace for dropped expressions.
- Debugging: Output which module had extra changes during the debug extra micro pass.
- Quality: Manage more development tools (clang-format, etc.) via private pip space for better consistency and isolation.
- AI: Enhanced the pull request template with directions for AI-driven PRs.
- AI: Added the agent command create-mre to assist in creating a minimal reproduction example (MRE).
- User Manual: Added documentation about redistribution requirements for Python 3.12-3.14.
- Quality: Added the --un-pushed argument to the auto-format tool for checking only un-pushed changes.
- Scons: Improved the error message to point to Zig support if no C compiler is found.
- MonolithPy: Follow the rename of our Python fork to MonolithPy, to avoid confusion with the Nuitka compiler project itself.
- Scons: Prefer English output with MSVC and warn the user about a missing English language pack in case outputs are made.
- UI: When running non-interactively, print the default response that is assumed for user queries to stdout as well, so it becomes visible in the logs.
- UI: Warn when using protected resources options without standalone/bundle mode enabled on macOS.
- Reports: Sort DLLs and entry points in compilation reports by destination path for deterministic output.
- Quality: Skip files with spell-checker: disable in codespell checks.
- Release: Avoid compiling bytecode for inline copies that are not compatible with the running Python version during install.
- Visual Studio: Ignored names in backticks and code blocks in ReST for spelling checks.
- Actions: Ensured compilation reports are always recorded, even in case of errors, as they are most useful then.
- AI: Added a workflow create-mre to assist in creating a Minimal Reproducible Example from a larger file triggering a Nuitka bug. This has guidance on avoiding standalone mode and instructions for reducing code to produce an MRE that is really small.
- AI: Added a workflow fix-module-not-found-error for solving simple ModuleNotFoundError runtime errors.
- AI: Added further strategies for Minimal Reproducible Example (MRE) reduction to the agent workflow.
- UI: Reject input paths from standard library locations to prevent compiling files from there as main files.
Tests
- Added support for --all with the --max-failures option to the test runner, to stop after a specified number of failures, or to just run all tests and output the failed tests at the end. The tests specified can also be a glob pattern to match multiple tests, not just a test name. Added examples to the help output of the runner to guide developers in its usage.
- Ignore multiline source code outputs of Python 3.14 in tracebacks for output comparison; Nuitka won't do those.
- Added test cases for poetry and distutils. Also verify that standalone mode works with --project for the supported build systems.
- Made the distutils test cases much more consistent.
- Watch: Improved binary name detection from compilation reports for better mode support beyond standalone mode.
- Allow downloading tools (like clang-format) for all test cases.
- Added options to enforce Zig or Clang usage for C compiling.
- Suppress pip output when not running interactively to avoid test output differences.
- Added nuitka.format and nuitka.package_config to self-compilation tests.
- Added colorization to test comparison diffs if a tty is available.
- Avoided using --nofollow-imports in tests, as some Python flavors do not work with it when using --mode=standalone.
Cleanups
- Moved options to the new nuitka.options package.
- Python 3.14: Fixed a type mismatch warning seen with MSVC. (Fixed in 2.8.9 already.)
- Massive amounts of spelling cleanups. Correct spelling in more and more places allows identifying bugs more immediately, therefore these are very worthwhile.
- Code cleanup and style improvements in the Errors and OutputDirectories modules.
- Replaced usages of os.environ.get with os.getenv for consistency and denser code.
- Moved MSVC redist detection to DllDependenciesWin32.
- Release: Don't install zstandard by default anymore.
- UI: Toned down the complaint about checksum mismatches.
- Static source files are now provided by Nuitka directly.
- Renamed the C function modulecode_ to module_code_ for consistency.
Summary
This release is finally a break-through for scalability. We will continue the push for scalability in the next release as well, but with more of a focus on the C compilation step, to generate C code that is easier for the backend compiler.
Also, this release finally addresses many usability problems. The non-deployment hooks for imports not found that were actively excluded are one such thing. The start of --project enables far easier adoption of Nuitka for existing projects.
Other huge improvements are related to generics; they are now much better supported, closing gaps in the Python 3.12 support.
The onefile DLL mode as used on Windows is finally perfect and should have no issues anymore, while enabling big future improvements.
Unfortunately, Python 3.14 support is not yet ready and will have to be delayed until the next release.
22 Apr 2026 1:39pm GMT
Real Python: Quiz: SQLite and SQLAlchemy in Python: Move Your Data Beyond Flat Files
In this quiz, you'll test your understanding of the concepts in the video course SQLite and SQLAlchemy in Python: Move Your Data Beyond Flat Files.
By working through this quiz, you'll revisit how Python, SQLite, and SQLAlchemy work together to give your programs reliable data storage. You'll also check your grasp of primary and foreign keys, SQLAlchemy's Core and ORM layers, and the many-to-many relationships that tie your data together.
22 Apr 2026 12:00pm GMT
Python GUIs: Checkboxes in Table Views with a Custom Model β Show checkboxes for boolean values in PyQt/PySide table views
I have a QTableView with a custom QAbstractTableModel, and I want to add a column of checkboxes. Should I create a custom delegate class for the checkbox, or is there a simpler way to do this?
You can use a custom delegate to draw a checkbox widget, but you don't have to. Qt provides a built-in mechanism for this: Qt.CheckStateRole. By returning Qt.Checked or Qt.Unchecked from your model's data() method, Qt will render a checkbox automatically - no delegate required.
Let's walk through how this works, starting with a simple display and then adding some interactivity.
Displaying checkboxes using Qt.CheckStateRole
The simplest way to add checkboxes to a QTableView is to handle Qt.CheckStateRole in your model's data() method. When Qt asks your model for data with this role, returning Qt.Checked or Qt.Unchecked tells Qt to draw a checkbox in that cell.
Here's a minimal example that shows a checked checkbox in every cell:
def data(self, index, role):
    if role == Qt.DisplayRole:
        value = self._data[index.row()][index.column()]
        return str(value)

    if role == Qt.CheckStateRole:
        return Qt.Checked
This produces a table where every cell has both text and a checked checkbox:

In a real application, you would return Qt.Checked or Qt.Unchecked based on actual boolean values in your data. You might also restrict checkboxes to a specific column - for example, one that holds True/False values - rather than showing them everywhere.
Making checkboxes toggleable
Displaying checkboxes is a good start, but users will expect to be able to click them. To make checkboxes interactive, you need three things:
- A data store for the check state - a list (or column) that tracks which items are checked.
- Qt.ItemIsUserCheckable returned from flags() - this tells Qt that the cell supports toggling.
- A setData() implementation for Qt.CheckStateRole - this stores the updated state when the user clicks a checkbox.
Let's put all of this together in a complete example.
import sys

from PyQt6 import QtCore, QtGui, QtWidgets
from PyQt6.QtCore import Qt


class TableModel(QtCore.QAbstractTableModel):
    def __init__(self, data, checked):
        super().__init__()
        self._data = data
        self._checked = checked

    def data(self, index, role):
        if role == Qt.ItemDataRole.DisplayRole:
            value = self._data[index.row()][index.column()]
            return str(value)

        if role == Qt.ItemDataRole.CheckStateRole:
            checked = self._checked[index.row()][index.column()]
            if checked:
                return Qt.CheckState.Checked
            return Qt.CheckState.Unchecked

    def setData(self, index, value, role):
        if role == Qt.ItemDataRole.CheckStateRole:
            checked = value == Qt.CheckState.Checked.value
            self._checked[index.row()][index.column()] = checked
            self.dataChanged.emit(index, index, [role])
            return True
        return False

    def rowCount(self, index):
        return len(self._data)

    def columnCount(self, index):
        return len(self._data[0])

    def flags(self, index):
        return (
            Qt.ItemFlag.ItemIsSelectable
            | Qt.ItemFlag.ItemIsEnabled
            | Qt.ItemFlag.ItemIsUserCheckable
        )


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        self.table = QtWidgets.QTableView()

        data = [
            [1, 9, 2],
            [1, 0, -1],
            [3, 5, 2],
            [3, 3, 2],
            [5, 8, 9],
        ]

        checked = [
            [True, True, True],
            [False, False, False],
            [True, False, False],
            [True, False, True],
            [False, True, True],
        ]

        self.model = TableModel(data, checked)
        self.table.setModel(self.model)

        self.setCentralWidget(self.table)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Run this and you'll see a table with checkboxes next to every value. Clicking any checkbox toggles it on and off, and the underlying checked list is updated accordingly.
Storing check state separately
The checked list mirrors the structure of the data list - each cell has a corresponding True or False value. This keeps the boolean check state separate from the data.
You could also store it in the same data structure, as a [bool, data_value] nested list, or as tuples if you like.
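A hypothetical sketch of that combined layout, where each cell is a [checked, value] pair (the helper names here are illustrative, not from the article):

```python
# Each cell stores its check state alongside its value.
data = [
    [[True, 1], [False, 9], [True, 2]],
    [[False, 3], [True, 5], [False, 2]],
]

def cell_value(rows, row, col):
    # The display value is the second element of the pair.
    return rows[row][col][1]

def cell_checked(rows, row, col):
    # The check state is the first element of the pair.
    return rows[row][col][0]

print(cell_value(data, 0, 1))    # 9
print(cell_checked(data, 0, 1))  # False
```

Lists rather than tuples keep the pairs mutable, which matters if setData() needs to toggle the check state in place.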
Returning the check state in data()
When Qt asks for Qt.ItemDataRole.CheckStateRole, we look up the boolean value for that cell and return either Qt.CheckState.Checked or Qt.CheckState.Unchecked:
if role == Qt.ItemDataRole.CheckStateRole:
    checked = self._checked[index.row()][index.column()]
    if checked:
        return Qt.CheckState.Checked
    return Qt.CheckState.Unchecked
For these "return X if true, otherwise return Y" cases you can also use an X if condition else Y conditional expression.
if role == Qt.ItemDataRole.CheckStateRole:
    checked = self._checked[index.row()][index.column()]
    return Qt.CheckState.Checked if checked else Qt.CheckState.Unchecked
Handling user clicks in setData()
When the user clicks a checkbox, Qt calls setData() with the new value and the Qt.ItemDataRole.CheckStateRole role. We compare the incoming value to Qt.CheckState.Checked.value to determine whether the box was checked or unchecked, then store the result:
def setData(self, index, value, role):
    if role == Qt.ItemDataRole.CheckStateRole:
        checked = value == Qt.CheckState.Checked.value
        self._checked[index.row()][index.column()] = checked
        self.dataChanged.emit(index, index, [role])
        return True
    return False
Notice the self.dataChanged.emit(...) call - this notifies the view that the data has changed so it can redraw the cell. Always emit this signal when you modify data in setData().
Enabling user interaction with flags()
The flags() method tells Qt what the user can do with each cell. Including Qt.ItemFlag.ItemIsUserCheckable is what makes the checkbox clickable:
def flags(self, index):
    return (
        Qt.ItemFlag.ItemIsSelectable
        | Qt.ItemFlag.ItemIsEnabled
        | Qt.ItemFlag.ItemIsUserCheckable
    )
Without this flag, the checkbox will still appear (because you're returning data for CheckStateRole), but the user won't be able to toggle it.
Showing checkboxes in only one column
In many applications, you only want checkboxes in a specific column. You can achieve this by checking index.column() in your data() and flags() methods. For example, to show checkboxes only in column 2:
def data(self, index, role):
    if role == Qt.ItemDataRole.DisplayRole:
        value = self._data[index.row()][index.column()]
        return str(value)

    if role == Qt.ItemDataRole.CheckStateRole:
        if index.column() == 2:
            checked = self._checked[index.row()]
            if checked:
                return Qt.CheckState.Checked
            return Qt.CheckState.Unchecked


def flags(self, index):
    flags = Qt.ItemFlag.ItemIsSelectable | Qt.ItemFlag.ItemIsEnabled
    if index.column() == 2:
        flags |= Qt.ItemFlag.ItemIsUserCheckable
    return flags
In this case, self._checked would be a simple one-dimensional list (one boolean per row) rather than a 2D list.
Summary
To add checkboxes to a QTableView with a custom QAbstractTableModel:
- Handle Qt.ItemDataRole.CheckStateRole in data() to display checkboxes based on boolean values.
- Return Qt.ItemFlag.ItemIsUserCheckable from flags() to make checkboxes interactive.
- Implement setData() for Qt.ItemDataRole.CheckStateRole to store the updated state when the user clicks, and emit dataChanged to keep the view in sync.
This approach works natively with Qt's model/view architecture and avoids the complexity of writing a custom delegate. For a more complete guide to displaying data in table views - including using numpy and pandas data sources - see our QTableView with ModelViews tutorial. If you want to show only an icon without text in specific cells, see how to show only an icon in a QTableView cell. You can also learn how to create your own custom widgets for more advanced UI needs.
For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.
22 Apr 2026 9:00am GMT
20 Apr 2026
Django community aggregator: Community blog posts
Django: fixing a memory "leak" from Python 3.14's incremental garbage collection
Back in February, I encountered an out-of-memory error while migrating a client project to Python 3.14. The issue occurred when running Django's database migration command (migrate) on a limited-resource server, and seemed to be caused by the new incremental garbage collection algorithm in Python 3.14.
At the time, I wrote a workaround and started on this blog post, but other tasks took priority and I never got around to finishing it. But four days ago, Hugo van Kemenade, the Python 3.14 release manager, announced that the new garbage collection algorithm will be reverted in Python 3.14.5, and the next Python 3.15 alpha release, due to reports of increased memory usage.
Here's the story of my workaround, as extra evidence that reverting incremental garbage collection is a good call.
Python 3.14's incremental garbage collection
Python (well, CPython) has a garbage collector that runs regularly to clean up unreferenced objects. Most objects are cleaned up immediately when their reference count drops to zero, but some objects can be part of reference cycles, where some set of objects reference each other and thus never reach a reference count of zero. The garbage collector sweeps through all objects to find and clean up these cycles.
Python 3.14 changed garbage collection to operate incrementally. Previously, a garbage collection run would sweep through all objects in one go, but this could lead to "stop the world" stalls where your program's real work could pause for seconds while the garbage collector did its job. The incremental garbage collection algorithm instead does a fraction of the work at a time, spreading out the cost of garbage collection.
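The cycle case is easy to see in a few lines of plain Python. This is a generic illustration (not code from the post); `gc.disable()` just keeps the demo deterministic:

```python
import gc
import weakref


class Node:
    def __init__(self):
        self.other = None


gc.disable()  # no automatic sweeps, so the demo is deterministic

a, b = Node(), Node()
a.other = b
b.other = a  # a reference cycle: neither refcount can ever reach zero
probe = weakref.ref(a)

del a, b
print(probe() is not None)  # True: refcounting alone cannot free the cycle

gc.collect()  # an explicit full sweep finds and breaks the cycle
print(probe() is None)      # True: both Nodes are now reclaimed
gc.enable()
```

Objects without cycles are freed the moment their refcount hits zero; only cyclic garbage like this waits for the collector.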
Here's the full release note (historical source):
Incremental garbage collection
The cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps.
There are now only two generations: young and old. When gc.collect() is not called directly, the GC is invoked a little less frequently. When invoked, it collects the young generation and an increment of the old generation, instead of collecting one or more generations.

The behavior of gc.collect() changes slightly:

- gc.collect(1): Performs an increment of garbage collection, rather than collecting generation 1.
- Other calls to gc.collect() are unchanged.

(Contributed by Mark Shannon in 108362.)
The problem
I'd been helping one of my clients upgrade to Python 3.14 for a few months, chipping away at compatibility work like upgrading dependencies and fixing deprecations. Tests were finally all passing and everything was working on the local development server. The next stop was to launch a temporary deployment using Python 3.14 via Heroku's review apps feature.
At the basic tier, Heroku review apps use fairly resource-constrained servers, including just 512MB of RAM, with the ability to temporarily burst up to nearly 1GB (200%). Paying for larger servers is an option, but unfortunately the next step up is pretty expensive.
When I launched a review app for my Python 3.14 branch, I found its release phase failed while running migrate. Inspecting the logs, I found the migrations started fine:
$ heroku logs --app example-python-314-wsgk3w --num 1000 | less
...
app[release.6634]: System check identified no issues (26 silenced).
app[release.6634]: Operations to perform:
app[release.6634]: Apply all migrations: admin, auth, contenttypes, ...
app[release.6634]: Running migrations:
…but partway through, these messages started appearing:
heroku[release.6634]: Process running mem=527M(101.5%)
heroku[release.6634]: Error R14 (Memory quota exceeded)
…ramping up until the 200% mark:
heroku[release.9599]: Process running mem=977M(190.3%)
heroku[release.9599]: Error R14 (Memory quota exceeded)
…and finally the termination of the release process:
heroku[release.9599]: Process running mem=1033M(201.7%)
heroku[release.9599]: Error R15 (Memory quota vastly exceeded)
heroku[release.9599]: Stopping process with SIGKILL
These messages came from Heroku's process management layer, which terminated the memory-hungry release process with SIGKILL after the hard threshold of 1GB memory usage was breached. Repeat attempts hit the same issue.
I was confused: migrations should not consume much memory. While they create a lot of temporary objects (Django model classes and fields) in order to calculate the SQL to send to the database, such objects are all short-lived and should be garbage-collected fairly swiftly. Additionally, migrations worked fine on the local and CI environments, and they'd never had memory issues on previous Python versions.
It looked like there was a memory leak, and it was time to dig in.
Initial investigation
I first profiled memory usage of migrate locally using Memray, the memory profiler that I covered in my previous post, using:
$ memray run manage.py migrate
The profiles revealed that memory usage had slightly increased on Python 3.14 compared to 3.13, but did not find a memory leak (a pattern of continual growth). Still, I made some optimizations to defer some imports, saving about 30% of startup memory usage, and tried again, to no avail.
I then had the idea to profile on a Heroku dyno directly. After hacking the release process to not run migrations, I built a review app and SSH'd into its web server:
$ heroku ps:exec -a example-python-314-rspwtc --dyno web.1 bash
Establishing credentials... done
Connecting to web.1 on ⬢ example-python-314-rspwtc...
~ $
Initially, I tried using Memray's live mode to profile the migrations as they ran:
$ memray run --live manage.py migrate
While this tool looks great for some situations, it didn't really work here, especially since it seized up after Heroku terminated the server.
I then tried running the default memray run command:
$ memray run manage.py migrate
Writing profile results into memray-manage.py.724.bin
β¦then, on my local computer, I repeatedly ran this command to copy down the results file:
$ trash memray-manage.py.724.bin && heroku ps:copy -a example-python-314-rspwtc --dyno web.1 memray-manage.py.724.bin
I was a bit worried here that the Memray binary file might be corrupted due to copying it while memray run was generating it. But with a final truncated copy left over after the server crashed, I asked Memray to generate a flamegraph for it:
$ memray flamegraph memray-manage.py.724.bin
β¦and it worked! Kudos to the Memray team for making their output format usable even when incomplete.
This more detailed flamegraph revealed more than 50% of the memory usage was allocated in ModelState.render(), which creates temporary model classes:
class ModelState:
    ...

    def render(self, apps):
        """Create a Model object from our current state into the given apps."""
        ...
        return type(self.name, bases, body)
This information hinted that these temporary model classes were hanging around beyond their expected short lifetime, leading to the memory leak. For example, every model class could end up in a list intended for debugging, accidentally extending the lifetime of these temporary classes.
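That failure mode is easy to reproduce in isolation. In this sketch, `debug_log` and `Temp` are hypothetical names standing in for any "just for debugging" container:

```python
import weakref


class Temp:
    pass


debug_log = []  # hypothetical list kept around "just for debugging"


def make():
    obj = Temp()
    debug_log.append(obj)  # accidental strong reference
    return weakref.ref(obj)


ref = make()
print(ref() is not None)  # True: the debug list keeps obj alive past make()

debug_log.clear()
print(ref() is None)      # True: with the last reference gone, obj is freed
```

No cycle is needed here: as soon as the list stops referencing the object, plain refcounting frees it immediately.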
I decided to dig a bit deeper using machete-mode debugging, with the below snippet that captures the temporary model classes and logs details about them. I wrote this within the Django settings file, where it was guaranteed to run at Django startup time, before the migrate management command.
import atexit
import gc
import tracemalloc
import weakref
from itertools import islice

from django.db.migrations.state import ModelState

tracemalloc.start(2)

orig_render = ModelState.render
rendered_classes = weakref.WeakSet()


def wrapped_render(*args, **kwargs):
    cls = orig_render(*args, **kwargs)
    rendered_classes.add(cls)
    return cls


ModelState.render = wrapped_render


@atexit.register
def show_referrers():
    print(f"🎯 {len(rendered_classes)} classes referred to.\n")
    for cls in islice(rendered_classes, 2):
        print(f"🐍🐍🐍 {cls!r} 🐍🐍🐍")
        for i, referrer in enumerate(gc.get_referrers(cls), start=1):
            print(f"🔍 Referrer #{i}: {referrer!r}")
            if tb := tracemalloc.get_object_traceback(referrer):
                print("\n".join(tb.format(most_recent_first=True)))
            print()
        print()
        print()
Note:
- tracemalloc.start() starts Python's built-in memory allocation tracking.
- The ModelState.render() method was monkeypatched with a wrapper that stores every temporary model class in a WeakSet.
- The @atexit.register-decorated function runs at the end of the program, and logs two things.
- The first piece of logging is the number of temporary model classes still alive at the end of the program, which should be close to zero. (Some may stick around from the final migration state.)
- The second piece of logging iterates over the first two live temporary model classes and logs their name and their referring objects, discovered via gc.get_referrers(). For each referring object, it also logs the traceback of where that object was allocated, using tracemalloc.get_object_traceback() (which is why tracemalloc.start() was needed at the beginning).
- The emojis are a bit of fun to make the log messages easier to skim through. I have no idea why I picked 🐍 and 🔍!!
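The gc.get_referrers() call used in the snippet is easy to try on its own. A minimal sketch (names hypothetical) showing a dict turning up as a referrer:

```python
import gc


class Thing:
    pass


target = Thing()
holder = {"key": target}  # a container that references target

# gc.get_referrers() lists the GC-tracked objects that refer to target;
# the holder dict shows up among them.
found = any(r is holder for r in gc.get_referrers(target))
print(found)  # True
```

Note that only GC-tracked containers appear in the result, which is exactly why it is useful for hunting down who is keeping an object alive.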
The output from this hook was voluminous, even with the limit to the first two live classes. For example, here's the output for a temporary ContentType model class:
🐍🐍🐍 <class '__fake__.ContentType'> 🐍🐍🐍
🔍 Referrer #1: <generator object WeakSet.__iter__ at 0x1234ef300>
File "/.../example/core/apps.py", line 45
for cls in islice(rendered_classes, 2):
...
🔍 Referrer #11: {'name': 'model', ..., 'model': <class '__fake__.ContentType'>}
File "/.../.venv/lib/python3.14/site-packages/django/utils/functional.py", line 47
res = instance.__dict__[self.name] = self.func(instance)
File "/.../.venv/lib/python3.14/site-packages/django/db/models/fields/__init__.py", line 1210
self.validators.append(validators.MaxLengthValidator(self.max_length))
I checked the live referrers for a few classes, and they all seemed to be expected. However, it did reveal just how many cycles exist between ORM objects. For example, model classes refer to their field objects, which in turn refer back to their model classes, thanks to Django's Field.contribute_to_class() creating this reference:
def contribute_to_class(self, cls, name, private_only=False):
    ...
    self.model = cls
    ...
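Outside Django, the same cycle shape looks like this. These Field/Model classes are schematic stand-ins, not Django's real implementation:

```python
class Field:
    def contribute_to_class(self, cls, name):
        self.model = cls          # field -> model back-reference
        setattr(cls, name, self)  # model -> field reference


class Model:
    pass


field = Field()
field.contribute_to_class(Model, "title")

# Model -> field -> Model: a cycle that refcounting alone can never free,
# so it sits around until the cycle collector sweeps.
print(Model.title.model is Model)  # True
```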
Anyway, from comparing the output between Python 3.13 and 3.14, I could see that no new references were being created on Python 3.14. It seemed likely that the incremental garbage collection algorithm was the culprit.
The workaround
Given the investigation, I wanted to work around the issue by forcing a full garbage collection sweep with gc.collect() after each migration file ran. I came up with the below code, saved as management/commands/migrate.py in one of the project's Django apps. It extends the default migrate command to run gc.collect() after each successful migration (where "apply" is forwards and "unapply" is backwards).
import gc

from django.core.management.commands.migrate import Command as BaseCommand


class Command(BaseCommand):
    """Extended 'migrate' command."""

    def migration_progress_callback(self, action, migration=None, fake=False):
        """
        Extend Django's migration progress reporting to force garbage
        collection after each migration. This is a workaround to keep memory
        usage low, especially because we have a low limit on Heroku. It seems
        the incremental garbage collector introduced in Python 3.14 cannot
        keep up with the migration process's tendency to create many cyclical
        objects, so our best fallback is to force collection of everything
        after each migration is applied or unapplied.

        https://adamj.eu/tech/2026/04/20/django-python-3.14-incremental-gc/
        """
        super().migration_progress_callback(action, migration=migration, fake=fake)
        if action in ("apply_success", "unapply_success"):
            gc.collect()
It felt a bit hacky, but it did the trick! The review app launched successfully, showing a flat memory profile as before.
We then continued to deploy to staging and production without any issues, and the team have been happily using Python 3.14 for over a month now.
Fin
Well, that's where the tale ends right now. After the incremental garbage collection algorithm is reverted in Python 3.14.5, I guess I'll be able to remove this workaround.
While it would be nice to have incremental garbage collection work well, it's clear that the current implementation has some issues. I think the core team is making the right call reverting it, but hopefully there will be energy to improve the feature for the future.
May your garbage be collected efficiently and without fuss,
-Adam
20 Apr 2026 4:00am GMT
17 Apr 2026
Django community aggregator: Community blog posts
Django News - 30% Off PyCharm Pro - 100% for Django - Apr 17th 2026
Introduction
Django News Newsletter is moving!
Just a quick heads up. We're planning to move our newsletter to a new platform next week.
If things look a little different when it shows up, it's still us.
Django Newsletter
News
PyCharm & Django annual fundraiser
JetBrains and the Django Software Foundation team up again to offer 30% off PyCharm while matching donations to fund Django's core development and community programs.
New Technical Governance - request for community feedback
Django proposes a simpler, more flexible technical governance model and is inviting community feedback ahead of a planned July 2026 rollout.
Could you host DjangoCon Europe 2027? Call for organizers
DjangoCon Europe 2026 is happening right now in Athens, Greece, but planning for 2027 has already begun. This post lays out resources for future organizers, including where to go with questions and for support.
Reverting the incremental GC in Python 3.14 and 3.15 - Core Development
Python is rolling back its new incremental garbage collector in 3.14 and 3.15 after real-world memory issues, reverting to the proven generational model while rethinking a future reintroduction.
PEP 772: Packaging Council governance process (Round 3) - Packaging / Coordination
PEP 772 has officially been approved, creating a new Python Packaging Council to guide the future of packaging standards, tools, and ecosystem governance.
Django Software Foundation
Django Has Adopted Contributor Covenant 3
The 3.0 edition of the new Code of Conduct is here! This milestone represents the completion of a careful, community-driven process that began earlier this year.
DSF Board monthly meeting, April 9, 2026
The Django Software Foundation approved a modernized Code of Conduct, new working group charters, and key community initiatives, signaling a fresh push toward clearer governance and sustained project growth.
Python Software Foundation
PyCon US 2026: Why we're asking you to think about your hotel reservation
For many years, PyCon US has relied on hotel booking commissions to help pay for conference space. If you are attending this year, please book an official hotel to both stay close to the venue and support the conference.
Python Software Foundation News: Reflecting on Five Years as the PSF's First CPython Developer in Residence
Łukasz Langa looks back on five years and highlights, including the transition to GitHub issues from bugs.python.org, the replacement of the mostly manual CLA process with an automated system, the introduction of free threading to Python, and the replacement of the interactive shell in the interpreter. Along the way, while addressing thousands of bugs, he has watched the roster of full-time paid developers in residence at the Python Software Foundation grow from one person to five.
Updates to Django
Today, "Updates to Django" is presented by Johanan Oppong Amoateng from Djangonaut Space! π
Last week we had 12 pull requests merged into Django by 10 different contributors - including a first-time contributor! Congratulations to Jonathan Wu for having their first commits merged into Django - welcome on board!
This week's Django highlights: 🦄
-
Added user_perm_str helper function that can be used when checking user permissions using has_perm(). (#37021)
The task decorator was updated to accept
**kwargsand forward them totask_class, allowing additional parameters to be passed to custom Task subclasses. (#36816)
Django Newsletter
Django Fellow Reports
Fellow Report - Natalia
A good chunk of this week focused on improving contributor workflows and reducing review overhead by introducing automated quality checks for PRs 🤖. This builds on prior experimentation (thanks @frankwiles) and seeks to provide early, actionable feedback for PR authors while helping maintainers focus on substantive review. We also had a flood of overly verbose, low-quality reports from the same person, which I closed promptly, making use of the new guidelines we recently published in the security policy.
Fellow Report - Jacob
The last report before DjangoCon Europe. Lots of tickets triaged, reviewed, authored, discussed, and the usual kaleidoscope of miscellaneous tasks.
Django Fellow Report - Sarah
Django Fellow Sarah Boyce returns from maternity leave with part-time updates, tackling triage, reviews, security work, and GSoC prep while navigating connectivity challenges from Turkey.
Sponsored Link 1
You know @login_required. Now meet @app.reasoner(). AgentField turns Python functions into production AI agents, structured output, async execution, agent discovery. Every decorator becomes a REST endpoint. Open source, Apache 2.0. Python, Go & TypeScript SDKs.
Articles
Enforce Business Logic in the Database with Django
A practical guide to enforcing business logic at the database layer in Django using transactions, select_for_update locks, and CheckConstraint / UniqueConstraint to prevent race conditions and invalid data rather than relying on application-level validation.
Let's talk about LLMs
James Bennett consolidates his thoughts on AI/LLMs in this wide-ranging piece, ending with a call to invest in software fundamentals instead of racing to adopt the latest AI craze.
Django Table, Filter and Export With Htmx
A reusable pattern for combining django-tables2, django-filter, and HTMX into a single generic view and template. Very cool stuff.
Decoupling Your Business Logic from the Django ORM
Carlton Gibson's latest The Stack Report is a detailed dive into business logic and how to handle it in Django. This is a perennial topic, but he comes at it with decades of experience and wisdom.
djust 0.4.0 - The Developer Experience Release
djust 0.4.0 is about developer experience - making everyday tasks faster, safer, and more intuitive. 30+ new features, critical bug fixes, and a security hardening pass that eliminated every known vulnerability.
Why aren't we uv yet?
A decent chunk of new Python repos already use uv. Coding agents still overwhelmingly recommend pip and requirements.txt, while many users prefer uv.
Events
Are You Attending PyCon, or Orbiting It?
PSF Board Member Georgi Ker makes a personal case for booking hotels via the official PyCon US website before April 24th.
Design Articles
Under the hood of MDN's new frontend
From 2-min dev server starts to 2s. They rewrote MDN's entire frontend, ditching the React SPA for Lit web components, server components, and Rspack. The result: less JS shipped, scoped CSS, and a build pipeline that just works.
Videos
Debunking Django Myths - Sarah Boyce at PyTV
Django Fellow Sarah Boyce gave a talk recently at PyTV titled, "Django Has a Marketing Problem: Debunking the Myths That Won't Die." It is a fantastic overview of what Django does well and what it can improve.
Incremental Typing in Django - Carlton Gibson
Former Django Fellow and current Django Chat podcast host Carlton Gibson, recently gave a talk titled, "Static Islands, Dynamic Sea: Some Thoughts on Incremental Typing." In it he talks about why Python's dynamic nature is a feature, not a bug, and demonstrates Mantle - a library of utilities for typing around Django's liquid core.
Sponsored Link 2
Annual PyCharm Promo - 30% off, all money goes to Django
The annual PyCharm + Django promotion is live until May 1st. This is the single biggest fundraiser for Django and has raised over $350,000 since 2016.
Podcasts
Django Tasks - Jake Howard
Episode 200(!) features Jake Howard, a Senior Systems Engineer at Torchbox and the author of DEP 14, django.tasks, the highlight feature in Django 6.0. We discuss his work on the Django security team, work with Wagtail, AI dabblings, and more.
Django Job Board
Python Developer at Open Data Services
Remote UK role building Python data systems for social-impact projects, offering ~Β£48k plus profit share in a collaborative worker co-op.
Projects
yassi/dj-signals-panel
Display registered Django signals and receivers, showing what fires and where.
dvf/opinionated-django
An opinionated Django project with Repository pattern, Pydantic DTOs, svcs DI, and Stripe-style ULID IDs
This RSS feed is published on https://django-news.com/. You can also subscribe via email.
17 Apr 2026 3:00pm GMT
Djangocon EU: auto-prefetching with model field fetch modes in Django 6.1 - Jacob Walls
(One of my summaries of the 2026 Djangocon EU in Athens).
There's an example to experiment with here: https://dryorm.xterm.info/fetch-modes-simple
Timeline: it will be included in Django 6.1 in August.
The motivation is the N+1 query problem:
books = Book.objects.all()
for book in books:
    print(book.author.name)
    # This does a fresh query for author every time.
You can solve it with select_related(relation_names) or prefetch_related(relation_names). The first does an inner join. The second does two queries.
But: you might miss a relation. You might specify too many relations, fetching data you don't need. Or you might not know about the relation at all, because it is used in a totally different part of the codebase.
Fetch mode is intended to solve it. You can append .fetch_mode(models.FETCH_xyz) to your query:
- models.FETCH_ONE: the current behaviour, which will be the default.
- models.FETCH_PEERS: Fetch a deferred field for all instances that came from the same queryset. More or less prefetch_related in an automatic, lazy manner.
- models.FETCH_RAISE: useful for development, it will raise FieldFetchBlocked, telling you that you have a potential performance problem and might need FETCH_PEERS.
This is what happens:
books = Book.objects.all().fetch_mode(models.FETCH_PEERS)
for book in books:
    # We're iterating over the query, so the query executes and grabs all books.
    print(book.author.name)
    # We accessed a relation, so at this point the prefetch_related-like
    # mechanism is fired off and all authors linked to by the books are
    # grabbed in one single query.
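The FETCH_PEERS behaviour can be sketched in plain Python, no Django required. All names here (PeerFetcher, load_authors, the AUTHORS dict) are illustrative stand-ins, not the actual Django 6.1 implementation:

```python
class PeerFetcher:
    """Illustrative stand-in for the machinery behind FETCH_PEERS."""

    def __init__(self, loader):
        self.loader = loader  # batch loader: fills in authors for many books
        self.queries = 0

    def fetch_for_peers(self, peers):
        self.queries += 1  # one "query" for the whole batch
        self.loader(peers)


class Book:
    def __init__(self, author_id, fetcher, peers):
        self.author_id = author_id
        self._fetcher = fetcher
        self._peers = peers  # every Book from the same "queryset"
        self._author = None

    @property
    def author(self):
        if self._author is None:
            # First access triggers one batched fetch for all peers.
            self._fetcher.fetch_for_peers(self._peers)
        return self._author


AUTHORS = {1: "Ursula", 2: "Iain"}  # pretend database table


def load_authors(books):
    for book in books:
        book._author = AUTHORS[book.author_id]


fetcher = PeerFetcher(load_authors)
books = []
books.extend(Book(i % 2 + 1, fetcher, books) for i in range(4))

authors = [book.author for book in books]
print(fetcher.queries)  # 1: a single batched "query" served all four books
```

The key idea is that each deferred object keeps a reference to its peers, so the first attribute access can load data for the whole batch at once.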
You can write your own fetch modes, for instance if you only want a warning instead of raising an error.
Unrelated photo explanation: a cat I encountered in Athens on an evening stroll in the neighbourhood behind the hotel.
17 Apr 2026 4:00am GMT
04 Apr 2026
Planet Twisted
Donovan Preston: Using osascript with terminal agents on macOS
Here is a useful trick that is unreasonably effective for simple computer-use goals with modern terminal agents. On macOS, the terminal osascript command has existed since the original release of Mac OS X. All you have to do is suggest that your agent use it, and it can perform any application-control action available in any AppleScript dictionary for any Mac app. No MCP setup or tools required at all. Agents are much more adept at using raw terminal commands, especially ones that haven't changed in 30 years. Having a computer-control interface that has been stable for 30 years, with extensive examples in the Internet corpus, means modern models understand how to use these tools basically effortlessly.

macOS locks down these permissions pretty heavily nowadays, though, so you will have to grant the application-control permission to your terminal. But once you have done that, the range of possibilities for commanding applications using natural language is quite extensive.

Also, for both Safari and Chrome on Mac, you are going to want to turn on the JavaScript-over-AppleScript permission. This basically allows Claude or another agent to debug your web applications live for you as you are using them. In Chrome, go to the View menu, Developer submenu, and choose "Allow JavaScript from Apple Events". In Safari, it's under the Safari menu, Settings, Developer, "Allow JavaScript from Apple Events".

Then you can say something like "Hey Claude, would you please use osascript to navigate the front Chrome tab to Hacker News". Once you suggest using osascript in a session, it will figure out pretty quickly what it can do with it. Of course you can ask it to do casual things like open your Mail app. Then you can figure out what else works, like "please click around my web app" or "check the JavaScript console for errors".

Another very important tip for using modern agents is to practice using speech-to-text.
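The kind of one-liner the agent ends up running might look like this, wrapped here in Python. It is macOS-only, the URL is just an example, and it assumes Chrome is running with Automation permission granted to your terminal:

```python
import platform
import subprocess

# AppleScript one-liner telling Chrome to point its front tab at a URL
# (the URL is just an example).
script = (
    'tell application "Google Chrome" '
    'to set URL of active tab of front window to "https://news.ycombinator.com"'
)

if platform.system() == "Darwin":
    subprocess.run(["osascript", "-e", script], check=True)
else:
    msg = "osascript is only available on macOS"
    print(msg)
```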
I think speaking might be something like five times faster than typing. It takes a lot of time to get used to, especially after a lifetime of programming by typing, but it's a very interesting and different experience, and once you have a lot of practice it starts to feel effortless.
04 Apr 2026 1:31pm GMT
16 Mar 2026
Planet Twisted
Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control
I have been using macOS Voice Control for about three years. First it was a way to reduce pain from excessive computer use. It has been a real struggle. Decades of computer-use habits with typing and the mouse are hard to overcome! Text selection and manipulation commands work quite well in macOS-native apps, like apps written in Swift, or Safari with an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: not working at all when using "select foo" (where foo is a word in the text box to select), or off-by-one errors when moving the cursor or extending the selection. I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text selection method! This is really going to improve my speed. In the long run, I believe computer voice control in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing and the number of times it misses or misunderstands commands still really hold it back. I've been learning the macOS Voice Control command set for years now and I still reach for the keyboard and mouse way too often.
16 Mar 2026 11:04am GMT
04 Mar 2026
Planet Twisted
Glyph Lefkowitz: What Is Code Review For?
Humans Are Bad At Perceiving
Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.
We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.
Each of these has implications for the fundamental limitations of code review as an engineering practice:
-
Inattentional Blindness: you won't be able to reliably find bugs that you're not looking for.
-
Repetition Blindness: you won't be able to reliably find bugs that you are looking for, if they keep occurring.
-
Vigilance Fatigue: you won't be able to reliably find either kind of bug, if you have to keep being alert to the presence of bugs all the time.
-
and, of course, the distinct but related Alert Fatigue: you won't even be able to reliably evaluate reports of possible bugs, if there are too many false positives.
Never Send A Human To Do A Machine's Job
When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate - and, thanks to our old friend "alert fatigue" above - ideally, to also remedy that type of error. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but:
- to catch logical errors, use automated tests.
- to catch formatting errors, use autoformatters.
- to catch common mistakes, use linters.
- to catch common security problems, use a security scanner.
Don't blame reviewers for missing these things.
Code review should not be how you catch bugs.
What Is Code Review For, Then?
Code review is for three things.
First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.
You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.
Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".
Oops, Surprise, This Post Is Actually About LLMs Again
Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. An important corollary of the understanding that code review is a social activity is that LLMs are not social actors, so you cannot rely on code review to inspect their output.
My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.
When you relate to a human colleague, you will expect that:
- you can make decisions about what to focus on based on their level of experience and areas of expertise; from a late-career colleague you might be looking for bad habits held over from legacy programming languages; from an earlier-career colleague you might be focused more on logical test-coverage gaps,
- and, they will learn from repeated interactions so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it,
With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.
You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.
The LLM also can't really learn. An intuitive response to this problem is to simply continue adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem, known as "context rot", is somewhat fundamental to the nature of the technology.
Thus, code-generators must be treated more adversarially than you would a human code review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that will evaluate the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window the way a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.
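Concretely, "adding to the harness" means pinning the observed mistake as a deterministic test instead of re-explaining it to the model. All names in this sketch are hypothetical:

```python
def paginate(items, page_size):
    """Split items into pages of page_size items (hypothetical helper)."""
    return [items[i : i + page_size] for i in range(0, len(items), page_size)]


def test_paginate_exact_multiple():
    # Pins a bug once spotted in review: a trailing empty page when
    # len(items) was an exact multiple of page_size. The harness, not a
    # human reviewer, now guards against the mistake recurring.
    assert paginate([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]


test_paginate_exact_multiple()
print("ok")
```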
To Sum Up
Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.
If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure its agentic loop fails on its own, immediately, the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.
But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
04 Mar 2026 5:24am GMT
22 Jan 2026
Planet Plone - Where Developers And Integrators Write
Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, along with Docker, and, as a game changer, uv, which makes installing Python packages much faster.
With cookieplone you get a monorepo, with folders for backend, frontend, and devops. The devops folder contains scripts to set up the server and deploy to it. Our sysadmins already had some other scripts, so we needed to integrate those.
First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.
Maik Derstappen showed me copier, yet another templating tool. Our idea: create a cookieplone project, and then use copier to modify it.
What about the deployment? We are on GitLab. We host our own runners and use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This triggers a pipeline to check, test, and build. When it is merged, we bump the version using release-it.
Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.
For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.
Future improvements:
- Start the Docker containers and curl/wget the `/ok` endpoint.
- Lock files for the backend, with pip/uv.
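That first future improvement, polling a health endpoint after starting the containers, could be sketched like this. This is a hypothetical illustration, not part of the cookieplone or devops tooling; the `/ok` path comes from the talk, everything else is assumed:

```python
import time
import urllib.request

def wait_for_ok(url, timeout=60.0, interval=2.0):
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # container not accepting connections yet
        time.sleep(interval)
    return False

# Usage after `docker compose up -d` (hypothetical):
# wait_for_ok("http://localhost:8080/ok")
```

A script like this lets the pipeline fail fast when a freshly deployed container never becomes healthy.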
22 Jan 2026 9:43am GMT
Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.
There are several challenges when doing Plone migrations:
- Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
- Complex data structures. For example a Folder with a Link as default page, which pointed to some other content that had meanwhile been moved.
- Migrating Classic UI to Volto
- Also, you might be migrating from a completely different CMS to Plone.
How do we do migrations in Plone in general?
- In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
- Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.
Let's look at export/import, which has three parts:
- Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
- Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
- Load: transmogrifier, collective.exportimport, plone.exportimport.
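The extract/transform/load split above can be sketched generically. Note this is a hypothetical illustration of the shape of the transform step, not the actual collective.transmute API or its data format; the field names are invented:

```python
# Hypothetical ETL sketch: exported content items come in as dicts,
# each transform function rewrites one aspect, and the result is
# handed to the load step.

def rename_legacy_field(item):
    # Invented example: an old export used "summary" where the
    # target site expects "description".
    if "description" not in item and "summary" in item:
        item["description"] = item.pop("summary")
    return item

TRANSFORMS = [rename_legacy_field]

def run_pipeline(source_items):
    result = []
    for item in source_items:
        item = dict(item)  # never mutate the extracted data in place
        for transform in TRANSFORMS:
            item = transform(item)
        result.append(item)
    return result
```

Keeping each transform small and composable is what makes an export/import migration easy to test outside the source instance.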
Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.
collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.
Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.
Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.
collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.
22 Jan 2026 9:43am GMT
Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.
I am team lead of the Plone Admin team, and work at kitconcept.
The current state: see the keynotes, lots happening on the frontend. Good.
The current state of our IT: very troubling and daunting.
This is not a 'blame game'. But focusing on resources and people should be a first priority at this conference. We are a real volunteer organisation; nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.
The Admin team is 4 senior Plonistas as all-round admins, 2 release managers, 2 CI/CD experts, 3 former board members, everyone overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.
We are a volunteer organisation, and don't have a big company behind us that can throw money at the problems. Strength and weakness. Across all of society it is a problem that the number of volunteers is decreasing.
Root causes:
- We failed to scale down in time in our IT landscape and usage.
- We have no clear role descriptions or team descriptions, and we can't ask for a minimum effort per week or month.
- The trend is more communication channels, more platforms to join and promote yourself on, more apps to use.
Overview of what we have to keep running as the Admin team:
- Support the main development process: GitHub, CI/CD, Jenkins main and runners, dist.plone.org.
- Main communication and documentation: plone.org, docs.plone.org, training.plone.org, conference and country sites, Matomo.
- Community office automation: Google Docs, Google Workspace, Quaive, Signal, Slack.
- Broader: Discourse and Discord
The first two are really needed; with the second we already have some problems.
Some services are self-hosted, but there are also a lot of SaaS services/platforms. In all, it is quite a bit.
The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.
There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.
On GitHub we have a sponsored OSS plan, so we get extra features for free, but it is not enough by far. User management: it is hard to get people out, and you can't contact your members directly. E-mail addresses have been removed, for privacy. Features get added on GitHub, and there is no complete changelog.
Challenge on GitHub: we have public repositories, but we also have our deployments in there. The only really secure option would be private repositories; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Audit logs are available for only 6 months. A simple question like "who has been active in the last 2 years?" cannot be answered.
Some actionable items on GitHub:
- We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
- Clean up users; use a Contributors team and a Developers team.
- Active members: check who has contributed the last years.
- There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
- More fine grained teams to control repository access.
- Use of GitHub Discussions for some central communication of changes.
- Use project management better.
- The elephant in the room that we have had practice on this year, and ongoing: the Collective organisation. This was a free-for-all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock some things down there.
- Keep deployments and all secrets out of GitHub, so no secrets can be stolen.
Google Workspace:
- We are dependent on this.
- No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
- Spam and moderation issues
- We could move to Google Docs for all kinds of things and use Google Workspace shared drives for everything. But the Drive UI is a mess, so documents can end up in your personal account without you realizing it.
User management:
- We need separate standalone user management, but implementation is not clear.
- We cannot contact our members one on one.
Oh yes, Plone websites:
- upgrade plone.org
- self-preservation: I know what needs to be done, and can do it, but I have no time, as I am focusing on the previous points instead.
22 Jan 2026 9:43am GMT