17 May 2026
Planet Python
Artem Golubin: PyPI packages are increasing rapidly
PyPI is the main repository for Python packages. One thing I've noticed recently is how quickly the number of packages published per week is growing.
Let's look at published counts of new package versions per week:

There are some dips in the data, but that's because of how the data was collected. We can see a clear increase in the number of published packages, especially in the last few months.
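The post does not show how the counts were collected. As an illustrative sketch only (this feed-counting approach is my assumption, not necessarily the author's method), release activity can be sampled by counting entries in an RSS feed such as PyPI's updates feed; note the live feed only lists the most recent releases, so real weekly counts need a fuller source such as the public PyPI dataset on BigQuery.

```python
# Illustrative sketch: count release <item> entries in an RSS document,
# e.g. fetched from https://pypi.org/rss/updates.xml (assumed data source).
import xml.etree.ElementTree as ET

def release_count(feed_xml: str) -> int:
    """Count release <item> entries in an RSS feed document."""
    root = ET.fromstring(feed_xml)
    return len(root.findall(".//item"))

sample = """<rss><channel>
  <item><title>pkg-a 1.0</title></item>
  <item><title>pkg-b 2.3</title></item>
</channel></rss>"""
print(release_count(sample))  # -> 2
```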
Because of AI, the number of packages published per week has increased by 30% since 2025.
I'm working on hexora, a library that detects malicious Python code in packages.[......]
17 May 2026 1:37pm GMT
16 May 2026
Planet Python
PyCon: Welcome Back, NVIDIA: Visionary Sponsor of PyCon US 2026
NVIDIA is excited to once again support PyCon US 2026 as a Visionary Sponsor, and to sponsor the Future of AI with Python Conference Track.
Python is a "first-class" language for NVIDIA CUDA, and NVIDIA is committed to bringing our technology to Python developers in close alignment with C++ upon new releases of our hardware. We're also happy to announce the general availability of CUDA Python 1.0.
NVIDIA's commitment to Python goes well beyond just our own tech stack. NVIDIA's Python engineers contribute across a broad swath of the Python ecosystem, from the core interpreter itself, to packaging and PyPI, to the Python community at large. NVIDIA is inspired by the energy of, and privileged to collaborate with, people across the open source Python community.
Since PyCon last year, NVIDIA Pythonistas - in collaboration with many others in the Python community - have made great progress on the evolution of various packaging standards, including working with community partners on the implementation of wheel variants and the establishment of a Packaging Council to better govern the evolution of packaging standards and PyPI. NVIDIA Python engineers are also engaged in implementation, testing, and porting work for the free-threaded build of the interpreter. NVIDIA Python engineers are driving the early exploratory work for adopting Rust for CPython, work on Python performance benchmarking, and are actively involved in many enhancements for Python 3.14 and 3.15, including providing built-in Zstandard support in Python 3.14.
At NVIDIA, we are excited to work with our partners and the open source Python community to help bring the best developer experience for users of high performance computing and AI. Come see NVIDIA at the Anaconda and PyTorch booths, and at the AI Track.
Barry Warsaw
May 2026
Principal System Software Engineer, NVIDIA
Python Core Developer since 1994
Python Steering Council member in 2026
16 May 2026 2:30pm GMT
15 May 2026
Planet Python
Kay Hayen: Nuitka Release 4.1
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, "download now".
This release adds many new features and corrections with a focus on async code compatibility, missing generics features, and Python 3.14 compatibility and Python compilation scalability yet again.
Bug Fixes
- Python 3.14: Fix, decorators were breaking when disabling deferred annotations. (Fixed in 4.0.1 already.)
- Fix, nested loops could have wrong traces, leading to mis-optimization. (Fixed in 4.0.1 already.)
- Plugins: Fix, the run-time check of package configuration was incorrect. (Fixed in 4.0.1 already.)
- Compatibility: Fix, __builtins__ lacked necessary compatibility in compiled functions. (Fixed in 4.0.1 already.)
- Distutils: Fix, incorrect UTF-8 decoding was used for TOML input file parsing. (Fixed in 4.0.1 already.)
- Fix, multiple hard value assignments could cause compile-time crashes. (Fixed in 4.0.1 already.)
- Fix, string concatenation was not properly annotating exception exits. (Fixed in 4.0.2 already.)
- Windows: Fix, --verbose-output and --show-modules-output did not work with forward slashes. (Fixed in 4.0.2 already.)
- Python 3.14: Fix, there were various compatibility issues, including dictionary watchers and inline values. (Fixed in 4.0.2 already.)
- Python 3.14: Fix, stack pointer initialization to localsplus was incorrect, which could cause garbage collection issues. (Fixed in 4.0.2 already.)
- Python 3.12+: Fix, generic type variable scoping in classes was incorrect. (Fixed in 4.0.2 already.)
- Python 3.12+: Fix, there were various issues with function generics. (Fixed in 4.0.2 already.)
- Python 3.8+: Fix, names in named expressions were not mangled. (Fixed in 4.0.2 already.)
- Plugins: Fix, module checksums were not robust against the quoting style of module-name entries in YAML configurations. (Fixed in 4.0.2 already.)
- Plugins: Fix, doing imports in queried expressions caused corruption. (Fixed in 4.0.2 already.)
- UI: Fix, support for uv_build in the --project option was broken. (Fixed in 4.0.2 already.)
- Compatibility: Fix, names assigned in assignment expressions were not mangled. (Fixed in 4.0.2 already.)
- Python 3.12+: Fix, there were still various issues with function generics. (Fixed in 4.0.3 already.)
- Clang: Fix, debug mode was disabled for clang generally, but only ClangCL and macOS Clang didn't want it. (Fixed in 4.0.3 already.)
- Zig: Fix, --windows-console-mode=attach|disable was not working when using Zig. (Fixed in 4.0.3 already.)
- macOS: Fix, yet another way self dependencies can look like needed to have support added. (Fixed in 4.0.3 already.)
- Python 3.12+: Fix, generic types in classes had bugs with multiple type variables. (Fixed in 4.0.3 already.)
- Scons: Fix, repeated builds were not producing binary-identical results. (Fixed in 4.0.3 already.)
- Scons: Fix, compiling with newer Python versions did not fall back to Zig when the developer prompt MSVC was unusable, and error reporting could crash. (Fixed in 4.0.4 already.)
- Zig: Fix, the workaround for Windows console mode attach or disable was incorrectly applied on non-Windows platforms. (Fixed in 4.0.4 already.)
- Standalone: Fix, linking with Python Build Standalone failed because libHacl_Hash_SHA2 was not filtered out unconditionally. (Fixed in 4.0.4 already.)
- Python 3.6+: Fix, exceptions like CancelledError thrown into an async generator awaiting an inner awaitable could be swallowed, causing crashes. (Fixed in 4.0.4 already.)
- Fix, not all ordered set modules accepted generators for update. (Fixed in 4.0.5 already.)
- Plugins: Disabled the warning about rebuilding the pytokens extension module. (Fixed in 4.0.5 already.)
- Standalone: Filtered libHacl_Hash_SHA2 from link libs unconditionally. (Fixed in 4.0.5 already.)
- Debugging: Disabled unusable unicode consistency checks for Python versions 3.4 to 3.6. (Fixed in 4.0.5 already.)
- Python 3.12+: Avoided cloning call nodes on class level, which caused issues with generic functions in combination with decorators. (Added in 4.0.5 already.)
- Python 3.12+: Added support for generic type variables in async def functions. (Added in 4.0.5 already.)
- UI: Fix, flushing outputs for prompts was not working in all cases when progress bars were enabled. (Fixed in 4.0.6 already.)
- UI: Fix, unused variable warnings were missing at C compile time when using zig as a C compiler. (Fixed in 4.0.6 already.)
- Scons: Fix, forced stdout and stderr paths as a feature was broken. (Fixed in 4.0.6 already.)
- Fix, replacing a branch did not accurately track shared active variables, causing optimization crashes. (Fixed in 4.0.7 already.)
- macOS: Fix, removing extended attributes failed because files need to be made writable first. (Fixed in 4.0.7 already.)
- Fix, dict pop and setdefault usages with := rewrites lacked exception-exit annotations for un-hashable keys. (Fixed in 4.0.8 already.)
- Python 3.13: Fix, the __parameters__ attribute of generic classes was not working. (Fixed in 4.0.8 already.)
- Python 3.11+: Fix, starred arguments were not working as type variables. (Fixed in 4.0.8 already.)
- Python2: Fix, FileNotFoundError compatibility fallback handling was not working properly. (Fixed in 4.0.8 already.)
- Compatibility: Fix, the loop ownership check in value traces was missing, causing issues with nested loops.
- Windows: Improved --windows-console-mode=attach to properly handle console handles, enabling cases like os.system to work nicely.
- Python2: Fix, there was a compatibility issue where providing default values to the mkdtemp function was failing.
- Windows: Fix, there were spurious issues with C23 embedding in 32-bit MinGW64; switched to coff_obj resource mode for it as well.
- Plugins: Fix, the post-import-code execution could fail because the triggering sub-package was not yet available in sys.modules.
- UI: Fix, listing package DLLs with --list-package-dlls was broken due to recent plugin lifecycle changes.
- UI: Fix, --list-package-exe was not working properly on non-Windows platforms, failing to detect executable files correctly.
- UI: Handled paths starting with {PROGRAM_DIR} the same as relative paths when parsing the --onefile-tempdir-spec option.
- Plugins: Followed multiprocessing forkserver changes for newer Python versions.
- Python 3.12+: Fix, generic class type parameter handling was incorrect.
- Python 3.12: Fix, deferred evaluation of type aliases was failing.
- Python 3.12+: Aligned the sum built-in's float summation with CPython's compensated sum for better accuracy.
- Python 3.10+: Fix, uncompiled coroutine throw() return handling was incorrect; completed coroutine results are now restored via StopIteration.value rather than exposed as ordinary return values to the outer await chain.
- Python 3.13+: Fix, uncompiled coroutine cancel()/await suspension handling was incorrect; improved to ensure integration compatibility.
- macOS: Made finding create-dmg more robust by also checking the Homebrew path for Intel and searching PATH properly.
- Compatibility: Fix, class frames were not exposing frame locals.
- UI: Detected static-libpython problems, which affected some forms of Anaconda.
- Distutils: Rejected --project mixed with --main arguments, as that is not useful.
- macOS: Fix, zig from PATH or from ziglang was not being used.
- Distutils: Fix, the wrong module-root config value was being checked for the uv build backend.
- macOS: Fix, was attempting to change removed (rejected) DLLs, which of course failed and errored out.
- Python 3.14: Fix, tuple reuse was not fully compatible, potentially causing crashes due to outdated hash caches.
- Fix, locating fake modules was still attempted when they were imported by other code, which could conflict with existing modules.
- Python 3.5+: Fix, failed to send uncompiled coroutines the sent-in value in yield from.
- Fix, older gcc compilers lacking newer intrinsic methods had compilation issues that needed to be addressed.
- Standalone: Fix, multiphase extension modules with post-load code were not working properly.
- Fix, avoided using the non-inline copy of pkg_resources with the inline copy of Jinja2; these could mismatch and cause errors.
- Fix, loops could make releasing of previous values very unclear, causing optimization errors.
- Fix, incbin resource mode was not working with the old gcc C++ fallback.
- Python 3.4 to 3.6: Fix, bytecode demotion was not working properly for these versions, and bytecode-only files were not working either.
- Plugins: Added a check for the broken patchelf versions 0.10 and 0.11 to prevent breaking Qt plugins.
- Android: Allowed patchelf version 0.18 on Android.
- Windows: Fix, the header path for self uninstalled Python was not detected correctly.
- Release: Fix, the inclusion of the pkg_resources inline copy for Python 2 in source distributions was missing.
- UI: Detected the OBS versions of SUSE Linux better.
- Suse: Allowed using patchelf 0.18.0 there too.
- Python 3.11: Fix, package and module dicts were not aligned closely enough to avoid a CPython bug.
- Fix, unbound compiled methods could crash when called without an object passed.
- Standalone: Fix, multiphase extension modules with post-load code. (Fixed in 4.0.8 already.)
- Onefile: Fix, while waiting for the child process, it may already have terminated.
- macOS: Removed existing absolute rpaths for Homebrew and MacPorts.
- Python 3.14: Avoided a warning in CPython headers.
- Python 3.14: Followed allocator changes more closely.
- Compatibility: Avoided using pkg_resources for Jinja2 template location during loading.
- No-GIL: Applied some bug fixes to get basic things to work.
Package Support
- Standalone: Add support for newer paddle versions. (Added in 4.0.1 already.)
- Standalone: Add a workaround for refcount checks of pandas. (Fixed in 4.0.1 already.)
- Standalone: Add support for newer h5py versions. (Added in 4.0.2 already.)
- Standalone: Add support for the newer scipy package. (Added in 4.0.2 already.)
- Plugins: Revert accidental os.getenv over os.environ.get changes in anti-bloat configurations that stopped them from working. Affected packages are networkx, persistent, and tensorflow. (Fixed in 4.0.5 already.)
- Standalone: Added missing DLLs for openvino. (Added in 4.0.7 already.)
- Enhanced the package configuration YAML schema by adding the relative_to parameter for from_filenames DLL specifications, avoiding error-prone purely relative paths.
- Standalone: Fix, flet_desktop app assets were missing; now preserving the packaged runtime and sidecar DLLs.
- Standalone: Added support for the tyro package.
- Standalone: Added data files for the perfetto package.
- Standalone: Added support for anyio process forking.
- Standalone: Added support for the plotly.graph package.
- Anaconda: Fix, dependencies for the numpy conda package on Windows were incorrect.
- Plugins: Enhanced the auto-icon hack in PySide6 to use compatible class names.
- Standalone: Fix, Qt libraries were duplicated with PySide6 WebEngine framework support on macOS.
- Plugins: Fix, automatic detection of mypyc runtime dependencies was including all top-level modules of the containing package by accident. (Fixed in 4.0.5 already.)
- Anaconda: Fix, the delvewheel plugin was not working with Python 3.8+. This enhances compatibility with installed PyPI packages that use it for their DLLs. (Fixed in 4.0.6 already.)
- Plugins: Fix, our protection workaround could confuse methods used with PySide6.
New Features
- UI: Added the --recommended-python-version option to display recommended Python versions for supported, working, or commercial usage.
- UI: Add a message to inform users about Nuitka[onefile] if compression is not installed. (Added in 4.0.1 already.)
- UI: Add support for uv_build in the --project option. (Added in 4.0.1 already.)
- Onefile: Allow extra includes as well. (Added in 4.0.2 already.)
- UI: Add the nuitka-project-set feature to define project variables, checking for collisions with reserved runtime variables. (Added in 4.0.2 already.)
- Scons: Added a new option to select --reproducible builds or not. (Added in 4.0.6 already.)
- Python 3.10+: Added support for importlib.metadata.package_distributions(). (Added in 4.0.8 already.)
- Plugins: Added support for the multiprocessing forkserver context. (Added in 4.0.8 already; for 4.1, Python 3.6 and earlier as well as 3.14 support were added too.)
- Reports: Added structured resource usage (rusage) performance information to compilation reports.
- Reports: Included individual module-level C compiler caching (ccache/clcache) statistics in compilation reports.
- Added support for detecting and correctly resolving the Python prefix for the "PyEnv on Homebrew" Python flavor.
- macOS: Added support for rusage information for Scons.
- UI: Added the __compiled__.extension_filename attribute to give the real filename of the containing extension module.
- Windows: Added support for --clang on ARM. (Added in 4.0.8 already.)
- Windows: Added support for resource names that are not just integers, important when we copy them from template files.
- MacPorts: Added basic support for this Python flavor. More work will be needed to get it fully working though.
Optimization
- Avoid including importlib._bootstrap and importlib._bootstrap_external. (Added in 4.0.1 already.)
- Linux: Cached the syscall used for timekeeping during compilation to avoid loading libc for each trace. (Added in 4.0.8 already.)
- UI: Output a warning for modules that remain unfinished after the third optimization pass.
- Added an extra micro-pass trigger when new variables are introduced or variable usage changes severely, ensuring optimizations are fully propagated while avoiding unnecessary extra full passes.
- Provided scripts to compile Python statically with PGO tailored for Nuitka on Linux, Windows, and macOS.
- Added support for running the Data Composer tool from a compiled Nuitka binary without spawning an uncompiled Python process.
- Enhanced the usage of vectorcall for PyCFunction objects by directly checking for its presence instead of relying purely on flags, allowing more frequent use of this faster execution path.
- Cached frequently used declarations for top-level variables to speed up C code generation.
- Sped up trace collection merging by avoiding unnecessary set creation and using a set instead of a list for escaped traces.
- Optimized plugin hook execution by tracking overloaded methods, and added an option to show plugin usage statistics.
- Improved performance of module location by avoiding unnecessary module name reconstruction and redundant filesystem checks for pre-loaded packages.
- Improved the caching of distribution name lookups to effectively avoid repeated IO operations across all package types.
- Plugins: Cached callback plugin dispatch for onFunctionBodyParsing and onClassBodyParsing to skip argument computation when no plugin overrides them.
- Python 3.13: Handled sub-packages of pathlib as hard modules.
- Handled hard attributes through merge traces as well.
- Made constant blobs more compact by avoiding repeated identifiers and unnecessary fields.
- Enhanced Python compilation scripts further. (Fixed in 4.0.8 already.)
- Recognized late incomplete variables better. (Fixed in 4.0.8 already.)
- Made constant blobs more compact. (Fixed in 4.0.8 already.)
- Optimized calls with only constant keywords and variable posargs too.
Anti-Bloat
- Fix, memory bloat occurred when C compiling sqlalchemy. (Fixed in 4.0.2 already.)
- Avoid using pydoc in PySimpleGUI. (Added in 4.0.2 already.)
- Avoided using doctest from zodbpickle. (Added in 4.0.5 already.)
- Avoided inclusion of cython when using pyav. (Added in 4.0.7 already.)
- Avoided including typing_extensions when using numpy. (Added in 4.0.7 already.)
Organizational
- UI: Relocated the warning about the available source code of extension modules to be evaluated at a more appropriate time.
- Debian: Removed the recommendation for the libfuse2 package, as it is no longer useful.
- Debian: Used platformdirs instead of appdirs.
- Debugging: Removed the Python 3.11+ restriction for clang-format, as it is available everywhere, even Python 2.7, and we still want nicely formatted code when we read things. (Added in 4.0.6 already.)
- Removed the no longer useful inline copy of wax_off. We have our own stubs generator project.
- Release: Added a missing package to the CI container for building Nuitka Debian packages.
- Developer: Updated AI instructions for creating Minimal Reproducible Examples (MREs) to skip unneeded C compilation.
- Debugging: Added an internal function for checking if a string is a valid Python identifier.
- AI: Added a task in Visual Studio Code to export the currently selected Python interpreter path to a file, making it available as "python" and "pip" matching the selected interpreter. This makes it easier to use a specific version with no instructions needed.
- AI: Updated the rules to instruct AI to only generate useful comments that add context not present in the code.
- Containers: Added template rendering support for Jinja2 (.j2) container files in our internal Podman tools.
- Projects: Clarified the current status and rationale of Python 2.6 support in the developer manual.
- Debugging: Added the experimental flag --experimental=ignore-extra-micro-pass to allow ignoring extra micro-pass detection.
- Visual Code: Added integration scripts for bash and zsh autocompletion of Nuitka CLI options. These are now also integrated into Visual Studio Code terminal profiles and the Debian package.
- RPM: Included the Python compile script for Linux.
- RPM: Removed the requirement for distutils in the spec.
Tests
- Install only necessary build tools for test cases.
- Avoided spurious failures in reference counting tests due to Python internal caching differences. (Fixed in 4.0.3 already.)
- Fix, the parsing of the compilation report for reflected tests was incorrect.
- Python 3.14: Ignored a syntax error message change.
- Python 3.14: Added test execution support options to the main test runner to use this version as well.
- Fix, the runner binary path was mishandled for the third pass of reflected compilations.
- Removed the usage of obsolete plugins in reflected compilation tests.
- Debugging: Prevented boolean testing of namedtuples to avoid unexpected bugs.
- Added the Test suffix to syntax test files and disabled "python" mode and spell checking for them to resolve issues reported in IDEs.
- Fix, newline handling in diff outputs from the output comparison tool was incorrect.
- Covered post-import-code functionality with a new subpackage test case.
- Prevented the program test suite from running an unnecessary variant to save execution time.
- macOS: Ignored differences from GUI framework error traces in headless runs in output comparisons.
- The reflected test for Nuitka, where it compiles itself and compares its operation, has been restored to a functional state.
- Used the new method to clear internal caches, if available, for reference counts.
- Disabled running the nested loops test with Python 2.6.
- Containers: Detected Python 2 defaulting containers in Podman tooling.
Cleanups
- UI: Fix, there was a double space in the Windows Runtime DLLs inclusion message. (Fixed in 4.0.1 already.)
- Onefile: Separated files and defines for extra includes for onefile boot and Python build.
- Scons: Provided nicer errors in case of "unset" variables being used, so we can tell which one it was.
- Refactored the process execution results to correctly utilize our namedtuples variant, which makes it easier to understand what code does with the results.
- Quality: Enabled automatic conversion of em-dashes and en-dashes in code comments in the autoformat tool. AI won't stop producing them, they can cause SyntaxError for older Python versions, and unnecessary use of UTF-8 is not welcome either.
- Ensured that cloned outline nodes are assigned their correct names immediately upon creation, which avoids inconsistencies.
- Quality: Updated to the latest versions of black and adopted a faster isort execution by caching results.
- Quality: Modified the PyLint wrapper to exit gracefully instead of raising an error when no matching files require checking.
- Quality: Avoided checking YAML package configuration files twice, since autoformat already handles them.
- Quality: Ensured that YAML package configuration checks output the original filename instead of the temporary one when a failure occurs.
- Quality: Prevented pushing of tags from triggering git pre-push quality checks.
- Quality: Silenced the output of optipng and jpegoptim during image optimization auto-formatting.
- Visual Code: Added the generated Python alias path file to the ignore list.
- Quality: Enabled auto-formatting for the Nuitka devcontainer configuration file.
- Watch: Avoided absolute paths in compilation to make reports more comparable across machines.
- Quality: Changed mdformat checks to run only once and silently.
- Scons: Disabled format security errors in debug mode and moved Python-related warning disables into common build setup code.
- Quality: Updated to the latest deepdiff version.
- Scons: Avoided MSVC telemetry since it can produce outputs that break CI.
- Debugging: Enhanced the non-deployment handler for importing excluded modules.
- Split import module finding functionality into more pieces for enhanced readability.
- Debugging: Added more assertions for constants loading and checking.
- macOS: Dropped the universal target arch.
- Debugging: Added more traces for deep hash verification.
Summary
This release builds on the scalability improvements established in 4.0, with enhanced Python 3.14 support, expanded package compatibility, and significant optimization work.
The --project option seems usable now.
Python 3.14 support remains experimental; it only barely missed the cut and will probably get there in hotfixes. Some of the corrections came in so shortly before the release that it was just not possible to feel confident declaring it fully supported yet.
15 May 2026 10:00pm GMT
Django community aggregator: Community blog posts
Issue 337: Django Developers Survey 2026
Will and Jeff are at PyCon US in Long Beach, California this week. Drop by the Django Software Foundation booth or the JetBrains booth and say hello.
News
Django Developers Survey 2026
The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey. Help us better understand how Django is being used around the world and guide future technical and community decisions.
DSF member of the month - Bhuvnesh Sharma
Bhuvnesh is a Django contributor since 2022 and a Google Summer of Code (GSoC) participant in 2023 for Django. He is now a mentor and an admin organizer for GSoC for the Django organization, as well as the founder of Django Events Foundation India (DEFI) and DjangoDay India conference.
Announcing the Google Summer of Code 2026 contributors for Django
Google Summer of Code 2026 contributors have been announced for Django, listing the developers who will be working on projects as part of the program. If you are following Django's next wave of community work, this is the roll-up of who's joining and what to watch for.
Releases
Python 3.14.5 is out!
Python 3.14.5 is now available, bringing the latest point release in the Python 3.14 line. If you maintain Django apps, use the update as your prompt to verify dependencies and run your test suite against 3.14.5 before rolling forward.
Updates to Django
Today, "Updates to Django" is presented by Johanan Oppong Amoateng from Djangonaut Space!
Last week we had 22 pull requests merged into Django by 13 different contributors - including 4 first-time contributors! Congratulations to Denny Biasiolli, Milad Zarour, MANAS MADESHIYA and HΓ©ctor Castillo for having their first commits merged into Django - welcome on board!
This week's Django highlights:
- Allowed max redirect URL length to be set on HttpResponseRedirect. (#36767)
- Added support for object-based form media stylesheet assets. (#37085)
- Deprecated SHA-1 default for salted_hmac() and base64_hmac() algorithm. (#37078)
Python Software Foundation
Python Software Foundation News: Announcing PSF Community Service Award Recipients!
Python Software Foundation has announced the recipients of its PSF Community Service Award. The update highlights people recognized for their contributions to the Python community.
Python Software Foundation News: Strategic Planning at the PSF
Python Software Foundation News covers the PSF's strategic planning efforts and the direction they are working toward. Expect a focus on how the foundation plans its priorities and activities moving forward.
Wagtail CMS News
Results of the 2026 Wagtail DX with AI survey
The 2026 Wagtail DX survey reports where teams are applying AI and what they want next from the platform. Use the findings to align your own Wagtail and AI experimentation with the issues practitioners are actually raising.
Our four contributors for Google Summer of Code 2026
Google Summer of Code 2026 is welcoming four contributors, highlighting the people behind the upcoming work. If you're tracking Django ecosystem activity, this is a quick way to see who's starting and what to watch for next.
Sponsored Link
Middleware, but for AI agents
Django middleware composes request handlers. Harnesses do the same for AI agents - Claude Code, Codex, Gemini in one coordinated system. Learn what a harness actually is, why it's a new primitive, and how to engineer one that holds in production. Apache 2.0, open source.

Articles
How to have a great first PyCon (updated for 2026)
Timeless advice from Trey Hunner on how to make the most out of PyCon US this week or any other technical conference.
Using Django Tasks in production Β· Better Simple
Production-ready Django task setups: what to change, what to watch, and how to keep background jobs reliable once you leave local dev. Useful guidance for deploying and operating task workers with fewer surprises.
Dealing with Dead Links (404s): 2026 Edition | Will Vincent
A practical guide to handling dead links in Django, focusing on what to do when a URL no longer exists and how to respond with clean, user-friendly 404 behavior. Expect guidance on keeping routing and error handling tidy as your site evolves.
Podcasts
Django Chat #203: Deploy on Day One - Calvin Hendryx-Parker
Calvin is the co-founder and CTO of the consultancy SixFeetUp. We discuss developer experience from day one, Kubernetes as a feature, real-world usage of AI and agentic tooling, typing in Python, the junior developer pipeline problem, and more. Also available in video format on YouTube.
Django Job Board
Founding Engineer at MyDataValue
Junior Software Developer (Apprentice) at UCS Assist
PyPI Sustainability Engineer at Python Software Foundation
Projects
abu-rayhan-alif/djangoSecurityHunter
A security and performance inspector for Django & DRF. Features static analysis, config checks, N+1 query detection, and SARIF support for GitHub Code Scanning.
janraasch/dsd-vps-kamal
A Django Simple Deploy plugin for configuring & automating deployments of your Django project to any VPS using Kamal.
15 May 2026 2:00pm GMT
13 May 2026
Django community aggregator: Community blog posts
Deploy on Day One - Calvin Hendryx-Parker
Links
- SixFeetUp Careers
- getscaf, copier, tilt
- A CTOs Guide to AI Coding Assistants
- kind, nix, spec-kit
- Figma make
Projects
Books
- London Review of Books
- Big Panda & Tiny Dragon by James Norbury
- Universal Principles of Typography by Elliot Jay Stocks
YouTube
Sponsor
This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.
See what's possible at https://sixfeetup.com/.
13 May 2026 3:00pm GMT
11 May 2026
Django community aggregator: Community blog posts
Improving First Byte and Contentful Paint on a Django Website

Recently I have been experimenting with HTTP streaming and realized how it can improve page performance. If you come from the PHP world, you might know the function flush(). It immediately sends whatever has been echoed to the buffer to the visitor, without waiting for the full page to be rendered on the server side. That lets the browser start rendering the website before the whole document has been generated and transferred. The usual Django HttpResponse, on the other hand, renders the whole HTML document on the server first and only then sends it to the visitor, so the initial HTML document rendering is always the bottleneck for the full page load. Here comes StreamingHttpResponse, which can be used to mimic what flush() does in PHP.
HttpResponse vs. StreamingHttpResponse in Action
When using a normal HttpResponse, the HTML document is first rendered on the server side, then sent to the browser, then static files are downloaded in parallel if possible, and lastly rendering in the browser happens.

When you use StreamingHttpResponse, you can send the <head> and the content above the fold as the first part of the document, so that static files can be located and start downloading while the rest of the HTML document is being sent in parts. The first paint of the document would happen just after the CSS file is downloaded, and the rest of the HTML document would be drawn at a later point.
Generic HTML Streaming View
Here is a generic HTMLStreamingView that expects a list of template files, get_document_context_data() for the global context, and get_template_context_data() for the template-specific context:
from django.http.response import StreamingHttpResponse
from django.conf import settings
from django.template.loader import render_to_string
from django.views.generic.base import View


class HTMLStreamingView(View):
    # templates for different parts of the document
    template_names = []
    extra_context = None

    def get(self, request, *args, **kwargs):
        # Capture the nonce before StreamingHttpResponse is returned.
        # CSP middleware writes the nonce into the response header during
        # process_response, then replaces request.csp_nonce with
        # an error-raising lazy object. generate() restores the plain value
        # so templates can access it during streaming.
        self._csp_nonce = (
            str(request.csp_nonce)
            if hasattr(request, "csp_nonce")
            else None
        )
        context = self.get_document_context_data(**kwargs)
        return StreamingHttpResponse(
            self.generate(context),
            content_type="text/html",
        )

    def generate(self, context):
        if self._csp_nonce is not None:
            self.request.csp_nonce = self._csp_nonce
        for template_name in self.template_names:
            template_context = {
                **context,
                **self.get_template_context_data(template_name),
            }
            yield render_to_string(
                template_name,
                template_context,
                request=self.request,
            )

    def get_document_context_data(self, **kwargs):
        kwargs.setdefault("view", self)
        if self.extra_context is not None:
            kwargs.update(self.extra_context)
        return kwargs

    def get_template_context_data(self, template_name, **kwargs):
        return {}
Use Case with the Strategic Prioritizer "1st things 1st"
The start page of the decision support system and strategic prioritizer 1st things 1st has been implemented as a multi-section landing page. The cookie consent widget only showed up after the whole page had rendered, resulting in a delay of a few seconds.
This is how I used HTMLStreamingView to reorganize the page into parts:
class StartPageView(HTMLStreamingView):
    template_names = [
        "startpage_index_top.html",
        "startpage/includes/description.html",
        "startpage/includes/tutorial.html",
        "startpage/includes/benefits.html",
        "startpage/includes/social_proof.html",
        "startpage/includes/testimonials.html",
        "startpage/includes/about_us.html",
        "startpage/includes/questions_and_answers.html",
        "startpage/includes/pricing.html",
        "startpage/includes/cause.html",
        "startpage/includes/call_to_action.html",
        "startpage/includes/footer.html",
        "startpage_index_bottom.html",
    ]

    def get_template_context_data(self, template_name, *args, **kwargs):
        if template_name == "startpage_index_top.html":
            return {
                "structured_data": settings.JSON_LD_STRUCTURED_DATA,
            }
        if template_name == "startpage/includes/social_proof.html":
            from django.contrib.auth import get_user_model

            User = get_user_model()
            return {
                "active_user_count": User.objects.filter(is_active=True).count(),
            }
        ...
        return super().get_template_context_data(template_name, **kwargs)
To transform a normal Django view into an HTTP streaming view, I cut the base.html template into two pieces:
- everything before {% block content %} as base_top.html - the head and content above the fold.
- everything after {% endblock content %} as base_bottom.html - the closing HTML tags and the footer.
For example, here's base_top.html:
<!DOCTYPE html>
{% load static %}
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>{% block title %}1st things 1st{% endblock %}</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="{% static 'css/styles.css' %}" />
    {% block extra_head %}{% endblock %}
</head>
<body>
    {% block top_navigation %}
    <nav>
        <a href="/">Logo</a>
    </nav>
    {% endblock %}
    <main id="main_content">
        {% block content %}{% endblock %}
        {% include "startpage/includes/extra_js.html" %}
And here is base_bottom.html:
{% block content %}{% endblock %}
    </main>
    <footer>
        ...
    </footer>
</body>
</html>
I moved the JS from base_bottom.html to the body section of base_top.html, where it will start downloading immediately after the content above the fold is shown. I did that to reduce the delay for the cookie consent widget.
Then I prepared the templates for all parts of the start page:
- startpage_index_top.html extends base_top.html.
- content templates provide the HTML directly without extending anything.
- startpage_index_bottom.html extends base_bottom.html.
The Optimization Results
I used the Lighthouse plugin to measure performance for the start page on an emulated slow mobile network, before and after applying StreamingHttpResponse.

In the updated version, the content above the fold and the static files needed to render it are retrieved earlier. These include the static file requirements for the cookie consent widget, which can now be loaded from the initial part of the stream, so the widget appears sooner.

Final Words
HTTP streaming is a relatively simple technique that can make a noticeable difference in perceived page performance, particularly when it comes to metrics like First Byte and Contentful Paint. By sending the top of the document early, the browser can begin fetching static assets and rendering above-the-fold content while the server is still working on the rest of the page.
A faster Time To First Byte (TTFB) is also worth considering for LLM crawlers such as GPTBot or ClaudeBot. These bots often work with short timeouts, and if your server doesn't respond quickly enough, they may abandon the request before reading your content. HTTP streaming helps here too, since it gets the most important parts of your HTML out early - right at the top of the document where crawlers are most likely to see them.
That said, it does require splitting your templates into parts and thinking more carefully about which context data is needed where. If your page is lightweight and fast to render, the added complexity probably isn't worth it. The technique really shines on heavier pages that involve bigger database queries or external API calls - those are exactly the cases where server-side delay is most significant, and where streaming can therefore have the greatest impact.
It is also worth noting that HTTP streaming works with both WSGI and ASGI, so it fits into most standard Django deployment setups without requiring any major infrastructure changes.
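For completeness, wiring the view up requires no special handling; a URLconf sketch (the import path is hypothetical, adjust it to your project layout):

```python
# urls.py sketch: a streaming view is registered like any other
# class-based view; nothing changes between WSGI and ASGI.
from django.urls import path

from startpage.views import StartPageView  # hypothetical module path

urlpatterns = [
    path("", StartPageView.as_view(), name="start_page"),
]
```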
Thanks to Famitsay Tamayo for the cover photo!
11 May 2026 5:00pm GMT
04 Apr 2026
Planet Twisted
Donovan Preston: Using osascript with terminal agents on macOS
Here is a useful trick that is unreasonably effective for simple computer-use goals with modern terminal agents. On macOS, the osascript terminal command has existed since the original release of Mac OS X. All you have to do is suggest that your agent use it, and it can perform any application-control action available in any AppleScript dictionary for any Mac app. No MCP setup or tools required at all. Agents are much more adept at using raw terminal commands, especially ones that haven't changed in 30 years. Having a computer-control interface that hasn't changed in 30 years, with extensive examples in the Internet corpus, means modern models understand how to use these tools basically effortlessly. macOS locks down these permissions pretty heavily nowadays, though, so you will have to grant the application-control permission to your terminal. But once you have done that, the range of possibilities for commanding applications using natural language is quite extensive.
Also, for both Safari and Chrome on the Mac, you are going to want to turn on the JavaScript-over-AppleScript permission. This basically allows Claude or another agent to debug your web applications live for you as you are using them. In Chrome, go to the View menu, Developer submenu, and choose "Allow JavaScript from Apple Events". In Safari, it's under the Safari menu, Settings, Developer, "Allow JavaScript from Apple Events". Then you can say something like "Hey Claude, would you please use osascript to navigate the front Chrome tab to Hacker News". Once you suggest using osascript in a session, the agent will figure out pretty quickly what it can do with it. Of course, you can ask it to do casual things like open your mail app. Then you can figure out what other things work, like "please click around my web app" or "check the JavaScript console for errors".
Another very important tip for using modern agents is to practice using speech-to-text. I think speaking might be something like five times faster than typing. It takes a lot of time to get used to, especially after a lifetime of programming by typing, but it's a very interesting and different experience, and once you have a lot of practice it starts to feel effortless.
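Returning to the osascript tip: the kind of command an agent ends up running can be sketched from Python as well. This is a hypothetical illustration; the AppleScript terms ("active tab", "front window") come from Chrome's scripting dictionary, the URL is just an example, and it only actually runs on macOS with the Automation permission granted:

```python
import subprocess
import sys

# osascript -e runs an inline AppleScript snippet from the shell.
SCRIPT = (
    'tell application "Google Chrome" to set URL of '
    'active tab of front window to "https://news.ycombinator.com"'
)

def osascript_cmd(script):
    # Build the argument list for an inline osascript invocation.
    return ["osascript", "-e", script]

if sys.platform == "darwin":
    # Only attempt this on macOS, where osascript exists.
    subprocess.run(osascript_cmd(SCRIPT), check=True)
```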
04 Apr 2026 1:31pm GMT
16 Mar 2026
Planet Twisted
Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control
I have been using macOS Voice Control for about three years. At first, it was a way to reduce pain from excessive computer use. It has been a real struggle: decades of computer-use habits with typing and the mouse are hard to overcome! Text-selection and manipulation commands work quite well in macOS-native apps, like apps written in Swift, or in Safari on an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo", where foo is a word in the text box, may not work at all, or there may be off-by-one errors when manipulating the cursor position or extending the selection. I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text-selection method! This is really going to improve my speed. In the long run, I believe computer voice control in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing, and the number of times it misses or misunderstands commands, still really holds it back. I've been learning the macOS Voice Control command set for years now and I still reach for the keyboard and mouse way too often.
16 Mar 2026 11:04am GMT
04 Mar 2026
Planet Twisted
Glyph Lefkowitz: What Is Code Review For?
Humans Are Bad At Perceiving
Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.
We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.
Each of these has implications for the fundamental limitations of code review as an engineering practice:
- Inattentional Blindness: you won't be able to reliably find bugs that you're not looking for.
- Repetition Blindness: you won't be able to reliably find bugs that you are looking for, if they keep occurring.
- Vigilance Fatigue: you won't be able to reliably find either kind of bug if you have to keep being alert to the presence of bugs all the time.
- and, of course, the distinct but related Alert Fatigue: you won't even be able to reliably evaluate reports of possible bugs if there are too many false positives.
Never Send A Human To Do A Machine's Job
When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate - and, thanks to our old friend "alert fatigue" above - ideally, to also remedy that type of error. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but:
- to catch logical errors, use automated tests.
- to catch formatting errors, use autoformatters.
- to catch common mistakes, use linters.
- to catch common security problems, use a security scanner.
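The division of labor above can be made concrete: a deterministic check, written once, catches a bug class forever, with no reviewer vigilance required. A small hypothetical example (the function and the bug class are invented for illustration):

```python
def paginate(items, page_size):
    # Split items into pages of at most page_size elements each.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_paginate_keeps_last_partial_page():
    # An off-by-one here is exactly the kind of bug a tired reviewer
    # misses; the test harness never gets tired.
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

test_paginate_keeps_last_partial_page()
```

Once a test like this is in CI, no human ever has to re-verify that behavior in review again.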
Don't blame reviewers for missing these things.
Code review should not be how you catch bugs.
What Is Code Review For, Then?
Code review is for three things.
First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.
You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.
Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".
Oops, Surprise, This Post Is Actually About LLMs Again
Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. An important corollary of the understanding that code review is a social activity is that LLMs are not social actors, so you cannot rely on code review to inspect their output.
My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.
When you relate to a human colleague, you will expect that:
- you can decide what problems to focus on based on their level of experience and areas of expertise; from a late-career colleague you might be looking for bad habits held over from legacy programming languages, while from an earlier-career colleague you might focus more on logical test-coverage gaps,
- and, they will learn from repeated interactions so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it,
With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.
You will still find supposedly sophisticated LLMs making extremely common mistakes, precisely because those mistakes are common, and thus appear frequently in the training data.
The LLM also can't really learn. An intuitive response to this problem is to keep adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem - "context rot" - is somewhat fundamental to the nature of the technology.
Thus, code generators must be treated more adversarially than you would treat a human code-review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that will evaluate the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window the way a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.
To Sum Up
Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.
If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure that its agentic loop will fail on its own the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.
But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
04 Mar 2026 5:24am GMT