16 May 2026

Planet Python

PyCon: Welcome Back, NVIDIA: Visionary Sponsor of PyCon US 2026

NVIDIA is excited to once again support PyCon US 2026 as a Visionary Sponsor, and to sponsor the Future of AI with Python Conference Track.

Python is a "first-class" language in NVIDIA CUDA, and NVIDIA is committed to bringing our technology to Python developers in close alignment with C++ upon new releases of our hardware. We're also happy to announce the general availability of CUDA Python 1.0.

NVIDIA's commitment to Python goes well beyond just our own tech stack. NVIDIA's Python engineers contribute across a broad swath of the Python ecosystem, from the core interpreter itself, to packaging and PyPI, to the Python community at large. NVIDIA is inspired by the energy of, and privileged to collaborate with, people across the open source Python community.

Since PyCon last year, NVIDIA Pythonistas - in collaboration with many others in the Python community - have made great progress on the evolution of various packaging standards, including working with community partners on the implementation of wheel variants and the establishment of a Packaging Council to better govern the evolution of packaging standards and PyPI. NVIDIA Python engineers are also engaged in implementation, testing, and porting work for the free-threaded build of the interpreter. NVIDIA Python engineers are driving the early exploratory work for adopting Rust for CPython, work on Python performance benchmarking, and are actively involved in many enhancements for Python 3.14 and 3.15, including providing built-in Zstandard support in Python 3.14.
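For example, the built-in Zstandard support added by PEP 784 is exposed as the new compression.zstd module. A minimal round-trip on Python 3.14 looks like this (a quick sketch, not taken from the announcement):

from compression.zstd import compress, decompress  # new in Python 3.14 (PEP 784)

data = b"PyCon US 2026! " * 1000
blob = compress(data)            # one-shot compression at the default level
assert decompress(blob) == data  # round-trips losslessly
print(f"{len(data)} -> {len(blob)} bytes")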

At NVIDIA, we are excited to work with our partners and the open source Python community to help bring the best developer experience for users of high performance computing and AI. Come see NVIDIA at the Anaconda and PyTorch booths, and at the AI Track.

Barry Warsaw
May 2026
Principal System Software Engineer, NVIDIA
Python Core Developer since 1994
Python Steering Council member in 2026

16 May 2026 2:30pm GMT

15 May 2026

Planet Python

Kay Hayen: Nuitka Release 4.1

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, "download now".

This release adds many new features and corrections, with a focus on async code compatibility, missing generics features, Python 3.14 compatibility, and, yet again, Python compilation scalability.

Bug Fixes

Package Support

New Features

Optimization

Anti-Bloat

Organizational

Tests

Cleanups

Summary

This release builds on the scalability improvements established in 4.0, with enhanced Python 3.14 support, expanded package compatibility, and significant optimization work.

The --project option seems usable now.

Python 3.14 support remains experimental; it only barely made the cut for this release and will probably get there in hotfixes. Some of the corrections came in so late before the release that it was just not possible to feel good about declaring it fully supported yet.

15 May 2026 10:00pm GMT

Anarcat: The Four Horsemen of the LLM Apocalypse

I have been battling Large Language Models (LLMs1) for the past couple of weeks and have struggled to think about what this means and how to deal with the fallout.

Because the fight has come from many fronts, I've come to articulate this in terms of the Four Horsemen of the Apocalypse.

Sound track: Metallica's The Four Horsemen, preferably downloaded from Napster around 2000, but now I guess you get it on YouTube.

War: bot armies

Let's start with War. We've been battling bot armies for control of our GitLab server for a while. Bots crawl virtually infinite endpoints on our Git repositories (as opposed to downloading an archive or a shallow clone), including our fork of Firefox (Tor Browser), a massive repository.

At first, we tried various methods: robots.txt, blocking user agents, and finally blocking entire networks. I wrote asncounter. It worked for a while.

But now, blocking entire networks doesn't work: they come back some other way, typically through shady proxy networks, which is kind of ironic considering we're essentially running the largest proxy network in the world.

Out of desperation, we've forced users to use cookies when visiting our site. We haven't deployed Anubis yet, as we worry that bots have broken Anubis anyway and that it does not really defend against a well-funded attacker, something Pretix already warned about in 2025.

(We have a whole discussion regarding those tools here.)

But even that, predictably, has failed. I suspect what we consider bots are now really agents. They run full web browsers, JavaScript included, so a feeble cookie is no match for the massive bot armies.

Side note on LLM "order of battle"

We often underestimate the size of that army. The cloud was huge even before LLMs, serving about two thirds of the web. Even larger swaths of clients, like government and corporate databases, have moved to the cloud, into shared but private infrastructure with massive spare capacity that is readily available to anyone who pays.

LLMs have made the problem worse by dramatically expanding the capacity of the "cloud". We now have data centers that defy imagination with millions of cores, petabytes of memory, exabytes of storage.

I thought that 25 gigabit residential internet in Switzerland could bring balance, but this is nothing compared to the scale of those data centers.

Those companies can launch thousands, if not millions, of fully functional web browsers at our servers. Computing power and bandwidth are not limitations for them; our primitive infrastructure is. No one but hyperscalers can deal with this kind of load, and I suspect that they are also struggling, as even Google is deploying extreme mechanisms in reCAPTCHA.

This is the largest attack on the internet since the Morris worm, but while Robert Tappan Morris went to jail on a felony, LLM companies are celebrated as innovators and will soon be too big to fail.2

Which brings us to the second horseman: Famine.

Famine: shortages

All that computing power doesn't come out of thin air: it needs massive amounts of hardware, power, and cooling.

Earlier this year, I heard from a colleague that their Dell supplier refused to even provide a quote before August. Dell!

In February, Western Digital's hard drive production for 2026 was already sold out. Hard drives essentially doubled in price within a year, and some have now tripled. A server quote we had in November has now quadrupled, going from 10 thousand to FORTY thousand dollars for a single server.

But regular folks are facing real-life shortages as well, as city-size data centers are being built at breakneck speed, stealing fresh water and energy from human beings to feed the war machine.

We've been scared of losing our jobs, but it seems that this Apocalypse has yet to fully materialize. Still, for engineers, the market feels tighter than it was a couple of years ago, and everyone is on edge, feeling they will just have to learn to operate LLMs to keep their jobs.

Which brings us, of course, to Death.

Death: security and copyright

Our third horseman is one I did not expect a couple of months ago. Back at FOSDEM, curl's maintainer Daniel Stenberg famously complained about the poor quality of LLM-generated reports; a few months later, everyone is scrambling to deal with floods of good reports.

In the past two weeks, this culminated in a significant number of critical security issues across multiple projects. Chained together, remote code execution vulnerabilities in Nginx and Apache and two local privilege escalations in the Linux kernel (dirtyfrag and fragnesia) essentially gave anyone root access to any unpatched server on the web.

As I write this, another vulnerability dropped, which gives a local user read access to any file, compromising TLS and SSH private keys.

All those vulnerabilities were released without any significant coordination while people scrambled to mitigate.

Many people, including Linus Torvalds, are now considering issues discovered through LLMs to be essentially public. This puts some debates about disclosure processes in perspective, to say the least.

But this is not merely the death of the traditional coordinated disclosure process, the C programming language, or the Linux kernel: remember that those bots are trained on a large corpus of copyrighted material. Facebook has trained their models on pirated books and Nvidia has done deals with Anna's Archive to secure access to large swaths of copyrighted material. The US Congress seems to think LLM outputs are not copyrightable, like any other machine outputs.

With many people now vibe coding their way out of learning or remembering how computers work, is this the Death of Copyright?

And that, of course, brings us to the final horseman: Pestilence.

Pestilence: slop

There is a growing meme that programming is essentially over as we know it. That you can simply vibe-code applications from scratch and it's pretty good.

Maybe that's true.

So far, most of my attempts at resolving complex problems with an LLM have failed in bizarre ways. Some worked surprisingly well. Maybe, of course, I am holding it wrong.

I personally don't believe LLMs will ever be good enough to produce and maintain software at scale. They're surprisingly good at finding security flaws right now. But what I see is also a lot of Bullshit, with a capital B. It's not lying: it does not "know" anything, so it can't lie. It's misleadingly cohesive and deliberate, but it lacks meaning, intent, will.

I have not been confronted with much slop, apart from the lobster Jesus or the yellow man atrocities, and particularly not in my work. But I see what it is doing to my profession: beyond vibe-coding, people are now token-maxxing, and land-grabbing their colleagues.

I don't like what LLMs do to our communities, or the fabric of software we live with.

Software does not evolve in a void. It is a team effort, be it free software or a corporate product. Generations of humans have carefully built the scaffolding of technology required for modern networks and software to operate, in a convoluted contraption that no single human fully understands anymore.

The idea of simply giving up on that understanding entirely and delegating it to an unproven model is not only chilling, it feels just plain stupid. Not stupid as in Skynet, stupid as in "I can't get inside the data center because the authentication system is down". Except we're in a "the power plant doesn't reboot" or "their LLM found an 0day in our slop" kind of stupid.

The fifth horseman

Researching for this article, I looked up the four horsemen and found that the original set seems to have been Conquest, War, Famine, and Death.

I was surprised. I grew up thinking of the horsemen as Famine, War, Pestilence, and Death. So I went back to my original source, which actually claims the horsemen are:

Time has taken its toll on you, the lines that crack your face.
Famine, your body, it has torn through, withered in every place.
Pestilence for what you've had to endure, and what you have put others through
Death, deliverance for you, for sure, now there's nothing you can do

So I guess that makes no sense either; fair enough, I shouldn't rely on Metallica for theological references. Especially since that song was originally called Mechanix and was "about having sex at a gas station".

Anyways.

The point is, there are actually five horsemen, and the fifth one is, in my opinion, Conquest.

Those companies (and not "AI", mind you) are taking over the world. I sense a strong connection with the "post-truth" world imposed on us by fascists like Trump and Putin. It's not an accident; it's a power grab, part of the Californian Ideology3. Just like Airbnb broke housing, Uber destroyed transportation, and Amazon is taking over retail and server hosting, LLM companies are essentially trying to take over, if not everything, at least Cognition as a whole.

But the capitalization of those companies (OpenAI and Nvidia in particular) is so far beyond reason that their inevitable collapse will likely lead to a global financial crash of biblical proportions.

Because they will inevitably fail, like the previous bubbles they are built on. And when they fail, I hope it zips all the way back through the blockchain scam, the ad surveillance system, and the dot com bubble, and gits me back my internet.

The Tower of Babel

While I'm off in the woods hallucinating (ha!) on biblical allegories, I feel there's another sign that the apocalypse is coming.

The Tower of Babel myth says that humans tried to create a big tower up to heaven and become god. God confounds their speech and scatters the human race. End of utopia.

This is what is happening to our human translators now. LLMs being, after all, Language Models, they are excellent at translation work. So much so that the only translators not yet replaced by LLMs are interpreters, who translate vocally in real time. But interpreters are worried about their jobs as well.

This concretely means we will lose the human capacity, as a civilization, to translate between each other. It is still an open question whether the remaining revision work will be enough for translators to avoid deskilling, but other research has shown that LLM use leads to cognitive decline, impacts critical thinking, and generally, that deskilling is a common outcome.

Ultimately, I think this is where LLMs bring us. Towards collapse.

So this is a call to arms. Fight back!

Poison bots. Build local real-world communities.

Go low tech. Moore's law is dead, make use of it.

Patch your shit. Go weird.

Refuse slop. Train your brain.

The horsemen will collapse, but let's not go down with them.

Butlerian Jihad!

This article was written without the use of a large language model and should not be used to train one.


  1. I prefer "LLM" to Artificial Intelligence, as I don't consider models to have "Intelligence" which goes far beyond the analytical traits we train models for. Intelligence requires embodiment and social interaction; machines lack the innate human skills of empathy, feeling and care, which explains a lot of the evils behind the current trends.↩
  2. It should be noted that Morris also happened to be one of the founders of Y Combinator, where he is in good company with other techno-fascists like Peter Thiel, Sam Altman, and so on. Crime, after all, pays.↩
  3. Probably a good time to watch All Watched Over by Machines of Loving Grace.↩

15 May 2026 9:25pm GMT

Django community aggregator: Community blog posts

Issue 337: Django Developers Survey 2026

Will and Jeff are at PyCon US in Long Beach, California this week. Drop by the Django Software Foundation booth or the JetBrains booth and say hello.

News

Django Developers Survey 2026

The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey 📊 Help us better understand how Django is being used around the world and guide future technical and community decisions.

DSF member of the month - Bhuvnesh Sharma

Bhuvnesh has been a Django contributor since 2022 and was a Google Summer of Code (GSoC) participant for Django in 2023. He is now a mentor and an admin organizer for GSoC for the Django organization, as well as the founder of Django Events Foundation India (DEFI) and the DjangoDay India conference.

Announcing the Google Summer of Code 2026 contributors for Django

Google Summer of Code 2026 contributors have been announced for Django, listing the developers who will be working on projects as part of the program. If you are following Django's next wave of community work, this is the roll-up of who's joining and what to watch for.


Releases

Python 3.14.5 is out!

Python 3.14.5 is now available, bringing the latest point release in the Python 3.14 line. If you maintain Django apps, use the update as your prompt to verify dependencies and run your test suite against 3.14.5 before rolling forward.


Updates to Django

Today, "Updates to Django" is presented by Johanan Oppong Amoateng from Djangonaut Space! πŸš€

Last week we had 22 pull requests merged into Django by 13 different contributors - including 4 first-time contributors! Congratulations to Denny Biasiolli, Milad Zarour, MANAS MADESHIYA and Héctor Castillo for having their first commits merged into Django - welcome on board!

This week's Django highlights: 🦄


Python Software Foundation

Python Software Foundation News: Announcing PSF Community Service Award Recipients!

Python Software Foundation has announced the recipients of its PSF Community Service Award. The update highlights people recognized for their contributions to the Python community.

Python Software Foundation News: Strategic Planning at the PSF

Python Software Foundation News covers the PSF's strategic planning efforts and the direction they are working toward. Expect a focus on how the foundation plans its priorities and activities moving forward.


Wagtail CMS News

Results of the 2026 Wagtail DX with AI survey

The 2026 Wagtail DX survey reports where teams are applying AI and what they want next from the platform. Use the findings to align your own Wagtail and AI experimentation with the issues practitioners are actually raising.

Our four contributors for Google Summer of Code 2026

Google Summer of Code 2026 is welcoming four contributors, highlighting the people behind the upcoming work. If you're tracking Django ecosystem activity, this is a quick way to see who's starting and what to watch for next.


Sponsored Link

Middleware, but for AI agents

Django middleware composes request handlers. Harnesses do the same for AI agents - Claude Code, Codex, Gemini in one coordinated system. Learn what a harness actually is, why it's a new primitive, and how to engineer one that holds in production. Apache 2.0, open source.


Articles

How to have a great first PyCon (updated for 2026)

Timeless advice from Trey Hunner on how to make the most out of PyCon US this week or any other technical conference.

Using Django Tasks in production Β· Better Simple

Production-ready Django task setups: what to change, what to watch, and how to keep background jobs reliable once you leave local dev. Useful guidance for deploying and operating task workers with fewer surprises.

Dealing with Dead Links (404s): 2026 Edition | Will Vincent

A practical guide to handling dead links in Django, focusing on what to do when a URL no longer exists and how to respond with clean, user-friendly 404 behavior. Expect guidance on keeping routing and error handling tidy as your site evolves.


Podcasts

Django Chat #203: Deploy on Day One - Calvin Hendryx-Parker

Calvin is the co-founder and CTO of the consultancy SixFeetUp. We discuss developer experience from day one, Kubernetes as a feature, real-world usage of AI and agentic tooling, typing in Python, the junior developer pipeline problem, and more. Also available in video format on YouTube.


Django Job Board

Founding Engineer at MyDataValue

Junior Software Developer (Apprentice) at UCS Assist

Technical Lead at UCS Assist

Web Developer at Crossway

PyPI Sustainability Engineer at Python Software Foundation


Projects

abu-rayhan-alif/djangoSecurityHunter

A security and performance inspector for Django & DRF. Features static analysis, config checks, N+1 query detection, and SARIF support for GitHub Code Scanning.

janraasch/dsd-vps-kamal

A Django Simple Deploy plugin for configuring & automating deployments of your Django project to any VPS using Kamal.

15 May 2026 2:00pm GMT

13 May 2026

Django community aggregator: Community blog posts

Deploy on Day One - Calvin Hendryx-Parker

🔗 Links

📦 Projects

📚 Books

🎥 YouTube

🤝 Sponsor

This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.

See what's possible at https://sixfeetup.com/.

13 May 2026 3:00pm GMT

11 May 2026

Django community aggregator: Community blog posts

Improving First Byte and Contentful Paint on a Django Website

Recently I have been experimenting with HTTP streaming and realized how it can improve page performance. If you come from the PHP world, you might know the command flush(). It immediately sends to the visitor whatever has been echoed to the buffer, without waiting for the full page to be rendered on the server side. That allows the browser to start rendering the website before the whole document has been rendered on the server and transferred. The usual Django HttpResponse, on the other hand, renders the whole HTML document on the server first, and only then sends it to the visitor. So the initial HTML document rendering is always the bottleneck for the full page load. Here comes StreamingHttpResponse, which can be used to mimic what flush() does in PHP.
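The core mechanism is small. As a minimal sketch (hypothetical template names, not this article's final code), a view can yield the document in two chunks:

from django.http import StreamingHttpResponse
from django.template.loader import render_to_string


def streamed_page(request):
    def chunks():
        # The head and above-the-fold markup go out immediately,
        # so the browser can start fetching CSS/JS...
        yield render_to_string("page_top.html", request=request)
        # ...while the server is still rendering the rest.
        yield render_to_string("page_bottom.html", request=request)

    return StreamingHttpResponse(chunks(), content_type="text/html")

The rest of this post generalizes that idea into a reusable view.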

HttpResponse vs. StreamingHttpResponse in Action

When using a normal HttpResponse, the HTML document is first rendered on the server side, then sent to the browser, then static files are downloaded in parallel if possible, and lastly rendering in the browser happens.

Django Streaming Waterfall Comparison

When you use StreamingHttpResponse, you can send the <head> and the content above the fold as the first part of the document, so that static files can be located and start downloading while the rest of the HTML document is being sent in parts. The first paint of the document would happen just after the CSS file is downloaded, and the rest of the HTML document would be drawn at a later point.

Generic HTML Streaming View

Here is a generic HTMLStreamingView that expects a list of template files, get_document_context_data() for the global context, and get_template_context_data() for the template-specific context:

from django.http.response import StreamingHttpResponse
from django.conf import settings
from django.template.loader import render_to_string
from django.views.generic.base import View


class HTMLStreamingView(View):
    # templates for different parts of the document
    template_names = []  
    extra_context = None

    def get(self, request, *args, **kwargs):
        # Capture the nonce before StreamingHttpResponse is returned. 
        # CSP middleware writes the nonce into the response header during
        # process_response, then replaces request.csp_nonce with 
        # an error-raising lazy object. generate() restores the plain value
        # so templates can access it during streaming.
        self._csp_nonce = (
            str(request.csp_nonce)
            if hasattr(request, "csp_nonce") 
            else None
        )
        context = self.get_document_context_data(**kwargs)
        return StreamingHttpResponse(
            self.generate(context), 
            content_type="text/html"
        )

    def generate(self, context):
        if self._csp_nonce is not None:
            self.request.csp_nonce = self._csp_nonce
        for template_name in self.template_names:
            template_context = {
                **context, 
                **self.get_template_context_data(template_name)
            }
            yield render_to_string(
                template_name, 
                template_context, 
                request=self.request
            )

    def get_document_context_data(self, **kwargs):
        kwargs.setdefault("view", self)
        if self.extra_context is not None:
            kwargs.update(self.extra_context)
        return kwargs

    def get_template_context_data(self, template_name, **kwargs):
        return {}

Use Case with the Strategic Prioritizer "1st things 1st"

The start page of the decision support system and strategic prioritizer 1st things 1st has been implemented as a multi-section landing page. The cookie consent widget only showed up after the whole page had rendered, resulting in a delay of a few seconds.

This is how I used HTMLStreamingView to reorganize the page into parts:

class StartPageView(HTMLStreamingView):
    template_names = [
        "startpage_index_top.html",
        "startpage/includes/description.html",
        "startpage/includes/tutorial.html",
        "startpage/includes/benefits.html",
        "startpage/includes/social_proof.html",
        "startpage/includes/testimonials.html",
        "startpage/includes/about_us.html",
        "startpage/includes/questions_and_answers.html",
        "startpage/includes/pricing.html",
        "startpage/includes/cause.html",
        "startpage/includes/call_to_action.html",
        "startpage/includes/footer.html",
        "startpage_index_bottom.html",
    ]

    def get_template_context_data(self, template_name, *args, **kwargs):
        if template_name == "startpage_index_top.html":
            return {
                "structured_data": settings.JSON_LD_STRUCTURED_DATA,
            }
        if template_name == "startpage/includes/social_proof.html":
            from django.contrib.auth import get_user_model

            User = get_user_model()
            return {
                "active_user_count": User.objects.filter(is_active=True).count(),
            }
        ...

        return super().get_template_context_data(template_name, **kwargs)
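
Wiring the streaming view up works like any other class-based view; a hypothetical URLconf entry (module path assumed, adjust to your project) would be:

from django.urls import path

from .views import StartPageView  # wherever the view lives in your project

urlpatterns = [
    path("", StartPageView.as_view(), name="start_page"),
]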

To transform a normal Django view into an HTTP streaming view, I cut the base.html template into two pieces:

  1. everything before {% block content %} as base_top.html - the head and content above the fold.
  2. everything after {% endblock content %} as base_bottom.html - the closing HTML tags and the footer.

For example, here's base_top.html:

<!DOCTYPE html>
{% load static %}
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>{% block title %}1st things 1st{% endblock %}</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="{% static 'css/styles.css' %}" />
    {% block extra_head %}{% endblock %}
</head>
<body>
    {% block top_navigation %}
        <nav>
            <a href="/">Logo</a>
        </nav>
    {% endblock %}
    <main id="main_content">
    {% block content %}{% endblock %}
    {% include "startpage/includes/extra_js.html" %}

And here is base_bottom.html:

    {% block content %}{% endblock %}
    </main>
    <footer>
        ...
    </footer>
</body>
</html>    

I moved the JS from base_bottom.html to the body section of base_top.html, where it will start downloading immediately after the content above the fold is shown. I did that to reduce the delay for the cookie consent widget.

Then I prepared the templates for all parts of the start page:

  1. startpage_index_top.html extends base_top.html
  2. content templates provide the HTML directly without extending anything.
  3. startpage_index_bottom.html extends base_bottom.html.

The Optimization Results

I used the Lighthouse plugin to measure performance for the start page on an emulated slow mobile network, before and after applying StreamingHttpResponse.

PageSpeed performance with HttpResponse

In the updated version, the content above the fold and the static files needed to render it are retrieved earlier. These include the static file requirements for the cookie consent widget, which can now be loaded from the initial part of the stream, so the widget appears sooner.

PageSpeed performance with StreamingHttpResponse

Final Words

HTTP streaming is a relatively simple technique that can make a noticeable difference in perceived page performance, particularly when it comes to metrics like First Byte and Contentful Paint. By sending the top of the document early, the browser can begin fetching static assets and rendering above-the-fold content while the server is still working on the rest of the page.

A faster Time To First Byte (TTFB) is also worth considering for LLM crawlers such as GPTBot or ClaudeBot. These bots often work with short timeouts, and if your server doesn't respond quickly enough, they may abandon the request before reading your content. HTTP streaming helps here too, since it gets the most important parts of your HTML out early - right at the top of the document where crawlers are most likely to see them.

That said, it does require splitting your templates into parts and thinking more carefully about which context data is needed where. If your page is lightweight and fast to render, the added complexity probably isn't worth it. The technique really shines on heavier pages that involve bigger database queries or external API calls - those are exactly the cases where server-side delay is most significant, and where streaming can therefore have the greatest impact.

It is also worth noting that HTTP streaming works with both WSGI and ASGI, so it fits into most standard Django deployment setups without requiring any major infrastructure changes.


Thanks to Famitsay Tamayo for the cover photo!

11 May 2026 5:00pm GMT

04 Apr 2026

Planet Twisted

Donovan Preston: Using osascript with terminal agents on macOS

Here is a useful trick that is unreasonably effective for simple computer-use goals with modern terminal agents. On macOS, the osascript terminal command has existed since the original release of Mac OS X. All you have to do is suggest that your agent use it, and it can perform any application control action available in any AppleScript dictionary for any Mac app. No MCP setup or tools required at all.

Agents are much more adept at using raw terminal commands, especially ones that haven't changed in 30 years. Having a computer control interface that hasn't changed in 30 years, with extensive examples in the Internet corpus, means modern models understand how to use these tools basically effortlessly. macOS locks down these permissions pretty heavily nowadays, though, so you will have to grant the application control permission to Terminal. But once you have done that, the range of possibilities for commanding applications using natural language is quite extensive.

Also, for both Safari and Chrome on the Mac, you are going to want to turn on the JavaScript-over-AppleScript permission. This basically allows Claude or another agent to debug your web applications live for you as you are using them. In Chrome, go to the View menu, Developer submenu, and choose "Allow JavaScript from Apple Events". In Safari, it's under the Safari menu, Settings, Developer, "Allow JavaScript from Apple Events".

Then you can say something like "Hey Claude, would you please use osascript to navigate the front Chrome tab to Hacker News". Once you suggest using osascript in a session, the agent will figure out pretty quickly what it can do with it. Of course you can ask it to do casual things like open your Mail app, and then figure out what else works, like "please click around my web app" or "check the JavaScript console for errors".

Another very important tip for using modern agents is to practice using speech-to-text. I think speaking might be something like five times faster than typing. It takes a lot of time to get used to, especially after a lifetime of programming by typing, but it's a very interesting and different experience, and once you have a lot of practice it starts to feel effortless.
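For a rough sense of what the agent ends up doing under the hood, here is a hypothetical sketch that drives Chrome from Python via osascript (it assumes Chrome is running, Terminal has the Automation permission, and "Allow JavaScript from Apple Events" is on):

import subprocess

def tell_chrome(script: str) -> str:
    """Run an AppleScript snippet via osascript and return its output."""
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Point the front tab at Hacker News.
tell_chrome(
    'tell application "Google Chrome" '
    'to set URL of active tab of front window to "https://news.ycombinator.com"'
)

# Read something back with JavaScript-over-AppleScript.
print(tell_chrome(
    'tell application "Google Chrome" '
    "to execute front window's active tab javascript \"document.title\""
))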

04 Apr 2026 1:31pm GMT

16 Mar 2026

Planet Twisted

Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control

I have been using macOS Voice Control for about three years. At first it was a way to reduce pain from excessive computer use. It has been a real struggle: decades of computer use habits built around typing and the mouse are hard to overcome!

Text selection and manipulation commands work quite well in macOS native apps, like apps written in Swift, or in Safari with an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo", where foo is a word in the text box, doesn't work at all, or there are off-by-one errors when moving the cursor or extending the selection.

I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text selection method! This is really going to improve my speed.

In the long run, I believe computer voice control in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing and the number of times it misses or misunderstands commands still really hold it back. I've been learning the macOS Voice Control specific command set for years now and I still reach for the keyboard and mouse way too often.

16 Mar 2026 11:04am GMT

04 Mar 2026

Planet Twisted

Glyph Lefkowitz: What Is Code Review For?

Humans Are Bad At Perceiving

Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.

We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.

Each of these has implications for the fundamental limitations of code review as an engineering practice:

Never Send A Human To Do A Machine's Job

When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate it - and, thanks to our old friend "alert fatigue" above, ideally to also remedy that type of error. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but:

Don't blame reviewers for missing these things.

Code review should not be how you catch bugs.

What Is Code Review For, Then?

Code review is for three things.

First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
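
For instance, a recurring review finding can be turned into a deterministic CI check. Here is a minimal sketch (the banned pattern and paths are hypothetical, purely for illustration) of a test that fails whenever that bug class reappears:

import pathlib
import re

# Hypothetical example: reviews kept catching naive datetime.now() calls,
# so we ban the pattern mechanically instead of relying on reviewer vigilance.
BANNED = re.compile(r"\bdatetime\.now\(\)")

def test_no_naive_datetimes():
    offenders = [
        str(path)
        for path in pathlib.Path("src").rglob("*.py")
        if BANNED.search(path.read_text())
    ]
    assert not offenders, f"use timezone-aware datetimes instead: {offenders}"

Run under pytest in CI, this takes that class of error off the reviewer's plate entirely.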

Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.

You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.

Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".

Oops, Surprise, This Post Is Actually About LLMs Again

Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. An important corollary of understanding code review as a social activity is that LLMs are not social actors, so you cannot rely on code review to inspect their output.

My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.

When you relate to a human colleague, you will expect that:

  1. you can decide what to focus on based on their level of experience and areas of expertise; from a late-career colleague you might be looking for bad habits held over from legacy programming languages, while from an earlier-career colleague you might be focused more on logical test-coverage gaps,
  2. and they will learn from repeated interactions, so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it.

With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.

You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.

The LLM also can't really learn. An intuitive response to this problem is to simply keep adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem - "context rot" - is somewhat fundamental to the nature of the technology.

Thus, code generators must be treated more adversarially than a human code review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that will evaluate the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window the way a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.

To Sum Up

Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.

If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure its agentic loop will fail on its own the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.

But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!

04 Mar 2026 5:24am GMT