13 Nov 2025

Planet Python

Python Engineering at Microsoft: Python in Visual Studio Code – November 2025 Release

We're excited to announce that the November 2025 release of the Python extension for Visual Studio Code is now available!

This release includes the following announcements:

  • Add Copilot Hover Summaries as docstring
  • Localized Copilot Hover Summaries
  • Convert wildcard imports Code Action
  • Debugger support for multiple interpreters via the Python Environments Extension

If you're interested, you can check the full list of improvements in our changelogs for the Python and Pylance extensions.

Add Copilot Hover Summaries as docstring

You can now add your AI-generated documentation directly into your code as a docstring using the new Add as docstring command in Copilot Hover Summaries. When you generate a summary for a function or class, navigate to the symbol definition and hover over it to access the Add as docstring command, which inserts the summary below your cursor formatted as a proper docstring.

This streamlines the process of documenting your code, allowing you to quickly enhance readability and maintainability without retyping.
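
For illustration, the inserted summary ends up as an ordinary docstring directly under the definition. Here's a hypothetical example (the function and summary text are made up):

def normalize_scores(scores):
    """Scale a list of numeric scores to the 0-1 range.

    The Copilot-generated summary is inserted as a regular docstring,
    so help() and documentation generators pick it up like any other.
    """
    ...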

Add as docstring command in Copilot Hover Summaries

Localized Copilot Hover Summaries

GitHub Copilot Hover Summaries inside Pylance now respect your display language within VS Code. When you invoke an AI-generated summary, you'll get strings in the language you've set for your editor, making it easier to understand the generated documentation.

Copilot Hover Summary generated in Portuguese

Convert wildcard imports Code Action

Wildcard imports (from module import *) are often discouraged in Python because they can clutter your namespace and make it unclear where names come from, reducing code clarity and maintainability. Pylance now helps you clean up modules that still rely on from module import * via a new Code Action. It replaces the wildcard with the explicit symbols, preserving aliases and keeping the import to a single statement. To try it out, you can click on the line with the wildcard import and press Ctrl + . (or Cmd + . on macOS) to select the Convert to explicit imports Code Action.
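
For example, the Code Action might turn a wildcard import into a single explicit statement like this (the module and symbols are illustrative):

# Before: it's unclear where basename comes from
from os.path import *

name = basename("/tmp/example.txt")

# After the "Convert to explicit imports" Code Action
from os.path import basename

name = basename("/tmp/example.txt")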

Convert wildcard imports Code Action

Debugger support for multiple interpreters via the Python Environments Extension

The Python Debugger extension now leverages the APIs from the Python Environments Extension (vscode-python-debugger#849). When enabled, the debugger can recognize and use different interpreters for each project within a workspace. If you have multiple folders configured as projects, each with its own interpreter, the debugger will now respect these selections and use the interpreter shown in the status bar when debugging.

To enable this functionality, set "python.useEnvironmentsExtension": true in your user settings. The new API integration is only active when this setting is turned on.
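
In your user settings.json, that amounts to this minimal sketch:

{
  "python.useEnvironmentsExtension": true
}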

Please report any issues you encounter to the Python Debugger repository.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

We would also like to extend special thanks to this month's contributors:

Try out these new improvements by downloading the Python extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code - November 2025 Release appeared first on Microsoft for Python Developers Blog.

13 Nov 2025 6:41pm GMT

12 Nov 2025

Django community aggregator: Community blog posts

Django 20 Years Later - Adrian Holovaty

Sponsor

This episode was brought to you by HackSoft, your development partner beyond code. From custom software development to consulting, team augmentation, or opening an office in Bulgaria, they're ready to take your Django project to the next level!

https://www.hacksoft.io/solutions/django

12 Nov 2025 6:00pm GMT

Planet Python

Python Software Foundation: Python is for everyone: Join in the PSF year-end fundraiser & membership drive!

The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Package Index (PyPI), supporting 5 Developers-in-Residence, maintaining critical community infrastructure, and more.

Python is for teaching, learning, playing, researching, exploring, creating, working: the list goes on and on and on! Support this year's fundraiser with your donations and memberships to help the PSF, the Python community, and the language stay strong and sustainable. Because Python is for everyone, thanks to you.

There are two direct ways to join through donate.python.org:

  • Donate directly to the PSF
  • Become a Supporting Member

>>> Donate or Become a Member Today! <<<

If you already donated and/or you're already a member, you can:

Your donations and support:

Highlights from 2025:

12 Nov 2025 5:03pm GMT

Real Python: The Python Standard REPL: Try Out Code and Ideas Quickly

The Python standard REPL (Read-Eval-Print Loop) lets you run code interactively, test ideas, and get instant feedback. You start it by running the python command, which opens an interactive shell included in every Python installation.

In this tutorial, you'll learn how to use the Python REPL to execute code, edit and navigate code history, introspect objects, and customize the REPL for a smoother coding workflow.

By the end of this tutorial, you'll understand that:

  • You can enter and run simple or compound statements in a REPL session.
  • The implicit _ variable stores the result of the last evaluated expression and can be reused in later expressions, as shown in the sample session after this list.
  • You can reload modules dynamically with importlib.reload() to test updates without restarting the REPL.
  • The modern Python REPL supports auto-indentation, history navigation, syntax highlighting, quick commands, and autocompletion, which improves your user experience.
  • You can customize the REPL with a startup file, color themes, and third-party libraries like Rich for a better experience.
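
Here's a short sample session illustrating the implicit _ variable and importlib.reload(). The mymodule name is a stand-in for any module of your own:

Python
>>> 21 * 2
42
>>> _ + 8  # _ holds the result of the last evaluated expression
50
>>> import importlib
>>> import mymodule  # hypothetical module you're editing
>>> importlib.reload(mymodule)  # picks up saved changes without restarting
<module 'mymodule' from '...'>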

With these skills, you can move beyond just running short code snippets and start using the Python REPL as a flexible environment for testing, debugging, and exploring new ideas.

Get Your Code: Click here to download the free sample code that you'll use to explore the capabilities of Python's standard REPL.

Take the Quiz: Test your knowledge with our interactive "The Python Standard REPL: Try Out Code and Ideas Quickly" quiz. You'll receive a score upon completion to help you track your learning progress.

Getting to Know the Python Standard REPL

In computer programming, you'll find two kinds of programming languages: compiled and interpreted languages. Compiled languages like C and C++ have an associated compiler program that converts the language's code into machine code.

This machine code is typically saved in an executable file. Once you have an executable, you can run your program on any compatible computer system without needing the compiler or the source code.

In contrast, interpreted languages like Python need an interpreter program. This means that you need to have a Python interpreter installed on your computer to run Python code. Some may consider this characteristic a drawback because it can make your code distribution process much more difficult.

However, in Python, having an interpreter offers one significant advantage that comes in handy during your development and testing process. The Python interpreter allows for what's known as an interactive Read-Eval-Print Loop (REPL), or shell, which reads a piece of code, evaluates it, and then prints the result to the console in a loop.

The Python REPL is a built-in interactive coding playground that you can start by typing python in your terminal. Once in a REPL session, you can run Python code:

Python
>>> "Python!" * 3
'Python!Python!Python!'
>>> 40 + 2
42

In the REPL, you can use Python as a calculator, but also try any Python code you can think of, and much more! Jump to starting and terminating REPL interactive sessions if you want to get your hands dirty right away, or keep reading to gather more background context first.

Note: In this tutorial, you'll learn about the CPython standard REPL, which is available in all the installers of this Python distribution. If you don't have CPython yet, then check out How to Install Python on Your System: A Guide for detailed instructions.

The standard REPL has changed significantly since Python 3.13 was released. Several limitations from earlier versions have been lifted. Throughout this tutorial, version differences are indicated when appropriate.
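
One small but telling example, assuming Python 3.13 or later: you can now leave the session by typing exit on its own, whereas the classic REPL required calling exit() or pressing Ctrl-D:

Python
>>> exit  # Python 3.13+: ends the session without parentheses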

To dive deeper into the new REPL features, check out these resources:

The Python interpreter can execute Python code in two modes:

  1. Script, or program
  2. Interactive, or REPL

In script mode, you use the interpreter to run a source file, typically a .py file, as an executable program. In this case, Python loads the file's content and runs the code line by line, following the script or program's execution flow.

Alternatively, interactive mode is when you launch the interpreter using the python command and use it as a platform to run code that you type in directly.
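
As a quick illustration of both modes, assuming a file named hello.py that contains a single print() call:

Shell
$ python hello.py
Hello from a script
$ python
>>> print("Hello from the REPL")
Hello from the REPL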

In this tutorial, you'll learn how to use the Python standard REPL to run code interactively, which allows you to try ideas and test concepts when using and learning Python. Are you ready to take a closer look at the Python REPL? Keep reading!

What Is Python's Interactive Shell or REPL?

When you run the Python interpreter in interactive mode, you open an interactive shell, also known as an interactive session. In this shell, your keyboard is the input source, and your screen is the output destination.

Note: In this tutorial, the terms interactive shell, interactive session, interpreter session, and REPL session are used interchangeably.

Here's how the REPL works: it takes input consisting of Python code, which the interpreter parses and evaluates. Next, the interpreter displays the result on your screen, and the process starts again as a loop.

Read the full article at https://realpython.com/python-repl/ »



12 Nov 2025 2:00pm GMT

Django community aggregator: Community blog posts

Behind the Curtain as a Conference Chair

This post is the first in a three-part series reflecting on DjangoCon US 2025. In this post, I'm taking you behind the scenes of the conference to share what it taught me about leadership, community, and the power of holding space for others.

12 Nov 2025 12:00pm GMT

Django-Tailwind v4.4: Now with Zero Node.js Setup via Standalone Tailwind CLI

I've just released version 4.4 of Django-Tailwind, and it comes with a major new feature: you can now use Tailwind CSS without needing Node.js.

This is made possible by another one of my packages, Pytailwindcss, which brings support for Tailwind's Standalone CLI into the Python ecosystem. …

Read now

12 Nov 2025 10:44am GMT

11 Nov 2025

Planet Twisted

Glyph Lefkowitz: The “Dependency Cutout” Workflow Pattern, Part I

Tell me if you've heard this one before.

You're working on an application. Let's call it "FooApp". FooApp has a dependency on an open source library, let's call it "LibBar". You find a bug in LibBar that affects FooApp.

To envisage the best possible version of this scenario, let's say you actively like LibBar, both technically and socially. You've contributed to it in the past. But this bug is causing production issues in FooApp today, and LibBar's release schedule is quarterly. FooApp is your job; LibBar is (at best) your hobby. Blocking on the full upstream contribution cycle and waiting for a release is an absolute non-starter.

What do you do?

There are a few common reactions to this type of scenario, all of which are bad options.

I will enumerate them specifically here, because I suspect that some of them may resonate with many readers:

  1. Find an alternative to LibBar, and switch to it.

    This is a bad idea because a transition to an alternative for a core infrastructure component could be extremely expensive.

  2. Vendor LibBar into your codebase and fix your vendored version.

    This is a bad idea because carrying this one fix now requires you to maintain all the tooling associated with a monorepo[1]: you have to be able to start pulling in new versions from LibBar regularly, reconcile your changes even though you now have a separate version history on your imported version, and so on.

  3. Monkey-patch LibBar to include your fix.

    This is a bad idea because you are now extremely tightly coupled to a specific version of LibBar. By modifying LibBar internally like this, you're inherently violating its compatibility contract, in a way which is going to be extremely difficult to test. You can test this change, of course, but as LibBar changes, you will need to replicate any relevant portions of its test suite (which may be its entire test suite) in FooApp. Lots of potential duplication of effort there.

  4. Implement a workaround in your own code, rather than fixing it.

    This is a bad idea because you are distorting the responsibility for correct behavior. LibBar is supposed to do LibBar's job, and unless you have a full wrapper for it in your own codebase, other engineers (including "yourself, personally") might later forget to go through the alternate, workaround codepath, and invoke the buggy LibBar behavior again in some new place.

  5. Implement the fix upstream in LibBar anyway, because that's the Right Thing To Do, and burn credibility with management while you anxiously wait for a release with the bug in production.

    This is a bad idea because you are betraying your users, by allowing the buggy behavior to persist, for the workflow convenience of your dependency providers. Your users are probably giving you money, and trusting you with their data. This means you have both ethical and economic obligations to consider their interests.

    As much as it's nice to participate in the open source community and take on an appropriate level of burden to maintain the commons, this cannot sustainably be at the explicit expense of the population you serve directly.

    Even if we only care about the open source maintainers here, there's still a problem: as you are likely to come under immediate pressure to ship your changes, you will inevitably relay at least a bit of that stress to the maintainers. Even if you try to be exceedingly polite, the maintainers will know that you are coming under fire for not having shipped the fix yet, and are likely to feel an even greater burden of obligation to ship your code fast.

    Much as it's good to contribute the fix, it's not great to put this on the maintainers.

The respective incentive structures of software development (specifically, of corporate application development and open source infrastructure development) make options 1-4 very common.

On the corporate / application side, these issues are:

But there are problems on the open source side as well. Those problems are all derived from one big issue: because we're often working with relatively small sums of money, it's hard for upstream open source developers to consume either money or patches from application developers. It's nice to say that you should contribute money to your dependencies, and you absolutely should, but the cost-benefit function is discontinuous. Before a project reaches the fiscal threshold where it can be at least one person's full-time job to worry about this stuff, there's often no-one responsible in the first place. Developers will therefore gravitate to the issues that are either fun, or relevant to their own job.

These mutually-reinforcing incentive structures are a big reason that users of open source infrastructure, even teams who work at corporate users with zillions of dollars, don't reliably contribute back.

The Answer We Want

All those options are bad. If we had a good option, what would it look like?

It is both practically necessary[3] and morally required[4] for you to have a way to temporarily rely on a modified version of an open source dependency, without permanently diverging.

Below, I will describe a desirable abstract workflow for achieving this goal.

Step 0: Report the Problem

Before you get started with any of these other steps, write up a clear description of the problem and report it to the project as an issue; specifically, in contrast to writing it up as a pull request. Describe the problem before submitting a solution.

You may not be able to wait for a volunteer-run open source project to respond to your request, but you should at least tell the project what you're planning on doing.

If you don't hear back from them at all, you will have at least made sure to comprehensively describe your issue and strategy beforehand, which will provide some clarity and focus to your changes.

If you do hear back from them, in the worst case scenario, you may discover that a hard fork will be necessary because they don't consider your issue valid, but even that information will save you time, if you know it before you get started. In the best case, you may get a reply from the project telling you that you've misunderstood its functionality and that there is already a configuration parameter or usage pattern that will resolve your problems with no new code. But in all cases, you will benefit from early coordination on what needs fixing before you get to how to fix it.

Step 1: Source Code and CI Setup

Fork the source code for your upstream dependency to a writable location where it can live at least for the duration of this one bug-fix, and possibly for the duration of your application's use of the dependency. After all, you might want to fix more than one bug in LibBar.

You want to have a place where you can put your edits, that will be version controlled and code reviewed according to your normal development process. This probably means you'll need to have your own main branch that diverges from your upstream's main branch.

Remember: you're going to need to deploy this to your production, so testing gates that your upstream only applies to final releases of LibBar will need to be applied to every commit here.

Depending on LibBar's own development process, this may result in slightly unusual configurations where, for example, your fixes are written against the last LibBar release tag, rather than its current[5] main; if the project has a branch-freshness requirement, you might need two branches, one for your upstream PR (based on main) and one for your own use (based on the release branch with your changes).

Ideally for projects with really good CI and a strong "keep main release-ready at all times" policy, you can deploy straight from a development branch, but it's good to take a moment to consider this before you get started. It's usually easier to rebase changes from an older HEAD onto a newer one than it is to go backwards.

Speaking of CI, you will want to have your own CI system. The fact that GitHub Actions has become a de-facto lingua franca of continuous integration means that this step may be quite simple, and your forked repo can just run its own instance.

Optional Bonus Step 1a: Artifact Management

If you have an in-house artifact repository, you should set that up for your dependency too, and upload your own build artifacts to it. You can often treat your modified dependency as an extension of your own source tree and install from a GitHub URL, but if you've already gone to the trouble of having an in-house package repository, you can pretend you've taken over maintenance of the upstream package temporarily (which you kind of have) and leverage those workflows for caching and build-time savings as you would with any other internal repo.
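
As a concrete sketch of what this temporary pin can look like in a pip-style setup, FooApp's dependency metadata can point straight at the fork; the organization, repository, and branch names here are hypothetical:

# pyproject.toml: temporarily install LibBar from your fork
[project]
dependencies = [
    "libbar @ git+https://github.com/fooapp-org/libbar@fix-branch",
]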

Step 2: Do The Fix

Now that you've got somewhere to edit LibBar's code, you will want to actually fix the bug.

Step 2a: Local Filesystem Setup

Before you have a production version on your own deployed branch, you'll want to test locally, which means having both repositories in a single integrated development environment.

At this point, you will want to have a local filesystem reference to your LibBar dependency, so that you can make real-time edits, without going through a slow cycle of pushing to a branch in your LibBar fork, pushing to a FooApp branch, and waiting for all of CI to run on both.

This is useful in both directions: as you prepare the FooApp branch that makes any necessary updates on that end, you'll want to make sure that FooApp can exercise the LibBar fix in any integration tests. As you work on the LibBar fix itself, you'll also want to be able to use FooApp to exercise the code and see if you've missed anything - and this, you wouldn't get in CI, since LibBar can't depend on FooApp itself.

In short, you want to be able to treat both projects as an integrated development environment, with support from your usual testing and debugging tools, just as much as you want your deployment output to be an integrated artifact.
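
One minimal way to get that integrated setup, assuming a pip-based workflow and a clone of your LibBar fork sitting next to FooApp (the path is hypothetical), is an editable install:

# Run in FooApp's environment: edits in ../libbar take effect
# immediately, with no push-and-reinstall cycle
pip install -e ../libbar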

Step 2b: Branch Setup for PR

However, for continuous integration to work, you will also need to have a remote resource reference of some kind from FooApp's branch to LibBar. You will need 2 pull requests: the first to land your LibBar changes to your internal LibBar fork and make sure it's passing its own tests, and then a second PR to switch your LibBar dependency from the public repository to your internal fork.

At this step it is very important to ensure that there is an issue filed on your own internal backlog to drop your LibBar fork. You do not want to lose track of this work; it is technical debt that must be addressed.

Until it's addressed, automated tools like Dependabot will not be able to apply security updates to LibBar for you; you're going to need to manually integrate every upstream change. This type of work is itself very easy to drop or lose track of, so you might just end up stuck on a vulnerable version.

Step 3: Deploy Internally

Now that you're confident that the fix will work, and that your temporarily-internally-maintained version of LibBar isn't going to break anything on your site, it's time to deploy.

Some deployment heritage should help to provide some evidence that your fix is ready to land in LibBar, but at the next step, please remember that your production environment isn't necessarily emblematic of that of all LibBar users.

Step 4: Propose Externally

You've got the fix, you've tested the fix, you've got the fix in your own production, you've told upstream you want to send them some changes. Now, it's time to make the pull request.

You're likely going to get some feedback on the PR, even if you think it's already ready to go; as I said, despite having been proven in your production environment, you may get feedback about additional concerns from other users that you'll need to address before LibBar's maintainers can land it.

As you process the feedback, make sure that each new iteration of your branch gets re-deployed to your own production. It would be a huge bummer to go through all this trouble, and then end up unable to deploy the next publicly released version of LibBar within FooApp because you forgot to test that your responses to feedback still worked on your own environment.

Step 4a: Hurry Up And Wait

If you're lucky, upstream will land your changes to LibBar. But, there's still no release version available. Here, you'll have to stay in a holding pattern until upstream can finalize the release on their end.

Depending on some particulars, it might make sense at this point to archive your internal LibBar repository and move your pinned release version to a git hash of the LibBar version where your fix landed, in their repository.

Before you do this, check in with the LibBar core team and make sure that they understand that's what you're doing and they don't have any wacky workflows which may involve rebasing or eliding that commit as part of their release process.

Step 5: Unwind Everything

Finally, you eventually want to stop carrying any patches and move back to an official released version that integrates your fix.

You want to do this because this is what the upstream will expect when you are reporting bugs. Part of the benefit of using open source is benefiting from the collective work to do bug-fixes and such, so you don't want to be stuck off on a pinned git hash that the developers do not support for anyone else.

As I said in step 2b[6], make sure to maintain a tracking task for doing this work, because leaving this sort of relatively easy-to-clean-up technical debt lying around is something that can potentially create a lot of aggravation for no particular benefit. Make sure to put your internal LibBar repository into an appropriate state at this point as well.

Up Next

This is part 1 of a 2-part series. In part 2, I will explore in depth how to execute this workflow specifically for Python packages, using some popular tools. I'll discuss my own workflow, standards like PEP 517 and pyproject.toml, and of course, by the popular demand that I just know will come, uv.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. if you already have all the tooling associated with a monorepo, including the ability to manage divergence and reintegrate patches with upstream, you already have the higher-overhead version of the workflow I am going to propose, so, never mind. but chances are you don't have that, very few companies do.

  2. In any business where one must wrangle with Legal, 3 hours is a wildly optimistic estimate.

  3. c.f. @mcc@mastodon.social

  4. c.f. @geofft@mastodon.social

  5. In an ideal world every project would keep its main branch ready to release at all times, no matter what, but we do not live in an ideal world.

  6. In this case, there is no question. It's 2b only, no not-2b.

11 Nov 2025 1:44am GMT

15 Oct 2025

Planet Plone - Where Developers And Integrators Write

Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.

There are several challenges when doing Plone migrations:

  • Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
  • Complex data structures. For example, a Folder with a Link as default page, which pointed to some other content that had meanwhile been moved.
  • Migrating Classic UI to Volto
  • Also, you might be migrating from a completely different CMS to Plone.

How do we do migrations in Plone in general?

  • In-place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
  • Export/import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine do you switch over to the newly migrated site. Can be more time-consuming.

Let's look at export/import, which has three parts:

  • Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
  • Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
  • Load: Transmogrifier, collective.exportimport, plone.exportimport.

Transmogrifier is old, so we won't talk about it now. collective.exportimport was written mostly by Philip Bauer. There is an @@export_all view, and then @@import_all to import it.

collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.

Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.

Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.

collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.

15 Oct 2025 3:44pm GMT

Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, along with Docker, and uv as a game changer, making the installation of Python packages much faster.

With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to set up the server and deploy to it. Our sysadmins already had some other scripts, so we needed to integrate those.

First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.

Maik Derstappen showed me copier, yet another templating language. Our idea: create a cookieplone project, and then use copier to modify it.

What about the deployment? We are on GitLab. We host our own runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This activates a pipeline to check, test, and build. When it is merged, we bump the version using release-it.

Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.

For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.
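
The exact commands aren't in the talk notes, but a manual deployment of the shape described might look something like this sketch (host, path, and service names are hypothetical):

ssh deploy@plone-server
cd /srv/project
docker compose pull   # fetch the images built by the CI pipeline
docker compose up -d  # restart the containers on the new images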

Future improvements:

  • Start the docker containers and curl/wget the /ok endpoint.
  • lock files for the backend, with pip/uv.

15 Oct 2025 3:41pm GMT

Maurits van Rees: David Glick: State of plone.restapi

[Missed the first part.]

Vision: plone.restapi aims to provide a complete, stable, documented, extensible, language-agnostic API for the Plone CMS.

New services

  • @site: global site settings. These are overall, public settings that are needed on all pages and that don't change per context.
  • @login: choose between multiple login provider.
  • @navroot: contextual data from the navigation root of the current context.
  • @inherit: contextual data from any behavior. It looks for the closest parent that has this behavior defined, and gets this data.

Dynamic teaser blocks: you can choose to customize the teaser content. So the teaser links to the item you have selected, but if you want, you can change the title and other fields.

Roadmap:

  • Don't break it.
  • 10.0 release for Plone 6.2: remove setuptools namespace.
  • Continue to support migration path from older versions: use an old plone.restapi version on an old Plone version to export it, and being able to import this to the latest versions.
  • Recycle bin (work in progress): a lot of the work from Rohan is in Classic UI, but he is working on the restapi as well.

Wishlist, no one is working on this, but would be good to have:

  • @permissions endpoint
  • @catalog endpoint
  • missing control panel
  • folder type constraints
  • Any time that you find yourself going to the Classic UI to do something, that is a sign something is missing.
  • Some changes to relative paths to fix some use cases
  • Machine readable specifications for OpenAPI, MCP
  • New forms backend
  • Bulk operations
  • Streaming API
  • External functional test suite, that you could also run against e.g. guillotina or Nick to see if it works there as well.
  • Time travel: be able to see the state of the database from some time ago. The ZODB has some options here.

15 Oct 2025 3:39pm GMT

15 Aug 2025

Planet Twisted

Glyph Lefkowitz: The Futzing Fraction

The most optimistic vision of generative AI[1] is that it will relieve us of the tedious, repetitive elements of knowledge work so that we can get to work on the really interesting problems that such tedium stands in the way of. Even if you fully believe in this vision, it's hard to deny that today, some tedium is associated with the process of using generative AI itself.

Generative AI also isn't free, and so, as responsible consumers, we need to ask: is it worth it? What's the ROI of genAI, and how can we tell? In this post, I'd like to explore a logical framework for evaluating genAI expenditures, to determine if your organization is getting its money's worth.

Perpetually Proffering Permuted Prompts

I think most LLM users would agree with me that a typical workflow with an LLM rarely involves prompting it only one time and getting a perfectly useful answer that solves the whole problem.

Generative AI best practices, even from the most optimistic vendors, all suggest that you should continuously evaluate everything. ChatGPT, which is really the only genAI product with significantly scaled adoption, still says at the bottom of every interaction:

ChatGPT can make mistakes. Check important info.

If we have to "check important info" on every interaction, it stands to reason that even if we think it's useful, some of those checks will find an error. Again, if we think it's useful, presumably the next thing to do is to perturb our prompt somehow and issue it again, in the hopes that the next invocation will succeed, by dint of one of the following:

  1. better luck this time with the stochastic aspect of the inference process,
  2. enhanced application of our skill to engineer a better prompt based on the deficiencies of the current inference, or
  3. better performance of the model by populating additional context in subsequent chained prompts.

Unfortunately, given the relative lack of reliable methods to re-generate the prompt and receive a better answer[2], checking the output and re-prompting the model can feel like just kinda futzing around with it. You try, you get a wrong answer, you try a few more times, eventually you get the right answer that you wanted in the first place. It's a somewhat unsatisfying process, but if you get the right answer eventually, it does feel like progress, and you didn't need to use up another human's time.

In fact, the hottest buzzword of the last hype cycle is "agentic". While I have my own feelings about this particular word[3], its current practical definition is "a generative AI system which automates the process of re-prompting itself, by having a deterministic program evaluate its outputs for correctness".

A better term for an "agentic" system would be a "self-futzing system".

However, the ability to automate some level of checking and re-prompting does not mean that you can fully delegate tasks to an agentic tool, either. It is, plainly put, not safe. If you leave the AI on its own, you will get terrible results that will at best make for a funny story[4][5] and at worst might end up causing serious damage[6][7].

Taken together, this all means that for any consequential task that you want to accomplish with genAI, you need an expert human in the loop. The human must be capable of independently doing the job that the genAI system is being asked to accomplish.

When the genAI guesses correctly and produces usable output, some of the human's time will be saved. When the genAI guesses wrong and produces hallucinatory gibberish or even "correct" output that nevertheless fails to account for some unstated but necessary property such as security or scale, some of the human's time will be wasted evaluating it and re-trying it.

Income from Investment in Inference

Let's evaluate an abstract, hypothetical genAI system that can automate some work for our organization. To avoid implicating any specific vendor, let's call the system "Mallory".

Is Mallory worth the money? How can we know?

Logically, there are only two outcomes that might result from using Mallory to do our work.

  1. We prompt Mallory to do some work; we check its work, it is correct, and some time is saved.
  2. We prompt Mallory to do some work; we check its work, it fails, and we futz around with the result; this time is wasted.

As a logical framework, this makes sense, but ROI is an arithmetical concept, not a logical one. So let's translate this into some terms.

In order to evaluate Mallory, let's define the Futzing Fraction, "FF", in terms of the following variables:

  • H: the average amount of time a Human worker would take to do a task, unaided by Mallory
  • I: the amount of time that Mallory takes to run one Inference[8]
  • C: the amount of time that a human has to spend Checking Mallory's output for each inference
  • P: the Probability that Mallory will produce a correct inference for each prompt
  • W: the average amount of time that it takes for a human to Write one prompt for Mallory
  • E: since we are normalizing everything to time, rather than money, we do also have to account for the dollar cost of Mallory as a product, so we will include the Equivalent amount of human time we could purchase for the marginal cost of one[9] inference

As in last week's example of simple ROI arithmetic, we will put our costs in the numerator, and our benefits in the denominator.

FF = (W + I + C + E) / (P × H)

The idea here is that for each prompt, the minimum amount of time-equivalent cost possible is W+I+C+E. The user must, at least once, write a prompt, wait for inference to run, then check the output; and, of course, pay any costs to Mallory's vendor.

If the probability of a correct answer is P = 1/3, then they will do this entire process, on average, 3 times[10], so we put P in the denominator. Finally, we divide everything by H, because we are trying to determine if we are actually saving any time or money, versus just letting our existing human, who has to be driving this process anyway, do the whole thing.

If the Futzing Fraction evaluates to a number greater than 1, as previously discussed, you are a bozo; you're spending more time futzing with Mallory than getting value out of it.

Figuring out the Fraction is Frustrating

In order to even evaluate the value of the Futzing Fraction though, you have to have a sound method to even get a vague sense of all the terms.

If you are a business leader, a lot of this is relatively easy to measure. You vaguely know what H is, because you know what your payroll costs, and similarly, you can figure out E with some pretty trivial arithmetic based on Mallory's pricing table. There are endless YouTube channels, spec sheets and benchmarks to give you I. W is probably going to be so small compared to H that it hardly merits consideration[11].

But, are you measuring C? If your employees are not checking the outputs of the AI, you're on a path to catastrophe that no ROI calculation can capture, so it had better be greater than zero.

Are you measuring P? How often does the AI get it right on the first try?

Challenges to Computing Checking Costs

In the fraction defined above, the term C is going to be large. Larger than you think.

Measuring P and C with a high degree of precision is probably going to be very hard; possibly unreasonably so, or too expensive[12] to bother with in practice. So you will undoubtedly need to work with estimates and proxy metrics. But you have to be aware that this is a problem domain where your normal method of estimating is going to be extremely vulnerable to inherent cognitive bias, and find ways to measure.

Margins, Money, and Metacognition

First let's discuss cognitive and metacognitive bias.

My favorite cognitive bias is the availability heuristic and a close second is its cousin salience bias. Humans are empirically predisposed towards noticing and remembering things that are more striking, and to overestimate their frequency.

If you are estimating the variables above based on the vibe that you're getting from the experience of using an LLM, you may be overestimating its utility.

Consider a slot machine.

If you put a dollar into a slot machine, and you lose that dollar, this is an unremarkable event. Expected, even. It doesn't seem interesting. You can repeat this over and over again, a thousand times, and each time it will seem equally unremarkable. If you do it a thousand times, you will probably get gradually more anxious as your sense of your dwindling bank account becomes slowly more salient, but losing one more dollar still seems unremarkable.

If you put a dollar in a slot machine and it gives you a thousand dollars, that will probably seem pretty cool. Interesting. Memorable. You might tell a story about this happening, but you definitely wouldn't really remember any particular time you lost one dollar.

Luckily, when you arrive at a casino with slot machines, you probably know well enough to set a hard budget in the form of some amount of physical currency you will have available to you. The odds are against you, you'll probably lose it all, but any responsible gambler will have an immediate, physical representation of their balance in front of them, so when they have lost it all, they can see that their hands are empty, and can try to resist the "just one more pull" temptation, after hitting that limit.

Now, consider Mallory.

If you put ten minutes into writing a prompt, and Mallory gives a completely off-the-rails, useless answer, and you lose ten minutes, well, that's just what using a computer is like sometimes. Mallory malfunctioned, or hallucinated, but it does that sometimes, everybody knows that. You only wasted ten minutes. It's fine. Not a big deal. Let's try it a few more times. Just ten more minutes. It'll probably work this time.

If you put ten minutes into writing a prompt, and it completes a task that would have otherwise taken you 4 hours, that feels amazing. Like the computer is magic! An absolute endorphin rush.

Very memorable. When it happens, it feels like P=1.

But... did you have a time budget before you started? Did you have a specified N such that "I will give up on Mallory as soon as I have spent N minutes attempting to solve this problem with it"? When the jackpot finally pays out that 4 hours, did you notice that you had put 6 hours' worth of 10-minute prompt coins into it?

If you are attempting to use the same sort of heuristic intuition that probably works pretty well for other business leadership decisions, Mallory's slot-machine chat-prompt user interface is practically designed to subvert those sensibilities. Most business activities do not have nearly such an emotionally variable, intermittent reward schedule. They're not going to trick you with this sort of cognitive illusion.

Thus far we have been talking about cognitive bias, but there is a metacognitive bias at play too: while Dunning-Kruger, everybody's favorite metacognitive bias, does have some problems with it, the main underlying metacognitive bias is that we tend to believe our own thoughts and perceptions, and it requires active effort to distance ourselves from them, even if we know they might be wrong.

This means you must assume any intuitive estimate of C is going to be biased low; similarly P is going to be biased high. You will forget the time you spent checking, and you will underestimate the number of times you had to re-check.

To avoid this, you will need to decide on a Ulysses pact to provide some inputs to a calculation for these factors that you will not be able to fudge if they seem wrong to you.

Problematically Plausible Presentation

Another nasty little cognitive-bias landmine for you to watch out for is the authority bias, for two reasons:

  1. People will tend to see Mallory as an unbiased, external authority, and thereby see it as more of an authority than a similarly-situated human[13].
  2. Being an LLM, Mallory will be overconfident in its answers[14].

The nature of LLM training is also such that commonly co-occurring tokens in the training corpus produce higher likelihood of co-occurring in the output; they're just going to be closer together in the vector-space of the weights; that's, like, what training a model is, establishing those relationships.

If you've ever used a heuristic to informally evaluate someone's credibility by listening for industry-specific shibboleths or ways of describing a particular issue, that skill is now useless. Having ingested every industry's expert literature, commonly-occurring phrases will always be present in Mallory's output. Mallory will usually sound like an expert, but then make mistakes at random.[15]

While you might intuitively estimate C by thinking "well, if I asked a person, how could I check that they were correct, and how long would that take?" that estimate will be extremely optimistic, because the heuristic techniques you would use to quickly evaluate incorrect information from other humans will fail with Mallory. You need to go all the way back to primary sources and actually fully verify the output every time, or you will likely fall into one of these traps.

Mallory Mangling Mentorship

So far, I've been describing the effect Mallory will have in the context of an individual attempting to get some work done. If we are considering organization-wide adoption of Mallory, however, we must also consider the impact on team dynamics. There are a number of potential side effects one might consider, but here I will focus on just one that I have observed.

I have a cohort of friends in the software industry, most of whom are individual contributors. I'm a programmer who likes programming, so are most of my friends, and we are also (sigh), charitably, pretty solidly middle-aged at this point, so we tend to have a lot of experience.

As such, we are often the folks that the team - or, in my case, the community - goes to when less-experienced folks need answers.

On its own, this is actually pretty great. Answering questions from more junior folks is one of the best parts of a software development job. It's an opportunity to be helpful, mostly just by knowing a thing we already knew. And it's an opportunity to help someone else improve their own agency by giving them knowledge that they can use in the future.

However, generative AI throws a bit of a wrench into the mix.

Let's imagine a scenario where we have 2 developers: Alice, a staff engineer who has a good understanding of the system being built, and Bob, a relatively junior engineer who is still onboarding.

The traditional interaction between Alice and Bob, when Bob has a question, goes like this:

  1. Bob gets confused about something in the system being developed, because Bob's understanding of the system is incorrect.
  2. Bob formulates a question based on this confusion.
  3. Bob asks Alice that question.
  4. Alice knows the system, so she gives an answer which accurately reflects the state of the system to Bob.
  5. Bob's understanding of the system improves, and thus he will have fewer and better-informed questions going forward.

You can imagine how repeating this simple 5-step process will eventually transform Bob into a senior developer, and then he can start answering questions on his own. Making sufficient time for regularly iterating this loop is the heart of any good mentorship process.

Now, though, with Mallory in the mix, the process now has a new decision point, changing it from a linear sequence to a flow chart.

We begin the same way, with steps 1 and 2. Bob's confused, Bob formulates a question, but then:

  1. Bob asks Mallory that question.

Here, our path then diverges into a "happy" path, a "meh" path, and a "sad" path.

The "happy" path proceeds like so:

  1. Mallory happens to formulate a correct answer.
  2. Bob's understanding of the system improves, and thus he will have fewer and better-informed questions going forward.

Great. Problem solved. We just saved some of Alice's time. But as we learned earlier,

Mallory can make mistakes. When that happens, we will need to check important info. So let's get checking:

  1. Mallory happens to formulate an incorrect answer.
  2. Bob investigates this answer.
  3. Bob realizes that this answer is incorrect because it is inconsistent with some of his prior, correct knowledge of the system, or his investigation.
  4. Bob asks Alice the same question; GOTO traditional interaction step 4.

On this path, Bob spent a while futzing around with Mallory, to no particular benefit. This wastes some of Bob's time, but then again, Bob could have ended up on the happy path, so perhaps it was worth the risk; at least Bob wasn't wasting any of Alice's much more valuable time in the process.[16]

Notice that beginning at step 4, we must begin allocating all of Bob's time to C, so C already starts getting a bit bigger than if it were just Bob checking Mallory's output specifically on tasks that Bob is doing.

That brings us to the "sad" path.

  1. Mallory happens to formulate an incorrect answer.
  2. Bob investigates this answer.
  3. Bob does not realize that this answer is incorrect because he is unable to recognize any inconsistencies with his existing, incomplete knowledge of the system.
  4. Bob integrates Mallory's incorrect information of the system into his mental model.
  5. Bob proceeds to make a larger and larger mess of his work, based on an incorrect mental model.
  6. Eventually, Bob asks Alice a new, worse question, based on this incorrect understanding.
  7. Sadly we cannot return to the happy path at this point, because now Alice must unravel the complex series of confusing misunderstandings that Mallory has unfortunately conveyed to Bob. In the really sad case, Bob actually doesn't believe Alice for a while, because Mallory seems unbiased[17], and Alice has to waste even more time convincing Bob before she can simply explain things to him.

Now, we have wasted some of Bob's time, and some of Alice's time. Everything from step 5-10 is C, and as soon as Alice gets involved, we are now adding to C at double real time. If more team members are pulled into the investigation, you are now multiplying C by the number of investigators, potentially running at triple or quadruple real time.

But That's Not All

Here I've presented a brief selection of reasons why C will be both large, and larger than you expect. To review:

  1. Gambling-style mechanics of the user interface will interfere with your own self-monitoring and developing a good estimate.
  2. You can't use human heuristics for quickly spotting bad answers.
  3. Wrong answers given to junior people who can't evaluate them will waste more time from your more senior employees.

But this is a small selection of ways that Mallory's output can cost you money and time. It's harder to simplistically model second-order effects like this, but there's also a broad range of possibilities for ways that, rather than simply checking and catching errors, an error slips through and starts doing damage. Or ways in which the output isn't exactly wrong, but still sub-optimal in ways which can be difficult to notice in the short term.

For example, you might successfully vibe-code your way to launching a series of applications, successfully "checking" the output along the way, but then discover that the resulting code is unmaintainable garbage that prevents future feature delivery, and needs to be re-written[18]. But this kind of intellectual debt isn't even specific to technical debt while coding; it can even affect such apparently genAI-amenable fields as LinkedIn content marketing[19].

Problems with the Prediction of P

C isn't the only challenging term, though. P is just as important, if not more so, and just as hard to measure.

LLM marketing materials love to phrase their accuracy in terms of a percentage. Accuracy claims for LLMs in general tend to hover around 70%[20]. But these scores vary per field, and when you aggregate them across multiple topic areas, they start to trend down. This is exactly why "agentic" approaches for more immediately-verifiable LLM outputs (with checks like "did the code work") got popular in the first place: you need to try more than once.

Independently measured claims about accuracy tend to be quite a bit lower[21]. The field of AI benchmarks is exploding, but it probably goes without saying that LLM vendors game those benchmarks[22], because of course every incentive would encourage them to do that. Regardless of what their arbitrary scoring on some benchmark might say, all that matters to your business is whether it is accurate for the problems you are solving, for the way that you use it. Which is not necessarily going to correspond to any benchmark. You will need to measure it for yourself.

With that goal in mind, our formulation of P must be a somewhat harsher standard than "accuracy". It's not merely "was the factual information contained in any generated output accurate", but, "is the output good enough that some given real knowledge-work task is done and the human does not need to issue another prompt"?

Surprisingly Small Space for Slip-Ups

The problem with reporting these things as percentages at all, however, is that our actual definition for P is 1/attempts, where attempts must, for any given task, be an integer greater than or equal to 1.

Taken in aggregate, if we succeed on the first prompt more often than not, we could end up with P > 1/2, but combined with the previous observation that you almost always have to prompt it more than once, the practical reality is that P will start at 50% and go down from there.

If we plug in some numbers, trying to be as extremely optimistic as we can, and say that we have a uniform stream of tasks, every one of which can be addressed by Mallory, every one of which:

  • would take a human 45 minutes unaided (H = 45)
  • takes Mallory 1 minute of inference time (I = 1)
  • takes the human 5 minutes to write the prompt for (W = 5)
  • takes the human 5 minutes to check (C = 5)
  • comes out right on the second prompt (P = 1/2)
  • has a negligible marginal cost in equivalent human time (E = 0.01)

Thought experiments are a dicey basis for reasoning in the face of disagreements, so I have tried to formulate something here that is absolutely, comically, over-the-top stacked in favor of the AI optimist here.

Would that be profitable? It sure seems like it, given that we are trading off 45 minutes of human time for 1 minute of Mallory-time and 10 minutes of human time. If we ask Python:

>>> def FF(H, I, C, P, W, E):
...     return (W + I + C + E) / (P * H)
...
>>> FF(H=45.0, I=1.0, C=5.0, P=1/2, W=5.0, E=0.01)
0.48933333333333334

We get a futzing fraction of about 0.4893. Not bad! Sounds like, at least under these conditions, it would indeed be cost-effective to deploy Mallory. But… realistically, do you reliably get useful, done-with-the-task quality output on the second prompt? Let's bump up the denominator on P just a little bit there, and see how we fare:

>>> FF(H=45.0, I=1.0, C=5.0, P=1/3, W=5.0, E=0.01)
0.734

Oof. Still cost-effective at 0.734, but not quite as good. Where do we cap out, exactly?

>>> from itertools import count
>>> for A in count(start=4):
...     print(A, result := FF(H=45.0, I=1.0, C=5.0, P=1/A, W=5.0, E=1/60))
...     if result > 1:
...         break
...
4 0.9792592592592594
5 1.224074074074074

With this little test, we can see that at our next iteration we are already at 0.9792, and by 5 tries per prompt, even in this absolute fever-dream of an over-optimistic scenario, with a futzing fraction of 1.2240, Mallory is now a net detriment to our bottom line.

Harm to the Humans

We have been treating H as functionally constant so far, the average of some hypothetical Gaussian distribution, but the distribution itself can also change over time.

Formally speaking, an increase to H would be good for our fraction. Maybe it would even be a good thing; it could mean we're taking on harder and harder tasks due to the superpowers that Mallory has given us.

But an observed increase to H would probably not be good. An increase could also mean your humans are getting worse at solving problems, because using Mallory has atrophied their skills23 and sabotaged learning opportunities2425. It could also go up because your senior, experienced people now hate their jobs26.

For some more vulnerable folks, Mallory might just take a shortcut to all these complex interactions and drive them completely insane27 directly. Employees experiencing an intense psychotic episode are famously less productive than those who are not.

This could all be very bad if our futzing fraction eventually does head north of 1 and you need to reintroduce human-only workflows, without Mallory.

Abridging the Artificial Arithmetic (Alliteratively)

To reiterate, I have proposed this fraction:

FF = (W + I + C + E) / (P × H)

which shows us positive ROI when FF is less than 1, and negative ROI when it is more than 1.

This model is heavily simplified. A comprehensive measurement program that tests the efficacy of any technology, let alone one as complex and rapidly changing as LLMs, is more complex than could be captured in a single blog post.

Real-world work might be insufficiently uniform to fit into a closed-form solution like this. Perhaps an iterated simulation, with variables based on the range of values seen in your team's metrics, would give better results.
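As a sketch of what such a simulation might look like (every distribution and parameter below is invented purely for illustration, not measured from any real team):

import random

def simulated_futzing_fraction(n_tasks=10_000, seed=1):
    """Monte Carlo sketch: sample per-task values rather than plugging in fixed averages."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_tasks):
        H = random.lognormvariate(3.6, 0.5)   # unaided human minutes per task (median ~36)
        I = random.uniform(0.5, 3.0)          # minutes to write each prompt
        W = random.uniform(1.0, 8.0)          # minutes spent waiting for each response
        C = random.uniform(2.0, 15.0)         # minutes to check each response
        E = 0.05                              # per-prompt expense, in minute-equivalents
        attempts = random.randint(1, 6)       # prompts needed before the task is actually done
        total += attempts * (W + I + C + E) / H
    return total / n_tasks

print(round(simulated_futzing_fraction(), 3))

The point is not the particular output, but that once you have real distributions for each term, a few thousand sampled tasks will tell you far more than a single plugged-in average.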

However, in this post, I want to illustrate that if you are going to try to evaluate an LLM-based tool, you need to at least include some representation of each of these terms somewhere. They are all fundamental to the way the technology works, and if you're not measuring them somehow, then you are flying blind into the genAI storm.

I also hope to show that a lot of existing assumptions about how benefits might be demonstrated, for example with user surveys about general impressions, or by evaluating artificial benchmark scores, are deeply flawed.

Even making what I consider to be wildly, unrealistically optimistic assumptions about these measurements, I hope I've shown:

  1. in the numerator, C might be a lot higher than you expect,
  2. in the denominator, P might be a lot lower than you expect,
  3. repeated use of an LLM might make H go up, but despite the fact that it's in the denominator, that will ultimately be quite bad for your business.

Personally, I don't have all that many concerns about E and I. E is still seeing significant loss-leader pricing, and while I might not be coming down as fast as vendors would like us to believe, if the other numbers work out I don't think either term makes a huge difference. However, there might still be surprises lurking in there, and if you want to rationally evaluate the effectiveness of a model, you need to be able to measure them and incorporate them as well.

In particular, I really want to stress the importance of the influence of LLMs on your team dynamic, as that can cause massive, hidden increases to C. LLMs present opportunities for junior employees to generate an endless stream of chaff that looks like productivity while consuming ever more reviewing time from the people who have to check it.

If you've already deployed LLM tooling without measuring these things and without updating your performance management processes to account for the strange distortions that these tools make possible, your Futzing Fraction may be much, much greater than 1, creating hidden costs and technical debt that your organization will not notice until a lot of damage has already been done.

If you got all the way here, particularly if you're someone who is enthusiastic about these technologies, thank you for reading. I appreciate your attention and I am hopeful that if we can start paying attention to these details, perhaps we can all stop futzing around so much with this stuff and get back to doing real work.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. I do not share this optimism, but I want to try very hard in this particular piece to take it as a given that genAI is in fact helpful.

  2. If we could have a better prompt on demand via some repeatable and automatable process, surely we would have used a prompt that got the answer we wanted in the first place.

  3. The software idea of a "user agent" straightforwardly comes from the legal principle of an agent, which has deep roots in common law, jurisprudence, philosophy, and math. When we think of an agent (some software) acting on behalf of a principal (a human user), this historical baggage imputes some important ethical obligations to the developer of the agent software. genAI vendors have been as eager as any software vendor to dodge responsibility for faithfully representing the user's interests even as there are some indications that at least some courts are not persuaded by this dodge, at least by the consumers of genAI attempting to pass on the responsibility all the way to end users. Perhaps it goes without saying, but I'll say it anyway: I don't like this newer interpretation of "agent".

  4. "Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents", Axel Backlund, Lukas Petersson, Feb 20, 2025

  5. "random thing are happening, maxed out usage on api keys", @leojr94 on Twitter, Mar 17, 2025

  6. "New study sheds light on ChatGPT's alarming interactions with teens"

  7. "Lawyers submitted bogus case law created by ChatGPT. A judge fined them $5,000", by Larry Neumeister for the Associated Press, June 22, 2023

  8. During which a human will be busy-waiting on an answer.

  9. Given the fluctuating pricing of these products, and fixed subscription overhead, this will obviously need to be amortized; including all the additional terms to actually convert this from your inputs is left as an exercise for the reader.

  10. I feel like I should emphasize explicitly here that everything is an average over repeated interactions. For example, you might observe that a particular LLM has a low probability of outputting acceptable work on the first prompt, but higher probability on subsequent prompts in the same context, such that it usually takes 4 prompts. For the purposes of this extremely simple closed-form model, we'd still consider that a P of 25%, even though a more sophisticated model, or a monte carlo simulation that sets progressive bounds on the probability, might produce more accurate values.

  11. No it isn't, actually, but for the sake of argument let's grant that it is.

  12. It's worth noting that all this expensive measuring itself must be included in C until you have a solid grounding for all your metrics, but let's optimistically leave all of that out for the sake of simplicity.

  13. "AI Company Poll Finds 45% of Workers Trust the Tech More Than Their Peers", by Suzanne Blake for Newsweek, Aug 13, 2025

  14. "AI Chatbots Remain Overconfident - Even When They're Wrong", by Jason Bittel for the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University, July 22, 2025

  15. "AI Mistakes Are Very Different From Human Mistakes", by Bruce Schneier and Nathan E. Sanders for IEEE Spectrum, Jan 13, 2025

  16. Foreshadowing is a narrative device in which a storyteller gives an advance hint of an upcoming event later in the story.

  17. "People are worried about the misuse of AI, but they trust it more than humans"

  18. "Why I stopped using AI (as a Senior Software Engineer)", theSeniorDev YouTube channel, Jun 17, 2025

  19. "I was an AI evangelist. Now I'm an AI vegan. Here's why.", Joe McKay for the greatchatlinkedin YouTube channel, Aug 8, 2025

  20. "What LLM is The Most Accurate?"

  21. "Study Finds That 52 Percent Of ChatGPT Answers to Programming Questions are Wrong", by Sharon Adarlo for Futurism, May 23, 2024

  22. "Off the Mark: The Pitfalls of Metrics Gaming in AI Progress Races", by Tabrez Syed on BoxCars AI, Dec 14, 2023

  23. "I tried coding with AI, I became lazy and stupid", by Thomasorus, Aug 8, 2025

  24. "How AI Changes Student Thinking: The Hidden Cognitive Risks" by Timothy Cook for Psychology Today, May 10, 2025

  25. "Increased AI use linked to eroding critical thinking skills" by Justin Jackson for Phys.org, Jan 13, 2025

  26. "AI could end my job - Just not the way I expected" by Manuel Artero Anguita on dev.to, Jan 27, 2025

  27. "The Emerging Problem of "AI Psychosis"" by Gary Drevitch for Psychology Today, July 21, 2025.

15 Aug 2025 7:51am GMT

09 Aug 2025

feedPlanet Twisted

Glyph Lefkowitz: R0ML’s Ratio

My father, also known as "R0ML" once described a methodology for evaluating volume purchases that I think needs to be more popular.

If you are a hardcore fan, you might know that he has already described this concept publicly in a talk at OSCON in 2005, among other places, but it has never found its way to the public Internet, so I'm giving it a home here, and in the process, appropriating some of his words.1


Let's say you're running a circus. The circus has many clowns. Ten thousand clowns, to be precise. They require bright red clown noses. Therefore, you must acquire a significant volume of clown noses. An enterprise licensing agreement for clown noses, if you will.

If the nose plays, it can really make the act. In order to make sure you're getting quality noses, you go with a quality vendor. You select a vendor who can supply noses for $100 each, at retail.

Do you want to buy retail? Ten thousand clowns, ten thousand noses, one hundred dollars: that's a million bucks worth of noses, so it's worth your while to get a good deal.

As a conscientious executive, you go to the golf course with your favorite clown accessories vendor and negotiate yourself a 50% discount, with a commitment to buy all ten thousand noses.

Is this a good deal? Should you take it?

To determine this, we will use an analytical tool called R0ML's Ratio (RR).

The ratio has 2 terms:

  1. the Full Undiscounted Retail List Price of Units Used (FURLPoUU), which can of course be computed by multiplying the retail list price of a single unit (in our case, $100) by the number of units used
  2. the Total Price of the Entire Enterprise Volume Licensing Agreement (TPotEEVLA), which in our case is $500,000.

It is expressed as:

RR = TPotEEVLA / FURLPoUU

Crucially, you must be able to compute the number of units used in order to complete this ratio. If, as expected, every single clown wears their nose at least once during the period of the license agreement, then our Units Used is 10,000, our FURLPoUU is $1,000,000 and our TPotEEVLA is $500,000, which makes our RR 0.5.

Congratulations. If R0ML's Ratio is less than 1, it's a good deal. Proceed.

But… maybe the nose doesn't play. Not every clown's costume is an exact clone of the traditional, stereotypical image of a clown. Many are avant-garde. Perhaps this plentiful proboscis pledge was premature. Here, I must quote the originator of this theoretical framework directly:

What if the wheeze doesn't please?

What if the schnozz gives some pause?

In other words: what if some clowns don't wear their noses?

If we were to do this deal, and then ask around afterwards to find out that only 200 of our 10,000 clowns were to use their noses, then FURLPoUU comes out to 200 * $100, for a total of $20,000. In that scenario, RR is 25, which you may observe is substantially greater than 1.

If you do a deal where R0ML's ratio is greater than 1, then you are the bozo.
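A minimal sketch of the same arithmetic in Python (the function and argument names here are mine, not R0ML's):

>>> def r0ml_ratio(deal_price, unit_list_price, units_used):
...     return deal_price / (unit_list_price * units_used)
...
>>> r0ml_ratio(deal_price=500_000, unit_list_price=100, units_used=10_000)
0.5
>>> r0ml_ratio(deal_price=500_000, unit_list_price=100, units_used=200)
25.0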


I apologize if I have belabored this point. As R0ML expressed in the email we exchanged about this many years ago,

I do not mind if you blog about it - and I don't mind getting the credit - although one would think it would be obvious.

And yeah, one would think this would be obvious? But I have belabored it because many discounted enterprise volume purchasing agreements still fail the R0ML's Ratio Bozo Test.2

In the case of clown noses, if you pay the discounted price, at least you get to keep the nose; maybe lightly-used clown noses have some resale value. But in software licensing or SaaS deals, once you've purchased the "discounted" software or service, once you have provisioned the "seats", the money is gone, and if your employees don't use it, then no value for your organization will ever result.

Measuring number of units used is very important. Without this number, you have no idea if you are a bozo or not.

It is often better to give your individual employees a corporate card and allow them to make arbitrary individual purchases of software licenses and SaaS tools, with minimal expense-reporting overhead; this will always keep R0ML's Ratio at 1.0, and thus, you will never be a bozo.

It is always better to do that the first time you are purchasing a new software tool, because the first time making such a purchase you (almost by definition) have no information about "units used" yet. You have no idea - you cannot have any idea - if you are a bozo or not.

If you don't know who the bozo is, it's probably you.

Acknowledgments

Thank you for reading, and especially thank you to my patrons who are supporting my writing on this blog. Of course, extra thanks to dad for, like, having this idea and doing most of the work here beyond my transcription. If you like my dad's ideas and you'd like to read more of them, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!


  1. One of my other favorite posts on this blog was just stealing another one of his ideas, so hopefully this one will be good too.

  2. This concept was first developed in 2001, but it has some implications for extremely recent developments in the software industry; that's a post for another day.

09 Aug 2025 4:41am GMT