30 Apr 2026

Planet Grep

Mattias Geniar: Finding Dutch audio across streaming services

We have 4 streaming services at home: Netflix, Disney+, Prime Video, and Apple TV+. Try finding which films and series are actually available with Dutch audio.

30 Apr 2026 11:10am GMT

Frank Goossens: Read: "Mijn Lieve Gunsteling" by Lucas Rijneveld and "Vuur" by John Boyne

"Mijn Lieve Gunsteling" and "Vuur" are both books about the abuse of young teenagers by adults. "Gunsteling" by Lucas Rijneveld (also known for "De avond is ongemak") is beautifully written, but through the perpetrator's eyes it digs very deep into the tormented psyche of both victim and perpetrator. For me it turned out to be almost unbearably emotionally intense, and I have the 363…

Source

30 Apr 2026 11:10am GMT

Dries Buytaert: AI rewards strict APIs

An astronaut explores a surreal landscape beneath rainbow-colored planetary rings, symbolizing the journey into AI's transformative potential for Drupal.

Every framework's API surface sits on a spectrum, from strict (typed interfaces, schemas, service containers) to loose (string keys, naming conventions, untyped hooks). Strict APIs cost more upfront: more boilerplate, more to learn before writing code. Loose APIs shift that cost later: more ambiguity, more reliance on naming conventions, and more bugs that are harder to detect and fix.

AI changes who pays. Boilerplate and learning curves don't slow agents down. What slows them down is missing feedback: code that runs but does the wrong thing, errors that don't point to the cause, conventions that have to be guessed. Magic-name binding, untyped hooks, unvalidated configuration, and conventions the code doesn't enforce produce exactly those failure modes.

Magic strings break the loop

For example, both Drupal and WordPress have long used magic-string hooks. In Drupal, you write a function like mymodule_user_login. WordPress uses a related pattern: a string action name passed to add_action(). In both cases, the binding is a string the language can't validate.

Get the name wrong and the system silently skips your code: no error, no warning, nothing in the logs. The function just sits there, unloved.

The signature is a convention, not a contract: the documentation says the user_login hook receives a $user object, but nothing enforces it. To your IDE or a static analyzer like PHPStan, it's just a function. They don't know it's wired into the platform's login flow, so they can't warn you when it's wrong.

A typed alternative makes the binding explicit. With a PHP attribute like #[Hook('user_login')] on a registered service, the class must exist, the method signature is type-checked, and the container wires the dependencies. IDEs, static analyzers, and AI coding agents can follow the chain from the attribute to the implementation.
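The contrast between the two binding styles can be sketched in a language-neutral way (Drupal's actual implementation is PHP; the registry, hook names, and decorator below are hypothetical, not Drupal's or WordPress's API):

```python
# Sketch of loose vs. strict hook binding (hypothetical, not Drupal's API):
# a string-keyed registry fails silently on a typo, while a validating
# decorator fails loudly at registration time.
import inspect

KNOWN_HOOKS = {"user_login": ("user",)}  # hook name -> expected parameters

handlers = {}

def fire(hook, *args):
    # Unknown hook names silently match nothing -- the loose failure mode.
    for fn in handlers.get(hook, []):
        fn(*args)

# Loose style: the binding is just a string; a typo means the handler
# never runs, with no error, no warning, nothing in the logs.
handlers.setdefault("user_logon", []).append(lambda user: print("hi", user))
fire("user_login", "alice")  # prints nothing -- and raises nothing

# Strict style: registration validates the hook name and the function
# signature immediately, so mistakes surface at the point of definition.
def hook(name):
    def register(fn):
        if name not in KNOWN_HOOKS:
            raise ValueError(f"unknown hook {name!r}")
        params = tuple(inspect.signature(fn).parameters)
        if params != KNOWN_HOOKS[name]:
            raise TypeError(f"{name} expects parameters {KNOWN_HOOKS[name]}")
        handlers.setdefault(name, []).append(fn)
        return fn
    return register

@hook("user_login")
def greet(user):
    print("welcome", user)

fire("user_login", "bob")  # prints: welcome bob

try:
    @hook("user_logon")  # the same typo now fails at registration
    def oops(user): ...
except ValueError as e:
    print("caught:", e)
```

The strict version is exactly the kind of tight feedback loop the attribute-based binding provides: the error points at the cause, at the moment the mistake is made.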

For AI agents, this keeps the feedback loop tight instead of turning it into trial and error. That means they can move faster, spend less time debugging, and use fewer tokens.

At DrupalCon Chicago this March, AI coding tools migrated a Lovable-generated site into Drupal in hours. The strict APIs kept the agent on track.

A bet made before AI existed

This didn't start with AI. Drupal 8, which we shipped in 2015, introduced Symfony's routing, services, and event dispatcher, replacing large parts of the procedural hook system. Since then, we've kept reducing magic hooks. The attribute-based approach (#[Hook('user_login')]) landed in Drupal 11.1 and helps remove more of the remaining procedural-only paths.

Hooks aren't the only place Drupal has been getting stricter. Drupal stores a lot of configuration in YAML, which was one of the loosest parts of the system. A multi-year validation effort has been tightening that.

When an agent generates a content type definition or editor configuration, validation catches missing keys, invalid values, and broken references before anything is saved. The agent gets a precise error pointing to the exact field, instead of a runtime failure. That tight feedback loop is what makes Drupal a strong CMS for AI-assisted development.

Drupal made this bet early, and it was painful. The Drupal 7 to Drupal 8 transition broke backward compatibility and took years to recover from. But it left the platform much stricter. More than ten years in, we're still making Drupal stricter.

Meanwhile, WordPress made a different bet, prioritizing backward compatibility over stricter APIs. That kept the platform stable for a long time. It also kept the looseness.

Those trade-offs now determine how efficiently AI agents can work with each platform.

What was style is now speed

What used to be a stylistic choice is now a speed and cost problem. Loose APIs mean more debugging and guesswork. Strict APIs mean faster, more precise feedback. This was always true for humans. It's now also true for AI agents. But today that cost shows up in tokens.

30 Apr 2026 11:10am GMT

Planet Debian

Sergio Cipriano: How to build reverse dependencies using Salsa CI

Last week, I attended MiniDebConf Campinas, and one of my favorite talks was "Salsa CI, showing features that almost nobody knows" by Aquila Macedo.

One of the things I learned is that we can easily build reverse dependencies using:

$ git push -o ci.variable="SALSA_CI_DISABLE_BUILD_REVERSE_DEPENDENCIES=0"

I tried this option before uploading typer version 0.20.0-1:

[Screenshot: Salsa CI reverse-dependency build working]

This is an amazing feature. Thanks to everyone involved in making it happen!

30 Apr 2026 2:27am GMT

Otto Kekäläinen: Mentoring Mondays for aspiring Debian contributors

Featured image of post Mentoring Mondays for aspiring Debian contributors

I mentor several people in Debian, and have been repeatedly asked to offer an opportunity to ask questions on a live call. I have now started a recurring video call for exactly that, which I call Mentoring Mondays, and it is open for anyone aspiring to contribute to Debian, one of the oldest and most widely used Linux distributions.

Mentoring Mondays have already been happening for the past few Mondays, and this week we had a record 20 people on the call. During the calls so far we have had a demo of updating a package for a new upstream release using gbp, and of how to create a Merge Request on Salsa for a new upstream version. There is clearly a need for this, so I am announcing this now also on my blog, just as I have publicly announced that I offer mentoring for aspiring Debian contributors.

What is Mentoring Mondays?

Mentoring Mondays is a recurring video call that lasts roughly 45 minutes with the agenda:

This is ideal for you if you:

The call is mainly intended for those who want to contribute to Debian (or Debian derivatives, with Ubuntu being the most popular), but anyone can join to learn about things related to contributing to a Linux distribution. Please note that the video chat uses the Debian Social Jitsi instance. Joining the call requires authentication with a Salsa account, which anyone contributing to Debian should have anyway.

Calls are not recorded, so participants can chat freely, and are also encouraged to be on-camera for an enhanced sense of community.

Next call: Monday May 4th, 2026

Make sure you are logged into Salsa first, before opening the call on Debian's Jitsi instance.

Matrix channel and future meeting time announcements

Please join the Matrix channel #mentoringmondays:matrix.debian.social if you plan to attend Mentoring Mondays. All future meeting times will be announced there. It is also the channel to post questions about Debian packaging to be answered during the call.

The current meeting time is friendly to people in European, Asian, and Australian time zones, and the call will repeat at the same time slot on:

Starting in mid-June the meeting time will change to accommodate participation in different time zones.

Spread the word

Feel free to extend the invite to anyone you think might be interested in joining!

If you mention this on social media, please post using tag #mentoringmondays, or simply boost the existing posts on the social media of your preference: Mastodon, Lemmy, Reddit, Bluesky, LinkedIn, Farcaster, X.

Thanks

A big thanks to Jason Kregting for helping organize. I would also like to thank in advance all the Debian Developers who are able to join the call and be available to participate in discussions and help answer questions.

30 Apr 2026 12:00am GMT

29 Apr 2026

Planet Lisp

Yukari Hafner: On Lisp, LLMs, and Community

https://studio.tymoon.eu/api/studio/file?id=3892

In 2015 in London I attended my first European Lisp Symposium. I was 21 at the time, and while this wasn't my first time abroad on my own, it was still a pretty stressful affair. I still remember it pretty clearly to this day: meeting Robert Strandh, Zach Beane, Didier Verna, Daniel Kochmański and many other people I'd previously admired from afar through many discussions on IRC. It was an important event for me, and the first time I'd felt like I was in a group of people I could talk with about my interests and ambitions.

Last year in 2025 I was the local chair for ELS in Zürich. It was a stressful time and I don't remember much of it other than how the stage looked, the food, and me rushing all over to get supplies and take care of other emergencies. I barely talked to anyone because I was either rushing about, stressed, or too tired.

In that time, my life has changed significantly. Over the years I took on more and more organisational roles for ELS itself: remaking the website, handling the transition to a hybrid online conference, handling the live streaming on-site, and last year being local chair.

But for other parts of the broader Lisp community I gradually changed in the opposite direction: I stopped religiously reading the #lisp/#commonlisp IRC channels. I left the Lisp Discord. I stopped replying on and ultimately altogether reading the /r/lisp subreddit. I stopped blogging about what was going on both in other places and with my own projects.

All of these changes happened over time as I found myself with less tolerance for things that annoyed me and wasted my time and energy. The endless debates about why there wasn't a new standard, the constant humm-hawwing about what """the community""" should do, why Lisp wasn't more widely used if it's so great, someone starting yet another project that was already done instead of contributing to an existing implementation, and so on and so forth.

And then I found myself thinking today: "gee, I'm not very excited to go to ELS'26, huh? Whatever happened?" I've already booked my flight and hotel, and I'll be there anyway, partly because I have to for organisational reasons. But now that I'm thinking about how I feel, I can't say for certain if I will be back next year, too. Both for financial and emotional reasons.

In recent years I've found myself more and more disconnected with male-dominated spaces in general. I don't feel at home in them. I'm already not a very social person and struggle with any kind of gathering that has more than 6 people, but a lot more so still if it's mostly men. Not necessarily because I feel like I'm in any kind of danger, but simply because I don't feel like I belong. And... you know, that's sad. Obviously me leaving won't make the situation better for the other women that do attend, but that's the dilemma with all of these situations: unless the organisation creates intentional pressure to correct the situation, it will inevitably only reinforce itself.[1]

And then there's what I can, in the nicest way, only describe as "The LLM Situation," though I will be increasingly un-nice going forward. As of early this year SBCL has happily accepted patches that are authored by or with the use of LLMs, and the maintainers have rebuffed complaints about this practise. The mailing list has also gotten its fair share of useless blather by apologists and pointless drivel dreamed up by LLMs that only wastes everyone's time, to the point where I had to just stop reading it altogether. A few maintainers of other significant projects seem to also have embraced the capitalist wasteland mass exploitation machine that disguises itself as "technology."

On the other side of things as the lead developer of Shirakumo I've decided to put out a blanket ban on all of this garbage. I do not care if LLMs work at all, or if they will ever work, or whatever. The usability of LLMs is completely irrelevant. By using them you are happily handing over the single last remaining shred of your human spirit to the capitalists to help them burn everyone else and the world with it to the ground.

I think back to the impromptu "LLM roundtable" discussion that took place at the end of ELS last year, along with the usual apologist bullshit that was spread in the ELS Signal group at the time, and some of the lightning talks that were shown. And as I think about this, I am filled with trepidation about the coming conference.

Obviously I have no idea what it will be like yet, and I have no idea what the programme will be, nor what people will be there, or what the general vibe is going to be. But nevertheless, I really hope I won't have to "crash out" as the kids say. I already lost my mind last year, seemingly being the only one that wanted to hold a firm stance against this wave of shit at the time.

So what does this all mean going forward? Well, for just now, nothing. I'll continue to be in the places I have dug out myself: mastodon, the shirakumo lichat/libera channel, my patreon, and other small, purpose-driven communities. But it's very possible I'll be leaving ELS behind me permanently after this year, cutting off even the last part of the community that used to be most of my world.

Regardless, I will still be working on my Lisp projects. If nothing else, one of the nice things about the looming tower of software I've built over all these years is that I am in control of the vast majority of it, and replacing any particular part I didn't write should it get enshittified is not that big of an endeavour.

Make no mistake though: I will continue to be increasingly outspoken and annoying about political matters that I consider important and relevant, and this will also be visible in the software I write, be that in licensing, ecosystem integration, or documentation.

I hope that more people will speak out publicly about their stance. It's important to show what you stand for, even if you're just a small part. What is considered "normal" and acceptable is only ever a matter of what people get to see, regardless of how prevalent that stance is among the population. Currently people are getting to see a lot of folks proudly and loudly making trash and littering it all over the place. This normalisation is dangerous, because it makes the average joes think it's OK for them to do it too, or even that they should be doing it.

Just the same way as any other social movement, you 🫵 play a role in it, and your voice matters. Whether you use your voice for the betterment of humans, for the betterment of the ghouls feeding off of us, or silently let the ghouls feed off of us.

See you at ELS'26 in Kraków!


  1. A very dramatic but clear demonstration of this principle is found in the "Nazi Bar" anecdote. People that don't feel comfortable will just leave, even when there is no explicit and obvious push for them to do so.

29 Apr 2026 5:12pm GMT

28 Apr 2026

Planet Debian

Abhijith PA: Patience could've saved me time.

If I had been patient, it would have saved me time. One such instance follows.

From my early blog posts, you might know I use mutt for email. Soon after getting comfortable with mutt, I started using notmuch, because limit-search in mutt is always a pain when you have multiple folders. And what better tool out there than notmuch-mutt to bind the two together.

notmuch-mutt provides three macros by default.

macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: reconstruct thread"
macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: remove message from inbox"

One for search, one for reconstructing threads, and one for manipulating tags; the last is the one I missed.

Now for my impatient part. I had already mapped f6 for my folder movements, and in my early notmuch days I only used search, so I never cared about the f6 macro provided by notmuch-mutt. As time went by I got very comfortable with notmuch and started stretching my notmuch legs: I began living more in notmuch search results like date:today tag:unread than in the mutt index. Now to the problem: since notmuch-mutt dumps all results into a temporary maildir location, flag changes cannot be performed back on the original maildir. That was annoying, because you need to distinguish which mail you have read and which you have not when you are subscribed to most of the Debian mailing lists.

I was under the impression that notmuch-mutt was not capable of doing so, and I carried on like that without checking the docs. I started doing all kinds of crazy hacks to sync these maildirs.

I even started reading the notmuch-mutt codebase.

Later, I settled on notmuch-vim, because it can manipulate flags and sync them back from notmuch to the maildir.

And while searching for something, I accidentally revisited the notmuch-mutt macro page and saw the tag manipulation. I was like :( .

If I had read about the third macro patiently when I added it to my config, I could have saved time by not doing ugly hacks around it.

I think I learned my lesson.

28 Apr 2026 6:33am GMT

25 Apr 2026

FOSDEM 2026

All FOSDEM 2026 videos are online

All video recordings from FOSDEM 2026 that are worth publishing have been processed and released. Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026. While all released videos have been reviewed by a human, it remains possible that one or more issues fell through the cracks. If you notice any problem with a video you care about, please let us know as soon as possible so we can look into it before the video-processing infrastructure is shut down for this edition. To report any…

25 Apr 2026 10:00pm GMT

16 Apr 2026

Planet Lisp

Tim Bradshaw: Structures of arrays

Or, second system.

A while ago, I decided that I'd like to test my intuition that Lisp (specifically implementations of Common Lisp) was not, in fact, bad at floating-point code and that the ease of designing languages in Lisp could make traditional Fortran-style array-bashing numerical code pretty pleasant to write.

I used an intentionally naïve numerical solution to a gravitating many-body system as a benchmark, so I could easily compare Lisp & C versions. The brief result is that the Lisp code is a little slower than C, but not much: Lisp is not, in fact, slow. Who knew?

The point here, though, is that I wanted to dress up the array-bashing code so it looked a lot more structured. To do this I wrote a macro which hid what was in fact an array of (for instance) double floats behind a bunch of syntax which made it look like an array of structures. That macro took a couple of hours.

This was fine and pretty simple, but it only dealt with a single type for each conceptual array of objects, there was no inheritance and it was restricted in various other ways. In particular it really was syntactic sugar on a vector: there was no distinct implementational type at all. So I thought well, I could make it more general and nicer.
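The idea of that first version — flat homogeneous storage dressed up with structure-like accessors — can be sketched in a few lines. This is a hypothetical Python miniature of the concept, not the author's Lisp macro; the class and method names are invented:

```python
# Hypothetical miniature of the structure-of-arrays idea (not the
# author's Lisp macro): one flat array of doubles, stored field-major so
# each field is a contiguous column, with named accessors on top.
from array import array

class ParticleVector:
    FIELDS = ("x", "y", "z")

    def __init__(self, n):
        self.n = n
        # Field f of element i lives at FIELDS.index(f) * n + i.
        self.data = array("d", [0.0] * (len(self.FIELDS) * n))

    def get(self, field, i):
        return self.data[self.FIELDS.index(field) * self.n + i]

    def set(self, field, i, value):
        self.data[self.FIELDS.index(field) * self.n + i] = value

pv = ParticleVector(4)
pv.set("y", 2, 3.5)
print(pv.get("y", 2))      # 3.5 -- reads like an array of structures
print(pv.data[1 * 4 + 2])  # 3.5 -- but it's really one flat array
```

The real macro's job is to make the accessor forms compile down to the raw array reference, with type declarations, rather than going through runtime index arithmetic like this sketch does.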

Big mistake.

The second system

Here is an example of what I wanted to be able to do (this is in fact the current syntax):

(define-soa-class example ()
  ((x :array t :type double-float)
   (y :array t :type double-float)
   (p :array t :type double-float :group pq)
   (q :array t :type double-float :group pq)
   (r :array t :type fixnum)
   (s)))

This defines a class, instances of which have five array slots and one scalar slot. Of the array slots:

The implementation will tell you this:

> (describe (make-instance 'example :dimensions '(2 2)))
#<example 8010059EEB> is an example
[...]
dimensions      (2 2)
total-size      4
rank            2
tick            1
its class example has a valid layout
it has 3 arrays:
 index 0, element type double-float, 2 slots
 index 1, element type (signed-byte 64), 1 slot
 index 2, element type double-float, 2 slots
it has 5 array slots:
 name x, index 0 offset 0
 name y, index 0 offset 1
 name r, index 1 offset 0
 name p, index 2 offset 0
 name q, index 2 offset 1

This is already too complicated: the ability to control sharing via groups is almost certainly never going to be useful: it's only even there because I thought of it quite early on and never removed it.

The class definition macro then needs to arrange life so that enough information is available for a macro which turns indexed slot access into indexed array access of the underlying arrays secretly stored in instances, inserting declarations to make this as fast as possible: anything slower than explicit array access is not acceptable. This might (and does) look like this, for example:

(with-array-slots (x y) (thing example)
  (for* ((i ...) (j ...))
    (setf (x i j) (- (y i j) (y j i)))))

As you can see from this, the resulting objects should be allowed to have rank other than 1. Inheritance should also work, including for array slots. Redefinition should be supported and obsolete macro expansions and instances at least detected.

In other words there are exactly two things I should have aimed at achieving: the ability to define fields of various types and have them grouped into (generally fewer) underlying arrays, and an implementational type to hold these things. Everything else was just unnecessary baggage which made the implementation much more complicated than it needed to be.

I had not finished making mistakes. The system needs to store some metadata about how slots map onto the underlying arrays, element types and so on, so the macro can use this to compile efficient code. There are two obvious ways to do this: use the property list of the class name, or subclass standard-class and store the metadata in the class. The first approach is simple, portable, has clear semantics, but it's 'hacky'; the second is more complicated, not portable, has unclear semantics1, but it's The Right Thing2. Another wrong decision I made without even trying.

The only thing that saved me was that the nature of software is that you can only make a finite number of bad decisions in a finite time.

More bad decisions

I was not done. Early on, I thought that, well, I could make this whole thing be a shim around defstruct: single inheritance was more than enough, and obviously I could store metadata on the property list of the type name as described above. And there's no nausea with multiple accessors or any of that nonsense.

But, somehow, I found writing a thing which would process the (structure-name ...) case of defstruct too painful, so I decided to go for the shim-around-defclass version instead. I even have a partly-complete version of the defstructy code which I abandoned. Another mistake.

I also decided that The Right Thing was to have the system support objects of rank 0. That constrains the underlying array representation (it needs to use rank \(n+1\) arrays for an object of rank \(n\)) in a way which I thought for a long time might limit performance.

Things I already knew

At any point during the implementation of this I could have told you that it was too general and the implementation was going to be too complicated for no real gain. I don't know why I made so many bad choices.

The whole process took weeks and I nearly just gave up several times.

The light at the end of the tunnel

Or: all-up testing.

Eventually, I had a thing I thought might work. The macro syntax was a bit ugly (that macro still exists, with a different name) but it seemed to work. But since the whole purpose of the thing was performance, that needed to be checked. I wasn't optimistic.

What I did was to write a version of my naïve gravitational many-body system using the new code, based closely on the previous one. The function that updates the state of the particles looks like this:

(defun/quickly step-pvs (source destination from below dt G &aux
                                (n (particle-vector-length source)))
  ;; Step a source particle vector into a destination one.
  ;;
  ;; Operation count:
  ;;  3
  ;;  + (below - from) * (n - 1) * (3 + 8 + 9)
  ;;  + (below - from) * (12 + 6)
  ;;  = (below - from) * (20 * (n - 1) + 18) + 3
  (declare (type particle-vector source destination)
           (type vector-index from)
           (type vector-dimension below)
           (type fpv dt G)
           (type vector-dimension n))
  (when (eq source destination)
    (error "botch"))
  (let*/fpv ((Gdt (* G dt))
             (Gdt^2/2 (/ (* Gdt dt) (fpv 2.0))))
    (binding-array-slots (((source particle-vector :check nil :rank 1 :suffix _s)
                           m x y z vx vy vz)
                          ((destination particle-vector :check nil :rank 1 :suffix _d)
                           m x y z vx vy vz))
      (for ((i1 (in-naturals :initially from :bound below :fixnum t)))
        (let/fpv ((ax/G zero.fpv)
                  (ay/G zero.fpv)
                  (az/G zero.fpv)
                  (x1 (x_s i1))
                  (y1 (y_s i1))
                  (z1 (z_s i1))
                  (vx1 (vx_s i1))
                  (vy1 (vy_s i1))
                  (vz1 (vz_s i1)))
          (for ((i2 (in-naturals n t)))
            (when (= i1 i2) (next))
            (let/fpv ((m2 (m_s i2))
                      (x2 (x_s i2))
                      (y2 (y_s i2))
                      (z2 (z_s i2)))
              (let/fpv ((rx (- x2 x1))
                        (ry (- y2 y1))
                        (rz (- z2 z1)))
                (let/fpv ((r^3 (let* ((r^2 (+ (* rx rx) (* ry ry) (* rz rz)))
                                      (r (sqrt r^2)))
                                 (declare (type nonnegative-fpv r^2 r))
                                 (* r r r))))
                  (incf ax/G (/ (* rx m2) r^3))
                  (incf ay/G (/ (* ry m2) r^3))
                  (incf az/G (/ (* rz m2) r^3))))))
          (setf (x_d i1) (+ x1 (* vx1 dt) (* ax/G Gdt^2/2))
                (y_d i1) (+ y1 (* vy1 dt) (* ay/G Gdt^2/2))
                (z_d i1) (+ z1 (* vz1 dt) (* az/G Gdt^2/2)))
          (setf (vx_d i1) (+ vx1 (* ax/G Gdt))
                (vy_d i1) (+ vy1 (* ay/G Gdt))
                (vz_d i1) (+ vz1 (* az/G Gdt)))))))
  destination)

And not only did it work, the performance was very close to the previous version's, straight out of the gate. The syntax is not as nice as that of the initial, quick-and-dirty version, but it is much more general, so I think that's worth it on the whole.

There have been problems since then: in particular the dependency on when classes get defined. It will never be as portable as I'd like because of the unnecessary MOP dependencies3, but it is usable and quick4.

Was it worth it? May be, but it should have been simpler.


  1. When exactly do classes get defined? Right.

  2. Nothing that uses the AMOP MOP is ever The Right Thing, because the whole thing was designed by people who were extremely smart, but still not as smart as they needed to be and thought they were. It's unclear if any MOP for CLOS can ever be satisfactory, in part because CLOS itself suffers from the same smart-but-not-smart-enough problem to a large extent, not helped by being dropped wholesale into CL at the last minute: by the time CL was standardised people had written large systems in it, but almost nobody had written anything significant using CLOS, let alone the AMOP MOP.

  3. A mistake I somehow managed to avoid was using the whole slot-definition mechanism the MOP wants you to use.

  4. I will make it available at some point.

16 Apr 2026 11:01am GMT

14 Apr 2026

Planet Lisp

Robert Smith: Not all elementary functions can be expressed with exp-minus-log

By Robert Smith

All Elementary Functions from a Single Operator is a paper by Andrzej Odrzywołek that has been making the rounds on the internet lately, being called everything from a "breakthrough" to "groundbreaking". Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be re-built as a result of this. The paper says that the function

$$ E(x,y) := \exp x - \log y $$

together with variables and the constant $1$, which we will call EML terms, are sufficient to express all elementary functions, and proceeds to give constructions for many constants and functions, from addition to $\pi$ to hyperbolic trigonometry.

I think the result is neat and thought-provoking. Odrzywołek is explicit about his definition of "elementary function". His Table 1 fixes "elementary" as 36 specific symbols, and under that definition his theorem is correct and clever, so long as we accept some of his modifications to the conventional $\log$ function and do arithmetic with infinities.
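As a quick numerical illustration of the two simplest recoveries from $E$ alone (my own sketch, not code from the paper): $E(x,1)=\exp x$ since $\log 1=0$, and $E(x,y)\to-\log y$ as $x\to-\infty$ since $\exp x\to 0$, which is where the paper's arithmetic with infinities comes in.

```python
# Numerical sketch (mine, not the paper's code) of how
# E(x, y) = exp(x) - log(y) recovers exp and log.
import math

def E(x, y):
    return math.exp(x) - math.log(y)

# exp: since log(1) = 0, E(x, 1) = exp(x).
print(E(1.0, 1.0))       # e = 2.718281828...

# log: as x -> -infinity, exp(x) -> 0, so E(x, y) -> -log(y).
# Numerically, a very negative x underflows exp to exactly 0.0.
print(-E(-1000.0, 2.0))  # log(2) = 0.693147...
```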

My concern is that the word "elementary" in the title carries a much broader meaning in standard mathematical usage. Odrzywołek recognizes this, saying little more than "[t]hat generality is not needed here" and that his work takes "the ordinary scientific-calculator point of view". He does not offer further commentary.

What is this more general setting, and does his claim still hold? In modern pure mathematics, dating back to the 19th century, the definition of "elementary function" has been well established. We'll get to a definition shortly, but to cut to the chase, the titular result does not hold in this setting. As such, in layman's terms, I do not consider the "Exp-Minus-Log" function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.

The rough TL;DR is this: Elementary functions typically include arbitrary polynomial root functions, and EML terms cannot express them. Below, I'll give a relatively technical argument that EML terms are not sufficient to express what I consider standard elementary functions.

To avoid any confusion, the purpose of this blog post is manifold:

  1. To elucidate what many mathematicians consider to be an "elementary function", which is the foundation for a variety of rich and interesting math (especially if you like computer science).
  2. To prove a result about EML terms using topological Galois theory.
  3. To demonstrate how this result may be used to show an elementary function not expressible by EML terms.

This blog post is not a refutation of Odrzywołek's work, though its title might be considered just as clickbaity (and just as accurate) as his, depending on where you sit in the hall of mathematics and computation.

Disclaimer: I audited graduate-level mathematics courses almost 20 years ago, and I am not a professional mathematician. Please email me if my statements are clumsy or incorrect.

All modern understanding of elementary functions traces back to the 19th century, Liouville being one of the big names, with countless theorems of analysis and algebra named after him. One such result is about integration: do the outputs of integrals look the same as their inputs? Well, what do "input" and "look the same" mean? Liouville defined a class of functions called elementary functions, and showed that the integral of an elementary function is sometimes elementary, and when it is, it always resembles the input in a specific way, up to extra logarithmic terms.

Since then, elementary functions have been defined by starting with rational functions and closing under arithmetic operations, composition, exponentiation, logarithms, and polynomial roots. While EML terms are quite expressive, they are unable to capture the "polynomial roots" in full generality. We will show this using Khovanskii's topological Galois theory: the monodromy group of a function built from rational functions by composition with $\exp$ and $\log$ is solvable. For anybody who has studied Galois theory in an algebra course, this will be familiar, as the destination here is effectively the same, but with more powerful intermediate tooling to wrangle exponentials and logarithms.

First, let's be more precise about what we mean by an EML term and by a standard elementary function.

Definition (EML Term): An EML term in the variables $x_1,\dots,x_n$ is any expression obtained recursively, starting from $\{1, x_1,\dots,x_n\}$, by the rule $$ T,S \mapsto \exp T-\log S. $$ Each such term, evaluated at a point where all the $\log$ arguments are nonzero, determines an analytic germ; we take $\mathcal T_n$ to be the class of germs representable this way, together with their maximal analytic continuations.
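To make the recursion concrete, here is a toy evaluator (my own encoding, not anything from Odrzywołek's paper): a term is the constant $1$, the variable, or a pair $(T,S)$ standing for $\exp T-\log S$, evaluated with the principal branch of $\log$.

```python
import cmath

ONE, X = "1", "x"  # base terms: the constant 1 and the variable x

def eval_term(term, x):
    """Evaluate an EML term at x, using the principal branch of log."""
    if term == ONE:
        return 1.0
    if term == X:
        return x
    T, S = term  # the pair (T, S) encodes exp(T) - log(S)
    return cmath.exp(eval_term(T, x)) - cmath.log(eval_term(S, x))

t1 = (X, ONE)   # exp(x) - log(1) = e^x, since log(1) = 0
t2 = (ONE, t1)  # exp(1) - log(e^x) = e - x  (for real x, principal branch)

print(abs(eval_term(t1, 0.5) - cmath.exp(0.5)))   # ~0
print(abs(eval_term(t2, 0.5) - (cmath.e - 0.5)))  # ~0
```

Multivariate terms and branch choices are ignored here; the point is only the shape of the recursion.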

Definition (Standard Elementary Function): The standard elementary functions $\mathcal{E}_n$ are the smallest class of multivalued analytic functions on domains in $\mathbb{C}^n$ containing the rational functions and closed under arithmetic operations, composition, $\exp$, $\log$, and algebraic adjunction (adjoining any branch of a root of a polynomial whose coefficients lie in the class).

What we will show is that the class of elementary functions defined this way is strictly larger than the class induced by EML terms.

Lemma: Every EML term has solvable monodromy group. In particular, if $f\in\mathcal T_n$ is algebraic over $\mathbb C(x_1,\dots,x_n)$, then its monodromy group is a finite solvable group.

Proof: We proceed by induction on the construction of the EML term. Constants and coordinate functions have trivial monodromy.

For the inductive step, suppose $f = \exp A-\log B$ with $A,B\in\mathcal T_n$, and assume that $\mathrm{Mon}(A)$ and $\mathrm{Mon}(B)$ are solvable. We argue in three steps.

Step 1: $\mathrm{Mon}(\exp A)$ is solvable. The germs of $\exp A$ are images under $\exp$ of the germs of $A$, with germs of $A$ differing by $2\pi i\mathbb Z$ collapsing to the same value. So there is a surjection $\mathrm{Mon}(A)\twoheadrightarrow\mathrm{Mon}(\exp A)$, and a quotient of a solvable group is solvable.

Step 2: $\mathrm{Mon}(\log B)$ is solvable. At a generic point $p$, germs of $\log B$ are parameterized by pairs $(b,k)$ where $b$ is a germ of $B$ at $p$ and $k\in\mathbb Z$ selects the branch of $\log$. A loop $\gamma$ acts by $$ (b,k)\mapsto\bigl(\rho_B(\gamma)(b), k+n(\gamma,b)\bigr), $$ where $\rho_B(\gamma)$ is the monodromy action of $\gamma$ on germs of $B$, and $n(\gamma,b)\in\mathbb Z$ is the winding number around $0$ of the analytic continuation of $b$ along $\gamma$. The projection $\mathrm{Mon}(\log B)\to\mathrm{Mon}(B)$ onto the first component is a surjective homomorphism. Its kernel consists of the elements of $\mathrm{Mon}(\log B)$ induced by loops $\gamma$ with $\rho_B(\gamma)=\mathrm{id}$, which then act only by integer shifts on the $k$-coordinate. Let $S_B$ be the set of germs of $B$ at $p$. For each $b\in S_B$, such a loop determines an integer shift $n(\gamma,b)$, so the kernel embeds in the direct product $\mathbb Z^{S_B}$. In particular, the kernel is abelian. Hence $\mathrm{Mon}(\log B)$ is an extension of $\mathrm{Mon}(B)$ by an abelian group, and extensions of solvable groups by abelian groups are solvable.
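The winding-number action in Step 2 is easy to watch numerically. A small sketch of my own (pure standard library): continue $\log z$ along a circle around the origin, at each step choosing the branch of $\log$ closest to the previous value. One counterclockwise loop shifts the germ by $2\pi i$, i.e. $k\mapsto k+1$.

```python
import cmath, math

def continue_log(z0, loops=1, steps=2000):
    """Continue the germ log(z0) (principal branch) along the circle |z| = |z0|
    traversed `loops` times counterclockwise, tracking the branch of log."""
    w = cmath.log(z0)
    r, theta0 = abs(z0), cmath.phase(z0)
    for k in range(1, steps + 1):
        z = r * cmath.exp(1j * (theta0 + 2 * math.pi * loops * k / steps))
        w_new = cmath.log(z)  # principal branch...
        n = round((w - w_new).imag / (2 * math.pi))
        w = w_new + 2j * math.pi * n  # ...shifted to stay continuous
    return w

print(continue_log(1 + 0j, loops=1))   # ≈ 2πi: the shift k -> k + 1
print(continue_log(1 + 0j, loops=-1))  # ≈ -2πi: the shift k -> k - 1
```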

Step 3: $\mathrm{Mon}(f)$ is solvable. At a generic point, a germ of $f=\exp A-\log B$ is obtained by subtraction from a pair (germ of $\exp A$, germ of $\log B$), and analytic continuation acts componentwise on such pairs. This gives a surjection of $\pi_1$ onto some subgroup $$ H \le \mathrm{Mon}(\exp A)\times\mathrm{Mon}(\log B), $$ and, since $f$ is obtained from the pair by subtraction, this descends to a surjection $H\twoheadrightarrow\mathrm{Mon}(f)$. So $\mathrm{Mon}(f)$ is a quotient of a subgroup of a direct product of solvable groups, hence solvable.

The second statement of the lemma follows: an algebraic function has finitely many branches, so its monodromy group is finite, and by the induction above it is solvable. ∎
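The group-theoretic closure facts used above are standard, but the two endpoints we care about can be checked by brute force. Here is a from-scratch derived-series computation (my own, no dependencies beyond the standard library): a finite group is solvable iff repeatedly passing to the commutator subgroup reaches the trivial group.

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]], with permutations of {0,...,n-1} as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens):
    """Close a finite, inverse-closed set of permutations under composition."""
    H = set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return H
        H |= new

def derived_subgroup(G):
    """Subgroup generated by all commutators g h g^-1 h^-1; the generating
    set is inverse-closed since [g, h]^-1 = [h, g]."""
    return closure({compose(compose(g, h), compose(inverse(g), inverse(h)))
                    for g in G for h in G})

def is_solvable(G):
    """Walk the derived series; solvable iff it reaches the trivial group."""
    while len(G) > 1:
        D = derived_subgroup(G)
        if len(D) == len(G):  # the series stalls above the identity
            return False
        G = D
    return True

S4 = set(permutations(range(4)))
S5 = set(permutations(range(5)))
print(is_solvable(S4))  # True:  S4 -> A4 -> V4 -> {e}
print(is_solvable(S5))  # False: S5 -> A5 -> A5 -> ...
```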

Remark. This is the core of Khovanskii's topological Galois theory; see Topological Galois Theory: Solvability and Unsolvability of Equations in Finite Terms.

Theorem: $\mathcal T_n \subsetneq \mathcal E_n$.

Proof: $\mathcal E_n$ is closed under algebraic adjunction, so any local branch of an algebraic function is elementary. In particular, a branch of a root of the generic quintic $$ f^5+a_1f^4+a_2f^3+a_3f^2+a_4f+a_5=0 $$ is elementary.

Suppose for contradiction that at some point $p$ a germ of a branch of this root agrees with a germ of an EML term $T$. By uniqueness of analytic continuation, the Riemann surfaces obtained by maximally continuing these two germs coincide, so in particular their monodromy groups coincide. The monodromy group of the generic quintic is $S_5$, which is not solvable. But by the lemma, the monodromy group of any EML term is solvable. Contradiction.

Hence $\mathcal T_n$ is a strict subset of $\mathcal E_n$. ∎
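As a quick illustration (assuming sympy is available; $x^5-x-1$ is a classical irreducible quintic whose Galois group is known to be $S_5$), a computer algebra system cannot hand back such a root in radicals and falls back to an implicit CRootOf object:

```python
import sympy as sp

x = sp.symbols('x')
p = x**5 - x - 1  # irreducible quintic with Galois group S5 (a classical fact)

roots = sp.solve(p, x)
print(roots[0])          # CRootOf(x**5 - x - 1, 0): no radical expression exists
print(roots[0].evalf())  # the unique real root, numerically
```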

Edit (15 April 2026): This article used to have an example proving that the real and complex absolute value cannot be expressed over their entire domain as EML terms under the conventional definition of $\log$. I wrote it to emphasize that Odrzywołek's approach required mathematical "patching" in order to work as intended. However, it ended up more distracting than illuminating, and was tangential to the point about the definition of "elementary", so it has been removed.

14 Apr 2026 12:00am GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

29 Jan 2026 11:00pm GMT

26 Jan 2026

feedFOSDEM 2026

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…

26 Jan 2026 11:00pm GMT