My productivity has increased by at least 300% with AI assistance. You can get amazing results nowadays, if you use the tools right. In this post, discover the 4 key ingredients that make the tools work for you.
Many people have only tried free AI via ChatGPT or similar web chatbots. It's easy to dismiss those tools, since they lack all 4 ingredients.
1. Context. If I ask you how I could improve my business, you won't be able to provide a good answer. You don't know all the details about my business that matter. All you can do is offer generic advice or make guesses (hallucinations). It's the same with AI tools. Don't rely on the knowledge baked into the LLM base models. Either provide the relevant knowledge, or provide the tools to obtain it. You can include "all relevant knowledge" in the prompt, but this is labor-intensive. This is why you want an agentic tool.
2. Agentic tools. I've been using Claude Code, a CLI tool that provides agentic AI for knowledge tasks (not just coding). There is also Claude Cowork, a desktop tool, and alternatives from vendors besides Anthropic. These tools use a loop in which the AI determines whether it needs more information and then goes looking for it. You can give these tools a task or a question, and they will, if called for, run hundreds of searches and commands. They can look at your documents, codebases, and web resources. Tell these tools "Fix GitHub issue $link", and they'll look at the issue, anything referenced in the issue, and your codebase; make changes; run tests; make more changes; check the results via the browser; fix some final issues; create a draft pull request; and provide you with a summary of what was done and possible next steps.
3. Feedback harness. When writing code, you often don't get everything correct the first time, which is why automated tests are great. More generally, fast feedback loops are great, regardless of whether you're doing software development. For software development, you'll get much better results if the AI tools can actually run the code, the tests, and other CI tools to verify everything is correct.
4. Model. AI capabilities are increasing at an incredible pace. If you're using the latest models, your experience will be worlds apart from those using 2-year-old models. For maximum quality, there are 3 metrics to max out: model size/capability, model version, and effort parameters. In other words, use the latest version of the biggest model with "max effort". At the time of writing this post, that is Claude Opus 4.7 with max effort when using Anthropic, or GPT-5.4 Pro with heavy thinking when using OpenAI. These settings eat tokens, so you will quickly run into the subscription limits of the basic tiers. Then again, paying 200 USD a month for the higher tiers so you can 5x your productivity is quite the bargain.
Those 4 points provide a conceptual framework. There is more to learn, and the AI space is evolving quickly. Ask your favorite AI tool how you can improve your AI workflows, starting from this post, to get specifics.
Some more tips:
Know how to use CLAUDE.md / AGENTS.md (a minimal example follows after these tips)
Create sandboxed environments so you can let agents run autonomously for longer periods of time
Mind current models' tendency toward sycophancy when prompting. If you go, "Here is my idea, is it good?", LLMs will often say yes even if there are issues. Adding "Be brutally honest" to your prompt or CLAUDE.md helps. It takes some practice to build up an understanding of how to prompt and in which ways responses should be distrusted. As a starting point, treat current LLMs as overeager sycophantic juniors with an inhuman, jagged skill profile, who tirelessly work at superhuman speeds.
Claude Code (or similar) plus local files in a text format works well. I've been enjoying Obsidian + Claude Code for personal knowledge management.
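To make the CLAUDE.md tip above concrete, here is a minimal sketch of what such a file might contain; the project details are invented for illustration:
Project: internal CRM (invented example); Python 3.12, Django, pytest.
Run "make test" after every change and fix failures before summarising.
Be brutally honest: point out problems with my ideas instead of agreeing.
Never commit directly to main; open a draft pull request instead.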
Shameless plug: my company provides an AI Assistant for MediaWiki, giving you AI capabilities on top of collaborative knowledge management, ideal for organizations.
I'm not much one for lists, but if, under threat of torture, I had to give a top 3 of books, "Max, Micha & het Tet-offensief" would certainly be on it. A new novel by author Johan Hardstad already appeared in Norwegian in 2024 under the title "Under brosteinen, stranden!", and according to usually well-informed sources (I emailed the publisher), in the autumn of 2026 the…
This post was co-authored with Nicholas Gates, senior policy advisor at OpenForum Europe. It was originally published on EUobserver, an independent online newspaper widely read by EU policymakers, journalists and advocacy groups. The article summarizes a series of posts I've been writing about digital sovereignty.
European digital assets have a habit of not staying European - a problem current discussions about sovereignty are overlooking.
For example, Skype had Swedish and Danish founders, Estonian engineers, a Luxembourg headquarters, and proprietary code.
Every sovereignty credential was correct on the day it would have been assessed - and meaningless after eBay acquired it, Microsoft bought it, and eventually shut it down in 2025.
This speaks to a core tension at the heart of Europe's digital sovereignty moment. The real story has to do with licensing, dependencies, and supply chains more than it has to do with ownership or operational control - both of which can (and often do) change in Europe.
The current conception of cloud sovereignty asks the right questions about where data is stored, where companies are headquartered, and whether supply chains are European.
What it doesn't yet ask is whether the sovereignty being assessed is durable and resilient - for example, whether it will survive a change of ownership, a corporate acquisition, or a disruption in the infrastructure the software depends on.
The European Commission's Cloud Sovereignty Framework provides a non-legislative assessment tool designed to evaluate the digital independence of cloud services in Europe.
It enables public authorities to rank services based on factors such as immunity from non-EU laws, operational control, and data protection.
That said, while such efforts are serious and welcome, they are likely to solve only part of the problem.
'Buy European' is a fragile concept
Europe's 'Buy European' strategy is being built on two fragile foundations it hasn't yet explicitly addressed, and this could have disastrous implications in the cloud domain in particular.
Proprietary software with a perfect sovereignty score today is one acquisition away from a different answer tomorrow. Open Source software means the question doesn't arise.
The legal right to fork changes the power dynamic entirely: it gives you leverage, lets a community step in, and means the technology cannot be held hostage.
This is the distinction the Cloud Sovereignty Framework currently misses.
When Oracle acquired Sun Microsystems in 2010, governments running MySQL faced an immediate question: what happens to this software now?
The answer turned on one thing - the licence. Because MySQL was GPL-licensed, the right to fork and maintain it independently was already being exercised before the acquisition even completed.
MySQL's creator, Monty Widenius, forked it in 2009 precisely because he saw the acquisition coming - that fork exists today as MariaDB. The licence didn't prevent Oracle from buying Sun. It meant the acquisition couldn't end the software, and anyone paying attention could act on that right before any harm materialised.
Getting the licence right is necessary, but it is not sufficient.
In 2024, a conflict between WordPress co-founder Matt Mullenweg and WP Engine disrupted updates for millions of websites.
The code was Open Source. The delivery infrastructure had a single point of control. Most programming languages likewise rely on a single central package registry, and most of those registries are controlled by US companies.
In 2019, GitHub restricted access for developers in sanctioned countries; since GitHub also owns npm, the JavaScript ecosystem's delivery infrastructure became subject to the same trade controls. These aren't interchangeable download sites you can swap out.
Sovereign software on fragile infrastructure is not sovereign. It is software waiting for a supply chain to break.
Both fragility problems point to the same conclusion: a 'Buy European' label is not a sovereignty guarantee unless it embraces licensing as a tool and helps to safeguard the supply chains the software depends on.
Consider two scenarios. A government running proprietary software on a European cloud has jurisdiction, but no exit if the provider is acquired - replacing the software could take years.
A government running Open Source software on Amazon Web Services (AWS) in Europe can move the same software to a European provider whenever it wants. Neither is ideal, but they are not equal.
Europe's sovereignty frameworks need to internalise this asymmetry. Structural sovereignty - the kind that survives change - requires open foundations that flow from licensing through the critical supply chains on which that software depends.
A call-to-action for the Cloud and AI Development Act
CAIDA should not make the same mistakes as the Cloud Sovereignty Framework. It would be a mistake to simply extend a 'Buy European' checklist. The legislation should instead define what makes sovereignty durable.
Two concrete steps would make an immediate difference.
First, it should make Open Source licensing a pass/fail gate for mission-critical procurement under the Cloud Sovereignty Framework - a condition of eligibility at the highest assurance levels, not a weighted factor in a composite score.
Second, it should require supply chain resilience assessments that distinguish between dependencies switchable in weeks and those that would take an entire language community years to replicate, with federated or mirrored European alternatives required where no fallback exists.
Yes, requiring Open Source for mission-critical systems narrows the field in the short term.
But the providers you lose are the ones whose sovereignty credentials don't survive change.
In the longer term, these requirements push European companies toward Open Source software - technology that no one can take away.
I was hosted for a long time, free of charge, on https://www.branchable.com/ by Joey and Lars. Branchable and Ikiwiki were wonderful ideas that never took off as much as they deserved. To avoid being a burden now that Branchable is nearing its end, I migrated to a VPS at Sakura.
However, I have not left Ikiwiki. I use it only as a site engine, but I haven't found any equivalent that gives me native Git integration, wiki syntax for a personal site, the flexibility of its directives (you can do anything with inline and pagespec), and multilingual support through the po plugin.
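As a small illustration of those directives (the directory name is just an example), a blog index page in Ikiwiki can be little more than:
[[!inline pages="blog/* and !*/Discussion" show="10"]]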
If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: "Projects"
Why?
With the recent 0.20 release of xdg-user-dirs we enabled the "Projects" directory by default. Support for this has existed since 2007, but was never formally enabled. This closes a more than 11-year-old bug report that asked for this feature.
The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD designs, or even things like video editing projects, where project files would end up in the "Projects" directory, with the output video being more at home in "Videos".
By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that operate in a "project-centric" manner with mixed media a better default storage location. As of now, those tools either default to the home directory or clutter the "Documents" folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.
This sucks, I don't like it!
As usual, you are in control and can modify your system's behavior. If you do not like the "Projects" folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and will instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
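For example, assuming the new entry follows the naming of the existing ones (XDG_DOCUMENTS_DIR and friends), the relevant lines in ~/.config/user-dirs.dirs would look something like this; pointing the entry back at $HOME has the same effect as deleting the folder:
# assuming the new entry follows the existing naming scheme:
XDG_PROJECTS_DIR="$HOME/Projects"
# or, to fold it back into the home directory:
XDG_PROJECTS_DIR="$HOME/"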
If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
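That file uses the same layout as the shipped defaults, without the XDG_ prefix or $HOME; assuming the new entry is named like the per-user one, it would look something like:
DOCUMENTS=Documents
# assumed name for the new entry:
PROJECTS=Projects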
What else is new?
Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the "arbitrary code execution from unsanitized input" bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.
Thanks to everyone who contributed to this release!
This post contains a bit of consumerism and is full of references to commercial products, none of which caused me to receive any money or non-monetary compensation.
This post has also been written after eating in one meal the amount of bread-like stuff that we usually have in more than 24 hours.
I've been baking bread for a long time. I don't know exactly when I started, but it was probably the early 2000s or so, and it remained a regular-ish thing until 2020, when it became an extremely regular thing, as in I believe I now bake bread on average every other day.
In the before times, I had the chance to bake pizza in a wood-fired oven a few times: a friend had one and would offer the house, my partner would mind the fire, and I would get there with the dough and prepare the pizza.
Now that we have moved to a new house, we don't have a good and convenient place for a proper masonry wood-fired oven, but we can use one of the portable ones, and having dealt with more urgent expenses, I decided that just before the potential collapse of the global economy was as good a time as any to buy the oven I had been looking at since we found this house.
I decided to get an Ooni Karu 2, having heard good things about the brand, and since it looked like a good balance between size and portability. I also didn't consider their gas fired ovens (nor did I buy the gas burner) because I'm trying to get rid of gas, not add stuff that uses it, and I didn't get an electric one because I'm not at all unhappy with the bakery-style pizza we make in our regular oven, and I have to admit we also wanted to play with fire1.
We also needed an outdoor table suitable to use the oven on and store it. Here I looked for inspiration at the Ooni tables (and for cheaper alternatives in the same style), but my mother who shares the outdoor area with us wasn't happy with the idea of steel2. And then I was browsing the modern viking shores, and found that there was a new piece in the NÄMMARÖ series my mother likes (and of which we already have some reclining chairs): a kitchen unit in wood with a steel top.
At first I expected to just skip the back panel, since it would be in the way when using the oven, but then I realized that it could probably be assembled upside down, down from the top between the table legs, and we decided to try that option.
This week everything had arrived, and we could try it.
Yesterday evening, after dinner (around 21, I think) I prepared the dough with the flour I usually use for bakery-style pizza: Farina di Grano Tenero Tipo 0 PANE (320 - 340 W); since I wanted to make things easier for myself I only used 55% hydration, so the recipe was:
Then this morning we assembled the NÄMMARÖ, then I divided the dough into eight balls, put them in a covered - but not sealed - container 3, well floured with rice flour, and then we fired the oven (as in: my partner did, I looked on for a short while and then set the table and stuff), using charcoal, because we already had some, and could conveniently get more at the supermarket.
When the oven had reached temperatures in the orange range4 I stretched the smallest ball out, working on my wooden peel, sprayed it with water5, sprinkled it with coarse salt and put it in the oven.
After 30 seconds I turned it around with the new metal peel, then again after 30 seconds, and then I lost count of how many times I repeated this6, but it was probably 2 or 3 minutes until it looked good.
And it was good. The kind of pizza that is quite soft, especially near the borders.
We ate it with fresh mozzarella and tomatoes, and then made another one the same way, to finish the mozzarella.
This was supposed to be our lunch, but we decided to try one with some leftover cooked radicchio, and that also worked quite nicely.
And finally, we decided we needed to try a more classical pizza, with tomato sauce and cured meat, of which we forgot to take pictures.
Up to here we had eaten about half of the dough, and we were getting full: I had prepared significantly more than what I expected to eat, to be able to accidentally burn some, but also with the idea to bake something else to be eaten later.
So I made two more focaccias with just water and salt, and then I tried to cook some bread with what I expected to be residual heat.
Except that the oven was getting a bit too cold, so my partner added some charcoal, and when I put the last two unflattened balls right at the back of the oven where it was still warmer, that side carbonized. After 5 minutes I moved them to the middle of the oven, and turned them, and then after another turn and 5 more minutes they were ready. And other than the burnt crust, they were pretty edible.
So, the thoughts after our first experience. Everybody around the table (my SO, my mother and me) was quite happy with the results, and they are different enough from the ones I could get with the regular oven.
As I should have expected, it's much faster than a masonry oven, both in getting to temperature and in cooling down: my plan for residual heat bread cooking will have to be adjusted with experience.
We were able to get it hot enough, but not as hot as it's supposed to be able to get: we suspect that using just charcoal may have influenced it, and next week we'll try to get some wood, and try with a mix.
As for the recipe, dividing the dough in eight parts worked quite well: maybe the pizzas are a bit on the smaller side, but since they come one at a time it's more convenient to cut and share them, and maybe make a couple more at the end.
Of course, I'll want to try different recipes, for different styles of pizzas (including some almost-trademark-violating ones) and for other types of flatbread.
I expect it won't be hard to find volunteers to help us with the experiments. :D
any insinuation that there may have been considerations of having a way to have freshly baked bread in case of a prolonged blackout may or may not be based on reality. But it wasn't the only - or even the main - reason.↩︎
come on! it's made of STEEL. how can it be not good? :D↩︎
IKEA 365+ 3.1 glass, the one that is 32 cm × 21 cm × 9 cm; it was just big enough for the amount of dough, and then I covered it with a lid that is missing the seal.↩︎
why did they put a thermometer on it, and not add labels with the actual temperature? WHY???↩︎
if you don't have dietary restrictions a bit of olive oil would taste even better.↩︎
numbers above 2 are all basically the same, right?↩︎
A while ago, I decided that I'd like to test my intuition that Lisp (specifically implementations of Common Lisp) was not, in fact, bad at floating-point code and that the ease of designing languages in Lisp could make traditional Fortran-style array-bashing numerical code pretty pleasant to write.
I used an intentionally naïve numerical solution to a gravitating many-body system as a benchmark, so I could easily compare Lisp & C versions. The brief result is that the Lisp code is a little slower than C, but not much: Lisp is not, in fact, slow. Who knew?
The point here though, is that I wanted to dress up the array-bashing code so it looked a lot more structured. To do this I wrote a macro which hid what was in fact an array of (for instance) double floats behind a bunch of syntax which made it look like an array of structures. That macro took a couple of hours.
This was fine and pretty simple, but it only dealt with a single type for each conceptual array of objects, there was no inheritance and it was restricted in various other ways. In particular it really was syntactic sugar on a vector: there was no distinct implementational type at all. So I thought well, I could make it more general and nicer.
Big mistake.
The second system
Here is an example of what I wanted to be able to do (this is in fact the current syntax):
(define-soa-class example ()
((x :array t :type double-float)
(y :array t :type double-float)
(p :array t :type double-float :group pq)
(q :array t :type double-float :group pq)
(r :array t :type fixnum)
(s)))
This defines a class, instances of which have five array slots and one scalar slot. Of the array slots:
x and y share an array and will be neighbouring elements;
p and q share a different array, because the group option says they must not share with x and y;
r will be in its own array, unless the upgraded element type of fixnum is the same as that of double-float;
s is just a slot.
The implementation will tell you this:
> (describe (make-instance 'example :dimensions '(2 2)))
#<example 8010059EEB> is an example
[...]
dimensions (2 2)
total-size 4
rank 2
tick 1
its class example has a valid layout
it has 3 arrays:
index 0, element type double-float, 2 slots
index 1, element type (signed-byte 64), 1 slot
index 2, element type double-float, 2 slots
it has 5 array slots:
name x, index 0 offset 0
name y, index 0 offset 1
name r, index 1 offset 0
name p, index 2 offset 0
name q, index 2 offset 1
This is already too complicated: the ability to control sharing via groups is almost certainly never going to be useful: it's only even there because I thought of it quite early on and never removed it.
The class definition macro then needs to arrange things so that enough information is available for a second macro which turns indexed slot access into indexed array access of the underlying arrays secretly stored in instances, inserting declarations to make this as fast as possible: anything slower than explicit array access is not acceptable. A simplified sketch of the idea (with invented names standing in for the real macro) might look like this:
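;; Self-contained sketch (invented names, not the library's actual syntax):
;; for a rank-2 object, the x/y group lives in one rank-3 double-float
;; array, and an accessor expands into a declared AREF whose last index
;; is the slot's offset within its group.
(defmacro example-x (data i j)
  `(the double-float
        (aref (the (simple-array double-float (* * 2)) ,data) ,i ,j 0)))

(defmacro example-y (data i j)
  `(the double-float
        (aref (the (simple-array double-float (* * 2)) ,data) ,i ,j 1)))

(let ((data (make-array '(2 2 2) :element-type 'double-float
                                 :initial-element 0d0)))
  (setf (example-x data 1 0) 3d0)   ; SETF works because the expansion is a SETF-able place
  (example-x data 1 0))             ; => 3.0d0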
As you can see from this, the resulting objects should be allowed to have rank other than 1. Inheritance should also work, including for array slots. Redefinition should be supported and obsolete macro expansions and instances at least detected.
In other words there are exactly two things I should have aimed at achieving: the ability to define fields of various types and have them grouped into (generally fewer) underlying arrays, and an implementational type to hold these things. Everything else was just unnecessary baggage which made the implementation much more complicated than it needed to be.
I had not finished making mistakes. The system needs to store some metadata about how slots map onto the underlying arrays, element types and so on, so the macro can use this to compile efficient code. There are two obvious ways to do this: use the property list of the class name, or subclass standard-class and store the metadata in the class. The first approach is simple, portable, has clear semantics, but it's 'hacky'; the second is more complicated, not portable, has unclear semantics1, but it's The Right Thing2. Another wrong decision I made without even trying.
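For comparison, the 'hacky' first option really is just a few lines (the key name here is invented for illustration):
;; Keep the slot layout on the property list of the class name, written
;; when the class-defining macro expands and read back when expanding
;; accessors.  Simple and portable, if inelegant.  (Key name invented.)
(setf (get 'example 'soa-layout)
      '((x :index 0 :offset 0)
        (y :index 0 :offset 1)))

(get 'example 'soa-layout)   ; => ((X :INDEX 0 :OFFSET 0) (Y :INDEX 0 :OFFSET 1))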
The only thing that saved me was that the nature of software is that you can only make a finite number of bad decisions in a finite time.
More bad decisions
I was not done. Early on, I thought that, well, I could make this whole thing be a shim around defstruct: single inheritance was more than enough, and obviously I could store metadata on the property list of the type name as described above. And there's no nausea with multiple accessors or any of that nonsense.
But, somehow, I found writing a thing which would process the (structure-name ...) case of defstruct too painful, so I decided to go for the shim-around-defclass version instead. I even have a partly-complete version of the defstructy code which I abandoned. Another mistake.
I also decided that The Right Thing was to have the system support objects of rank 0. That constrains the underlying array representation (it needs to use rank \(n+1\) arrays for an object of rank \(n\)) in a way which I thought for a long time might limit performance.
Things I already knew
At any point during the implementation of this I could have told you that it was too general and the implementation was going to be too complicated for no real gain. I don't know why I made so many bad choices.
The whole process took weeks and I nearly just gave up several times.
The light at the end of the tunnel
Or: all-up testing.
Eventually, I had a thing I thought might work. The macro syntax was a bit ugly (that macro still exists, with a different name) but it seemed to work. But since the whole purpose of the thing was performance, that needed to be checked. I wasn't optimistic.
What I did was to write a version of my naïve gravitational many-body system using the new code, based closely on the previous one. The function that updates the state of the particles looks like this:
And it not only worked, the performance was very close to the previous version, straight out of the gate. The syntax is not as nice as that of the initial, quick-and-dirty version, but it is much more general, so I think that's worth it on the whole.
There have been problems since then: in particular the dependency on when classes get defined. It will never be as portable as I'd like because of the unnecessary MOP dependencies3, but it is usable and quick4.
Was it worth it? Maybe, but it should have been simpler.
Nothing that uses the AMOP MOP is ever The Right Thing, because the whole thing was designed by people who were extremely smart, but still not as smart as they needed to be and thought they were. It's unclear whether any MOP for CLOS can ever be satisfactory, in part because CLOS itself suffers from the same smart-but-not-smart-enough problem, to a large extent not helped by being dropped wholesale into CL at the last minute: by the time CL was standardised people had written large systems in it, but almost nobody had written anything significant using CLOS, let alone the AMOP MOP. ↩
A mistake I somehow managed to avoid was using the whole slot-definition mechanism the MOP wants you to use. ↩
All Elementary Functions from a Single Operator is a paper by Andrzej Odrzywołek that has been making the rounds on the internet lately, being called everything from a "breakthrough" to "groundbreaking". Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be rebuilt as a result of this. The paper says that the function
$$ E(x,y) := \exp x - \log y $$
together with variables and the constant $1$ (expressions built from these are what we will call EML terms) is sufficient to express all elementary functions, and proceeds to give constructions for many constants and functions, from addition to $\pi$ to hyperbolic trigonometry.
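For instance, $E(x, 1) = \exp x - \log 1 = \exp x$, so the exponential itself comes for free.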
I think the result is neat and thought-provoking. Odrzywołek is explicit about his definition of "elementary function". His Table 1 fixes "elementary" as 36 specific symbols, and under that definition his theorem is correct and clever, so long as we accept some of his modifications to the conventional $\log$ function and do arithmetic with infinities.
My concern is that the word "elementary" in the title carries a much broader meaning in standard mathematical usage. Odrzywołek recognizes this, saying little more than "[t]hat generality is not needed here" and that his work takes "the ordinary scientific-calculator point of view". He does not offer further commentary.
What is this more general setting, and does his claim still hold? In modern pure mathematics, dating back to the 19th century, the definition of "elementary function" has been well established. We'll get to a definition shortly, but to cut to the chase, the titular result does not hold in this setting. As such, in layman's terms, I do not consider the "Exp-Minus-Log" function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.
The rough TL;DR is this: Elementary functions typically include arbitrary polynomial root functions, and EML terms cannot express them. Below, I'll give a relatively technical argument that EML terms are not sufficient to express what I consider standard elementary functions.
To avoid any confusion, the purpose of this blog post is manifold:
To elucidate what many mathematicians consider to be an "elementary function", which is the foundation for a variety of rich and interesting math (especially if you like computer science).
To prove a result about EML terms using topological Galois theory.
To demonstrate how this result may be used to show an elementary function not expressible by EML terms.
This blog post is not a refutation of Odrzywołek's work, though the title might be considered just as clickbaity (and just as accurate) as his, depending on where you sit in the hall of mathematics and computation.
Disclaimer: I audited graduate-level mathematics courses almost 20 years ago, and I am not a professional mathematician. Please email me if my statements are clumsy or incorrect.
The 19th century is where all modern understanding of elementary functions was developed, Liouville being one of the big names, with countless theorems of analysis and algebra named after him. One such result is about integration: do the outputs of integrals look the same as their inputs? Well, what do "input" and "look the same" mean? Liouville defined a class of functions called elementary functions, and showed that the integral of an elementary function will sometimes be elementary, and when it is, it will always resemble the input in a specific way, plus potential extra logarithmic factors.
Since then, elementary functions have been defined by starting with rational functions and closing under arithmetic operations, composition, exponentiation, logarithms, and polynomial roots. While EML terms are quite expressive, they are unable to capture the "polynomial roots" part in full generality. We will show this by using Khovanskii's topological Galois theory: the monodromy group of a function built from rational functions by composition with $\exp$ and $\log$ is solvable. For anybody who has studied Galois theory in an algebra course, this will be familiar, as the destination here is effectively the same, but with more powerful intermediate tooling to wrangle exponentials and logarithms.
First, let's be more precise about what we mean by an EML term and by a standard elementary function.
Definition (EML Term): An EML term in the variables $x_1,\dots,x_n$ is any expression obtained recursively, starting from $\{1, x_1,\dots,x_n\}$, by the rule $$ T,S \mapsto \exp T-\log S. $$ Each such term, evaluated at a point where all the $\log$ arguments are nonzero, determines an analytic germ; we take $\mathcal T_n$ to be the class of germs representable this way, together with their maximal analytic continuations.
Definition (Standard Elementary Function): The standard elementary functions $\mathcal{E}_n$ are the smallest class of multivalued analytic functions on domains in $\mathbb{C}^n$ containing the rational functions and closed under
arithmetic operations and composition,
exponentiation and logarithms,
algebraic adjunctions: if $P(Y)\in K[Y]$ is a polynomial whose coefficients lie in a previously constructed class $K$, then any local branch of a solution of $P(Y)=0$ is admitted.
What we will show is that the class of elementary functions defined this way is strictly larger than the class induced by EML terms.
Lemma: Every EML term has solvable monodromy group. In particular, if $f\in\mathcal T_n$ is algebraic over $\mathbb C(x_1,\dots,x_n)$, then its monodromy group is a finite solvable group.
Proof: We proceed by induction on the construction of EML terms. Constants and coordinate functions have trivial monodromy.
For the inductive step, suppose $f = \exp A-\log B$ with $A,B\in\mathcal T_n$, and assume that $\mathrm{Mon}(A)$ and $\mathrm{Mon}(B)$ are solvable. We argue in three steps.
Step 1: $\mathrm{Mon}(\exp A)$ is solvable. The germs of $\exp A$ are images under $\exp$ of the germs of $A$, with germs of $A$ differing by $2\pi i\mathbb Z$ collapsing to the same value. So there is a surjection $\mathrm{Mon}(A)\twoheadrightarrow\mathrm{Mon}(\exp A)$, and a quotient of a solvable group is solvable.
Step 2: $\mathrm{Mon}(\log B)$ is solvable. At a generic point $p$, germs of $\log B$ are parameterized by pairs $(b,k)$ where $b$ is a germ of $B$ at $p$ and $k\in\mathbb Z$ selects the branch of $\log$. A loop $\gamma$ acts by $$ (b,k)\mapsto\bigl(\rho_B(\gamma)(b), k+n(\gamma,b)\bigr), $$ where $\rho_B(\gamma)$ is the monodromy action of $\gamma$ on germs of $B$, and $n(\gamma,b)\in\mathbb Z$ is the winding number around $0$ of the analytic continuation of $b$ along $\gamma$. The projection $\mathrm{Mon}(\log B)\to\mathrm{Mon}(B)$ onto the first component is a surjective homomorphism. Its kernel consists of the elements of $\mathrm{Mon}(\log B)$ induced by loops $\gamma$ with $\rho_B(\gamma)=\mathrm{id}$, which then act only by integer shifts on the $k$-coordinate. Let $S_B$ be the set of germs of $B$ at $p$. For each $b\in S_B$, such a loop determines an integer shift $n(\gamma,b)$, so the kernel embeds in the direct product $\mathbb Z^{S_B}$. In particular, the kernel is abelian. Hence $\mathrm{Mon}(\log B)$ is an extension of $\mathrm{Mon}(B)$ by an abelian group, and extensions of solvable groups by abelian groups are solvable.
Step 3: $\mathrm{Mon}(f)$ is solvable. At a generic point, a germ of $f=\exp A-\log B$ is obtained by subtraction from a pair (germ of $\exp A$, germ of $\log B$), and analytic continuation acts componentwise on such pairs. This gives a surjection of $\pi_1$ onto some subgroup $$ H \le \mathrm{Mon}(\exp A)\times\mathrm{Mon}(\log B), $$ and, since $f$ is obtained from the pair by subtraction, this descends to a surjection $H\twoheadrightarrow\mathrm{Mon}(f)$. So $\mathrm{Mon}(f)$ is a quotient of a subgroup of a direct product of solvable groups, hence solvable.
The second statement of the lemma follows: an algebraic function has finitely many branches, so its monodromy group is finite; a solvable group that is finite is, well, finite and solvable. ∎
Theorem: $\mathcal T_n$ is a strict subset of $\mathcal E_n$; in particular, there are standard elementary functions that agree with no EML term.
Proof: $\mathcal E_n$ is closed under algebraic adjunction, so any local branch of an algebraic function is elementary. In particular, a branch of a root of the generic quintic $$ f^5+a_1f^4+a_2f^3+a_3f^2+a_4f+a_5=0 $$ is elementary.
Suppose for contradiction that at some point $p$ a germ of a branch of this root agrees with a germ of an EML term $T$. By uniqueness of analytic continuation, the Riemann surfaces obtained by maximally continuing these two germs coincide, so in particular their monodromy groups coincide. The monodromy group of the generic quintic is $S_5$, which is not solvable. But by the lemma, the monodromy group of any EML term is solvable. Contradiction.
Hence $\mathcal T_n$ is a strict subset of $\mathcal E_n$. ∎
Edit (15 April 2026): This article used to have an example proving that the real and complex absolute value cannot be expressed over their entire domain as EML terms under the conventional definition of $\log$. I wrote it to emphasize that Odrzywołek's approach required mathematical "patching" in order to work as intended. However, it ended up more distracting than illuminating, and was tangential to the point about the definition of "elementary", so it has been removed.
A couple of weeks ago I released FSet 2.4.0, which brought a CHAMP implementation of bags, filling out the suite of CHAMP types. 🚀 FSet users should have a look at the release page, as it also contained a number of bug fixes and minor changes.
I've since released v2.4.1 and v2.4.2, with some more bug fixes.
But the big news is the book! It brings together all the introductory material I have written, plus a lot more, along with a complete API Reference chapter.
FSet is now in the state I decided last summer I wanted to get it into: faster, better tested and debugged, more feature-complete, and much better documented than it has ever been in its nearly two decades of existence. I am, of course, very much hoping that these months of work have made the library more interesting and accessible to CL programmers who haven't tried it yet. I am even hoping that its existence helps attract newcomers to the CL community. Time will tell!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?