23 Apr 2026
Planet Grep
Jeroen De Dauw: Boot Your Productivity With AI
My productivity has increased by at least 300% with AI assistance. You can get amazing results nowadays, if you use the tools right. In this post, discover the 4 key ingredients that make the tools work for you.
Many people have only tried free AI via ChatGPT or similar web chatbots. It's easy to dismiss those tools, since they lack all 4 ingredients.
1. Context. If I ask you how I could improve my business, you won't be able to provide a good answer. You don't know all the details about my business that matter. All you can do is offer generic advice or make guesses (hallucinations). It's the same with AI tools. Don't rely on the knowledge baked into the LLM base models. Either provide this knowledge, or provide the tools to obtain that knowledge. You can include "all relevant knowledge" in the prompt, but this is labor-intensive. This is why you want an agentic tool.
2. Agentic tools. I've been using Claude Code, a CLI tool that provides agentic AI for knowledge tasks (not just coding). There is also Claude Cowork, a desktop tool, and alternatives from vendors besides Anthropic. These tools use a loop in which the AI determines whether it needs more information and then goes looking for it. You can give these tools a task or a question, and, if called for, they will run hundreds of searches and commands. They can look at your documents, codebases, and web resources. Tell these tools "Fix GitHub issue $link", and they'll look at the issue and anything referenced from it, as well as your codebase, make changes, run tests, make more changes, check results via the browser, fix some final issues, create a draft pull request, and provide you with a summary of what was done and possible next steps.
3. Feedback harness. When writing code, you often don't get everything correct the first time. Which is why automated tests are great. More generally, fast feedback loops are great, regardless of whether you're doing software development. For software development, you'll get much better results if the AI tools can actually run the code and run tests and other CI tools to verify everything is correct.
4. Model. AI capabilities are increasing at an incredible pace. If you're using the latest models, your experience will be worlds apart from those using 2-year-old models. For maximum quality, there are 3 metrics to max out: model size/capability, model version, and effort parameters. In other words, use the latest version of the biggest model with "max effort". At the time of writing this post, that is Claude Opus 4.7 with max-effort when using Anthropic, or GPT-5.4 Pro with heavy thinking when using OpenAI. These settings eat tokens, so you will quickly run into the subscription limits of the basic tiers. Then again, paying 200 USD a month for the higher tiers so you can 5x your productivity is quite the bargain.
Those 4 points provide a conceptual framework. There is more to learn, and the AI space is evolving quickly. Ask your favorite AI tool how you can improve your AI workflows, starting from this post, to get specifics.
Some more tips:
- Know how to use CLAUDE.md / AGENTS.md
- Create sandboxed environments so you can let agents run autonomously for longer periods of time
- Mind the current model's tendency to sycophancy when prompting. If you go, "Here is my idea, is it good?" LLMs will often say yes even if there are issues. Adding "Be brutally honest" to your prompt or CLAUDE.md helps. It takes some practice to build up an understanding of how to prompt and in which ways responses should be distrusted. As a starting point, treat current LLMs as overeager sycophantic juniors with an inhuman, jagged skill profile, who tirelessly work at superhuman speeds.
- Claude Code (or similar) plus local files in a text format works well. I've been enjoying Obsidian + Claude Code for personal knowledge management.
You can stay up to speed with AI capabilities development via Don't Worry About the Vase and Astral Codex Ten, both of which I highly recommend.
Shameless plug: my company provides an AI Assistant for MediaWiki, giving you AI capabilities on top of collaborative knowledge management, ideal for organizations.
The post Boot Your Productivity With AI appeared first on Blog of Jeroen De Dauw.
23 Apr 2026 9:23am GMT
Frank Goossens: The new Harstad, "Onder de kasseien, het strand", is almost here but not quite yet
I'm not much for lists, but if I had to give a top 3 of books under threat of torture, "Max, Mischa & het Tet-offensief" would certainly be in it. A new novel by author Johan Harstad already appeared in Norwegian in 2024, under the title "Under brosteinen, stranden!", and according to usually well-informed sources (I emailed the publisher), in the autumn of 2026 the…
23 Apr 2026 9:23am GMT
Dries Buytaert: What does 'Buy European' even mean?
This post was co-authored with Nicholas Gates, senior policy advisor at OpenForum Europe. It was originally published on EUobserver, an independent online newspaper widely read by EU policymakers, journalists and advocacy groups. The article summarizes a series of posts I've been writing about digital sovereignty.
European digital assets have a habit of not staying European - a problem current discussions about sovereignty are overlooking.
For example, Skype had Swedish and Danish founders, Estonian engineers, a Luxembourg headquarters, and proprietary code.
Every sovereignty credential was correct on the day it would have been assessed - and meaningless after eBay acquired it, Microsoft bought it, and eventually shut it down in 2025.
This speaks to a core tension at the heart of Europe's digital sovereignty moment. The real story has to do with licensing, dependencies, and supply chains more than it has to do with ownership or operational control - both of which can (and often do) change in Europe.
The current conception of cloud sovereignty asks the right questions about where data is stored, where companies are headquartered, and whether supply chains are European.
What these assessments don't yet ask is whether the sovereignty they measure is durable and resilient - for example, whether it will survive a change of ownership, a corporate acquisition, or a disruption in the infrastructure the software depends on.
The European Commission's Cloud Sovereignty Framework provides a non-legislative assessment tool designed to evaluate the digital independence of cloud services in Europe.
It enables public authorities to rank services based on factors such as immunity from non-EU laws, operational control, and data protection.
The forthcoming Cloud and AI Development Act (CAIDA) - expected at the end of May - will possibly go further.
That said, while both are serious and welcome efforts, they are likely to solve only part of the problem.
'Buy European' is a fragile concept
Europe's 'Buy European' strategy is being built on two fragile foundations it hasn't yet explicitly addressed, and this could have disastrous implications in the cloud domain in particular.
Proprietary software with a perfect sovereignty score today is one acquisition away from a different answer tomorrow. Open Source software means the question doesn't arise.
The legal right to fork changes the power dynamic entirely: it gives you leverage, lets a community step in, and means the technology cannot be held hostage.
This is the distinction the Cloud Sovereignty Framework currently misses.
When Oracle acquired Sun Microsystems in 2010, governments running MySQL faced an immediate question: what happens to this software now?
The answer turned on one thing - the licence. Because MySQL was GPL-licensed, the right to fork and maintain it independently was already being exercised before the acquisition even completed.
MySQL's creator, Monty Widenius, forked it in 2009 precisely because he saw the acquisition coming - that fork exists today as MariaDB. The licence didn't prevent Oracle from buying Sun. It meant the acquisition couldn't end the software, and anyone paying attention could act on that right before any harm materialised.
Getting the licence right is necessary, but it is not sufficient.
In 2024, a conflict between WordPress co-founder Matt Mullenweg and WP Engine disrupted updates for millions of websites.
The code was Open Source. The delivery infrastructure had a single point of control. Most programming languages rely on a single central package registry, and most of those registries are controlled by US companies.
In 2019, GitHub restricted access for developers in sanctioned countries; since GitHub also owns npm, the JavaScript ecosystem's delivery infrastructure became subject to the same trade controls. These aren't interchangeable download sites you can swap out.
Sovereign software on fragile infrastructure is not sovereign. It is software waiting for a supply chain to break.
Both fragility problems point to the same conclusion: a 'Buy European' label is not a sovereignty guarantee unless it embraces licensing as a tool and helps to safeguard the supply chains the software depends on.
Consider two scenarios. A government running proprietary software on a European cloud has jurisdiction, but no exit if the provider is acquired - replacing the software could take years.
A government running Open Source software on Amazon Web Services (AWS) in Europe can move the same software to a European provider whenever it wants. Neither is ideal, but they are not equal.
Europe's sovereignty frameworks need to internalise this asymmetry. Structural sovereignty - the kind that survives change - requires open foundations that flow from licensing through the critical supply chains on which that software depends.
A call-to-action for the Cloud and AI Development Act
CAIDA should not make the same mistakes as the Cloud Sovereignty Framework. It would be a mistake to simply extend a 'Buy European' checklist. The legislation should instead define what makes sovereignty durable.
Two concrete steps would make an immediate difference.
First, it can make Open Source licensing a pass/fail gate for mission-critical procurement under the Cloud Sovereignty Framework - a condition of eligibility at the highest assurance levels, not a weighted factor in a composite score.
Second, it should require supply chain resilience assessments that distinguish between dependencies switchable in weeks and those that would take an entire language community years to replicate, with federated or mirrored European alternatives required where no fallback exists.
Yes, requiring Open Source for mission-critical systems narrows the field in the short term.
But the providers you lose are the ones whose sovereignty credentials don't survive change.
In the longer term, these requirements push European companies toward Open Source software - technology that no one can take away.
23 Apr 2026 9:23am GMT
22 Apr 2026
Planet Debian
Dirk Eddelbuettel: nanotime 0.3.14 on CRAN: Upstream Maintenance

Another minor update 0.3.14 for our nanotime package is now on CRAN, and has been compiled for r2u (and will have to wait to be uploaded to Debian until dependency bit64 has been updated there). nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.
This release has been driven almost entirely by Michael, who took over as bit64 maintainer and has been making changes there that have an effect on us 'downstream'. He reached out with a number of PRs which (following occasional refinement and smoothing) have all been integrated. There are no user-facing changes, behavioural changes, or enhancements in this release.
The NEWS snippet below has the fuller details.
Changes in version 0.3.14 (2026-04-22)
- Tests were refactored to use NA_integer64_ (Michael Chirico in #149 and Dirk in #156)
- nanoduration was updated for changes in bit64 4.8.0 (Michael Chirico in #152 fixing #151)
- Use of as.integer64(keep.names=TRUE) has been refactored (Michael Chirico in #154 fixing #153)
- In tests, nanotime is attached after bit64; this still needs a better fix (Michael Chirico in #155)
- The package now has a hard dependency on the just-released bit64 version 4.8.0 (or later)
Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository - and all documentation is provided at the nanotime documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
22 Apr 2026 8:34pm GMT
Vincent Bernat: CSS & vertical rhythm for text, images, and tables
Vertical rhythm aligns lines to a consistent spacing cadence down the page. It creates a predictable flow for the eye to follow. Thanks to the rlh CSS unit, vertical rhythm is now easier to implement for text.1 But illustrations and tables can disrupt the layout. The amateur typographer in me wants to follow Bringhurst's wisdom:
Headings, subheads, block quotations, footnotes, illustrations, captions and other intrusions into the text create syncopations and variations against the base rhythm of regularly leaded lines. These variations can and should add life to the page, but the main text should also return after each variation precisely on beat and in phase.
― Robert Bringhurst, The Elements of Typographic Style
Text
Three factors govern vertical rhythm: font size, line height and margin or padding. Let's set our baseline with an 18-pixel font and a 1.5 line height:
html {
  font-size: 112.5%;
  line-height: 1.5;
}

h1, h2, h3, h4 {
  font-size: 100%;
}

html, body, h1, h2, h3, h4, p, blockquote, dl, dt, dd, ol, ul, li {
  margin: 0;
  padding: 0;
}
CSS Values and Units Module Level 4 defines the rlh unit, equal to the computed line height of the root element. All browsers support it since 2023.2 Use it to insert vertical spaces or to fix the line height when altering font size:3
h1, h2, h3, h4 {
  margin-top: 2rlh;
  margin-bottom: 1rlh;
}

h1 { font-size: 2.4rem; line-height: 2rlh; }
h2 { font-size: 1.5rem; line-height: 1rlh; }
h3 { font-size: 1.2rem; line-height: 1rlh; }

p, blockquote, pre {
  margin-top: 1rlh;
}

aside {
  font-size: 0.875rem;
  line-height: 1rlh;
}
We can check the result by overlaying a grid4 on the content:

Using the rlh unit to set vertical space works well for text. You can display the grid using Ctrl+Shift+G.

If a child element uses a font with taller intrinsic metrics, it may stretch the line's box beyond the configured line height.5 A workaround is to reduce the line height to 1. The glyphs overflow but don't push the line taller:
code, kbd { line-height: 1; }
Responsive images
Responsive images are difficult to align on the grid because we don't know their height. CSS Rhythmic Sizing Module Level 1 introduces the block-step property to adjust the height of an element to a multiple of a step unit. But most browsers don't support it yet.
With JavaScript, we can add padding around the image so it does not disturb the vertical rhythm:
const targets = document.querySelectorAll(".lf-media-outer");
const adjust = (el, height) => {
  const rlh = parseFloat(getComputedStyle(document.documentElement).lineHeight);
  const padding = Math.ceil(height / rlh) * rlh - height;
  el.style.padding = `${padding / 2}px 0`;
};
targets.forEach((el) => adjust(el, el.clientHeight));

As the image is responsive, its height can change. We need to wrap a resize observer around the adjust() function:
const ro = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const height = entry.contentBoxSize[0].blockSize;
    adjust(entry.target, height);
  }
});
for (const target of targets) {
  ro.observe(target);
}
Tables
Table cells could set 1rlh as their height but they would feel constricted. Using 2rlh wastes too much space. Instead, we use incremental leading: we align one in every five lines.
table {
  border-spacing: 2px 0;
  border-collapse: separate;
  th { padding: 0.4rlh 1em; }
  td { padding: 0.2rlh 0.5em; }
}
To align the elements after the table, we need to add some padding. We can either reuse the JavaScript code from images or use a few lines of CSS that count the regular rows and compute the missing vertical padding:
table:has(tbody tr:nth-child(5n):last-child)   { padding-bottom: 0.2rlh; }
table:has(tbody tr:nth-child(5n+1):last-child) { padding-bottom: 0.8rlh; }
table:has(tbody tr:nth-child(5n+2):last-child) { padding-bottom: 0.4rlh; }
table:has(tbody tr:nth-child(5n+3):last-child) { padding-bottom: 0; }
table:has(tbody tr:nth-child(5n+4):last-child) { padding-bottom: 0.6rlh; }
A header cell has twice the padding of a regular cell. With two regular rows, the total padding is 2×2×0.2 + 2×0.4 = 1.6rlh. We need to add 0.4rlh to reach 2rlh of extra vertical padding across the table.
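The arithmetic above generalises to any row count. This JavaScript sketch is my own illustration, not code from the post; the function name bottomPadding and the constants are made up. It models each row as one line of text plus its vertical padding (0.4rlh per side for header cells, 0.2rlh for body cells) and computes the bottom padding, in rlh, that rounds the table up to a whole number of lines:

```javascript
// Row heights in rlh: one line of text plus top and bottom padding.
const HEADER_RLH = 1 + 2 * 0.4; // header cells pad 0.4rlh each side
const ROW_RLH = 1 + 2 * 0.2;    // body cells pad 0.2rlh each side

// Bottom padding (in rlh) that makes a table with one header row and
// n body rows span a whole number of rlh.
function bottomPadding(n) {
  const total = HEADER_RLH + n * ROW_RLH;
  const pad = Math.ceil(total) - total;
  return Math.round(pad * 10) / 10; // tidy up floating-point noise
}
```

For two body rows this gives 0.4, matching the computation above, and for five rows it gives 0.2, matching the 5n rule in the CSS.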

None of this is necessary. But once you start looking, you can't unsee it. Until browsers implement CSS Rhythmic Sizing, a bit of CSS wizardry and a touch of JavaScript is enough to pull it off. The main text now returns after each intrusion "precisely on beat and in phase." 🎼
1. See "Vertical rhythm using CSS lh and rlh units" by Paweł Grzybek. ❦
2. For broader compatibility, you can replace 2rlh with calc(var(--line-height) * 2rem) and set the --line-height custom property in the :root pseudo-class. I wrote a simple PostCSS plugin for this purpose. ❦
3. It would have been nicer to compute the line height with calc(round(up, calc(2.4rem / 1rlh), 0) * 1rlh). Unfortunately, typed arithmetic is not supported by Firefox yet. Moreover, browsers support round() only since 2024. Instead, I coded a PostCSS plugin for this as well. ❦
4. The following CSS code defines a grid tracking the line height:

   body::after {
     content: "";
     z-index: 9999;
     background: linear-gradient(180deg, #c8e1ff99 1px, transparent 1px);
     background-size: 20px 1rlh;
     pointer-events: none;
   }

   ❦
5. See "Deep dive CSS: font metrics, line-height and vertical-align" by Vincent De Oliveira. ❦
22 Apr 2026 7:48pm GMT
21 Apr 2026
Planet Debian
Dirk Eddelbuettel: RcppArmadillo 15.2.6-1 on CRAN: Several Updates


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1263 other packages on CRAN, downloaded 45.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 683 times according to Google Scholar.
This version updates to the 15.2.5 and 15.2.6 upstream Armadillo releases from, respectively, five and two days ago. The package has already been updated for Debian, and built for r2u. When we ran the reverse-dependency check for 15.2.5 at the end of last week, one package failed. I got in touch with the authors, filed an issue, poked some more, isolated the one line that caused an example to fail … and right then 15.2.6 came out fixing just that. It was after all an upstream issue. We used to run these checks before Conrad made a release; he now skips this and hence needed a quick follow-up release. It can happen.
The other big change is that this R package release phases out the 'dual support' for both C++14 or newer (as in current Armadillo) along with a C++11 fallback for more slowly updating packages. I am happy to say that after over eight months of this managed transition (during which CRAN expelled some laggard packages that were not moving off C++11) we are now at all packages using C++14 or newer, which is nice. And I will take this as an opportunity to stress that one can in fact manage a disruptive API change this way, as we just demonstrated. Sadly, R Core does not seem to have gotten that message, and the rollout of this package was also still a little delayed because of the commotion created by the last-minute API changes preceding the R 4.6.0 release later this week.
Smaller changes in the package are a switch in pdf vignette production to the Rcpp::asis() driver, and a higher-precision computation in rmultinom() (matching a change made in R-devel during last week in its use of Kahan summation). All detailed changes since the last CRAN release follow.
Changes in RcppArmadillo version 15.2.6-1 (2026-04-20)
- Upgraded to Armadillo release 15.2.6 (Medium Roast Deluxe)
  - Ensure internally computed tolerances are not NaN
- rmultinom deploys 'Kahan summation' as R-devel does now

Changes in RcppArmadillo version 15.2.5-1 [github-only] (2026-04-18)
- Upgraded to Armadillo release 15.2.5 (Medium Roast Deluxe)
  - Fix for handling NaN elements in .is_zero()
  - Fix for handling NaN in tolerance and conformance checks
  - Faster handling of diagonal views and submatrices with one row
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
21 Apr 2026 11:20pm GMT
16 Apr 2026
Planet Lisp
Tim Bradshaw: Structures of arrays
Or, second system.
A while ago, I decided that I'd like to test my intuition that Lisp (specifically implementations of Common Lisp) was not, in fact, bad at floating-point code and that the ease of designing languages in Lisp could make traditional Fortran-style array-bashing numerical code pretty pleasant to write.
I used an intentionally naïve numerical solution to a gravitating many-body system as a benchmark, so I could easily compare Lisp & C versions. The brief result is that the Lisp code is a little slower than C, but not much: Lisp is not, in fact, slow. Who knew?
The point here though, is that I wanted to dress up the array-bashing code so it looked a lot more structured. To do this I wrote a macro which hid what was in fact an array of (for instance) double floats behind a bunch of syntax which made it look like an array of structures. That macro took a couple of hours.
This was fine and pretty simple, but it only dealt with a single type for each conceptual array of objects, there was no inheritance and it was restricted in various other ways. In particular it really was syntactic sugar on a vector: there was no distinct implementational type at all. So I thought well, I could make it more general and nicer.
Big mistake.
The second system
Here is an example of what I wanted to be able to do (this is in fact the current syntax):
(define-soa-class example ()
((x :array t :type double-float)
(y :array t :type double-float)
(p :array t :type double-float :group pq)
(q :array t :type double-float :group pq)
(r :array t :type fixnum)
(s)))
This defines a class, instances of which have five array slots and one scalar slot. Of the array slots:
- x and y share an array and will be neighbouring elements;
- p and q share a different array, because the group option says they must not share with x and y;
- r will be in its own array, unless the upgraded element type of fixnum is the same as that of double-float;
- s is just a slot.
The implementation will tell you this:
> (describe (make-instance 'example :dimensions '(2 2)))
#<example 8010059EEB> is an example
[...]
dimensions (2 2)
total-size 4
rank 2
tick 1
its class example has a valid layout
it has 3 arrays:
index 0, element type double-float, 2 slots
index 1, element type (signed-byte 64), 1 slot
index 2, element type double-float, 2 slots
it has 5 array slots:
name x, index 0 offset 0
name y, index 0 offset 1
name r, index 1 offset 0
name p, index 2 offset 0
name q, index 2 offset 1
This is already too complicated: the ability to control sharing via groups is almost certainly never going to be useful: it's only even there because I thought of it quite early on and never removed it.
The class definition macro then needs to arrange things so that enough information is available for a macro which turns indexed slot access into indexed array access of the underlying arrays secretly stored in instances, inserting declarations to make this as fast as possible: anything slower than explicit array access is not acceptable. This might (and does) look like this, for example:
(with-array-slots (x y) (thing example)
(for* ((i ...) (j ...))
(setf (x i j) (- (y i j) (y j i)))))
As you can see from this, the resulting objects should be allowed to have rank other than 1. Inheritance should also work, including for array slots. Redefinition should be supported and obsolete macro expansions and instances at least detected.
In other words there are exactly two things I should have aimed at achieving: the ability to define fields of various types and have them grouped into (generally fewer) underlying arrays, and an implementational type to hold these things. Everything else was just unnecessary baggage which made the implementation much more complicated than it needed to be.
I had not finished making mistakes. The system needs to store some metadata about how slots map onto the underlying arrays, element types and so on, so the macro can use this to compile efficient code. There are two obvious ways to do this: use the property list of the class name, or subclass standard-class and store the metadata in the class. The first approach is simple, portable, has clear semantics, but it's 'hacky'; the second is more complicated, not portable, has unclear semantics1, but it's The Right Thing2. Another wrong decision I made without even trying.
The only thing that saved me was that the nature of software is that you can only make a finite number of bad decisions in a finite time.
More bad decisions
I was not done. Early on, I thought that, well, I could make this whole thing be a shim around defstruct: single inheritance was more than enough, and obviously I could store metadata on the property list of the type name as described above. And there's no nausea with multiple accessors or any of that nonsense.
But, somehow, I found writing a thing which would process the (structure-name ...) case of defstruct too painful, so I decided to go for the shim-around-defclass version instead. I even have a partly-complete version of the defstructy code which I abandoned. Another mistake.
I also decided that The Right Thing was to have the system support objects of rank 0. That constrains the underlying array representation (it needs to use rank \(n+1\) arrays for an object of rank \(n\)) in a way which I thought for a long time might limit performance.
Things I already knew
At any point during the implementation of this I could have told you that it was too general and the implementation was going to be too complicated for no real gain. I don't know why I made so many bad choices.
The whole process took weeks and I nearly just gave up several times.
The light at the end of the tunnel
Or: all-up testing.
Eventually, I had a thing I thought might work. The macro syntax was a bit ugly (that macro still exists, with a different name) but it seemed to work. But since the whole purpose of the thing was performance, that needed to be checked. I wasn't optimistic.
What I did was to write a version of my naïve gravitational many-body system using the new code, based closely on the previous one. The function that updates the state of the particles looks like this:
(defun/quickly step-pvs (source destination from below dt G &aux
(n (particle-vector-length source)))
;; Step a source particle vector into a destination one.
;;
;; Operation count:
;; 3
;; + (below - from) * (n - 1) * (3 + 8 + 9)
;; + (below - from) * (12 + 6)
;; = (below - from) * (20 * (n - 1) + 18) + 3
(declare (type particle-vector source destination)
(type vector-index from)
(type vector-dimension below)
(type fpv dt G)
(type vector-dimension n))
(when (eq source destination)
(error "botch"))
(let*/fpv ((Gdt (* G dt))
(Gdt^2/2 (/ (* Gdt dt) (fpv 2.0))))
(binding-array-slots (((source particle-vector :check nil :rank 1 :suffix _s)
m x y z vx vy vz)
((destination particle-vector :check nil :rank 1 :suffix _d)
m x y z vx vy vz))
(for ((i1 (in-naturals :initially from :bound below :fixnum t)))
(let/fpv ((ax/G zero.fpv)
(ay/G zero.fpv)
(az/G zero.fpv)
(x1 (x_s i1))
(y1 (y_s i1))
(z1 (z_s i1))
(vx1 (vx_s i1))
(vy1 (vy_s i1))
(vz1 (vz_s i1)))
(for ((i2 (in-naturals n t)))
(when (= i1 i2) (next))
(let/fpv ((m2 (m_s i2))
(x2 (x_s i2))
(y2 (y_s i2))
(z2 (z_s i2)))
(let/fpv ((rx (- x2 x1))
(ry (- y2 y1))
(rz (- z2 z1)))
(let/fpv ((r^3 (let* ((r^2 (+ (* rx rx) (* ry ry) (* rz rz)))
(r (sqrt r^2)))
(declare (type nonnegative-fpv r^2 r))
(* r r r))))
(incf ax/G (/ (* rx m2) r^3))
(incf ay/G (/ (* ry m2) r^3))
(incf az/G (/ (* rz m2) r^3))))))
(setf (x_d i1) (+ x1 (* vx1 dt) (* ax/G Gdt^2/2))
(y_d i1) (+ y1 (* vy1 dt) (* ay/G Gdt^2/2))
(z_d i1) (+ z1 (* vz1 dt) (* az/G Gdt^2/2)))
(setf (vx_d i1) (+ vx1 (* ax/G Gdt))
(vy_d i1) (+ vy1 (* ay/G Gdt))
(vz_d i1) (+ vz1 (* az/G Gdt)))))))
destination)
And it not only worked, the performance was very close to the previous version, straight out of the gate. The syntax is not as nice as that of the initial, quick-and-dirty version, but it is much more general, so I think that's worth it on the whole.
There have been problems since then: in particular the dependency on when classes get defined. It will never be as portable as I'd like because of the unnecessary MOP dependencies3, but it is usable and quick4.
Was it worth it? May be, but it should have been simpler.
-
When exactly do classes get defined? Right. ↩
-
Nothing that uses the AMOP MOP is ever The Right Thing, because the whole thing was designed by people who were extremely smart, but still not as smart as they needed to be and thought they were. It's unclear if any MOP for CLOS can ever be satisfactory, in part because CLOS itself suffers from the same smart-but-not-smart-enough problem to a large extent, not helped by being dropped wholesale into CL at the last minute: by the time CL was standardised people had written large systems in it, but almost nobody had written anything significant using CLOS, let alone the AMOP MOP. ↩
-
A mistake I somehow managed to avoid was using the whole slot-definition mechanism the MOP wants you to use. ↩
-
I will make it available at some point. ↩
16 Apr 2026 11:01am GMT
14 Apr 2026
Planet Lisp
Robert Smith: Not all elementary functions can be expressed with exp-minus-log
By Robert Smith
All Elementary Functions from a Single Operator is a paper by Andrzej Odrzywołek that has been making the rounds on the internet lately, being called everything from a "breakthrough" to "groundbreaking". Some go as far as to suggest that the entire foundations of computer engineering and machine learning should be rebuilt as a result of this. The paper says that the function
$$ E(x,y) := \exp x - \log y $$
together with variables and the constant $1$, which we will call EML terms, are sufficient to express all elementary functions, and proceeds to give constructions for many constants and functions, from addition to $\pi$ to hyperbolic trigonometry.
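To make the definition concrete, two constructions follow immediately from it, using $\log 1 = 0$ and $\exp 0 = 1$ (these are my own illustrations, not examples taken from the paper):

$$ \exp x = E(x, 1), \qquad 1 - \log y = E(0, y). $$

The paper's further constructions are built by composing $E$ with itself in similar ways.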
I think the result is neat and thought-provoking. Odrzywołek is explicit about his definition of "elementary function". His Table 1 fixes "elementary" as 36 specific symbols, and under that definition his theorem is correct and clever, so long as we accept some of his modifications to the conventional $\log$ function and do arithmetic with infinities.
My concern is that the word "elementary" in the title carries a much broader meaning in standard mathematical usage. Odrzywołek recognizes this, saying little more than "[t]hat generality is not needed here" and that his work takes "the ordinary scientific-calculator point of view". He does not offer further commentary.
What is this more general setting, and does his claim still hold? In modern pure mathematics, dating back to the 19th century, the definition of "elementary function" has been well established. We'll get to a definition shortly, but to cut to the chase, the titular result does not hold in this setting. As such, in layman's terms, I do not consider the "Exp-Minus-Log" function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.
The rough TL;DR is this: Elementary functions typically include arbitrary polynomial root functions, and EML terms cannot express them. Below, I'll give a relatively technical argument that EML terms are not sufficient to express what I consider standard elementary functions.
To avoid any confusion, the purpose of this blog post is manifold:
- To elucidate what many mathematicians consider to be an "elementary function", which is the foundation for a variety of rich and interesting math (especially if you like computer science).
- To prove a result about EML terms using topological Galois theory.
- To demonstrate how this result may be used to show an elementary function not expressible by EML terms.
This blog post is not a refutation of Odrzywołek's work, though the title might be considered just as clickbaity (and accurate) as his, depending on where you sit in the hall of mathematics and computation.
Disclaimer: I audited graduate-level mathematics courses almost 20 years ago, and I am not a professional mathematician. Please email me if my statements are clumsy or incorrect.
The 19th century is where all modern understanding of elementary functions was developed, Liouville being one of the big names with countless theorems of analysis and algebra named after him. One such result is about integration: do the outputs of integrals look the same as their inputs? Well, what do "input" and "look the same" mean? Liouville defined a class of functions called elementary functions, and said that the integral of an elementary function will sometimes be elementary, and when it is, it will always resemble the input in a specific way, plus potential extra logarithmic factors.
Since then, elementary functions have been defined by starting with rational functions and closing under arithmetic operations, composition, exponentiation, logarithms, and polynomial roots. While EML terms are quite expressive, they are unable to capture the "polynomial roots" in full generality. We will show this by using Khovanskii's topological Galois theory: the monodromy group of a function built from rational functions by composition with $\exp$ and $\log$ is solvable. For anybody who has studied Galois theory in an algebra course, this will be familiar, as the destination here is effectively the same, but with more powerful intermediate tooling to wrangle exponentials and logarithms.
First, let's be more precise about what we mean by an EML term and by a standard elementary function.
Definition (EML Term): An EML term in the variables $x_1,\dots,x_n$ is any expression obtained recursively, starting from $\{1, x_1,\dots,x_n\}$, by the rule $$ T,S \mapsto \exp T-\log S. $$ Each such term, evaluated at a point where all the $\log$ arguments are nonzero, determines an analytic germ; we take $\mathcal T_n$ to be the class of germs representable this way, together with their maximal analytic continuations.
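As an illustration (not from the paper or the post), the recursive definition translates directly into a tiny interpreter: everything an EML term can denote is built from $1$, the variables, and the single combinator $T,S \mapsto \exp T - \log S$.

```python
import math
from dataclasses import dataclass

# Leaves of an EML term: the constant 1 and variables.
class One:
    pass

@dataclass
class Var:
    name: str

# The only combinator: T, S -> exp(T) - log(S).
@dataclass
class Eml:
    t: object  # goes inside exp
    s: object  # goes inside log

def eval_eml(term, env):
    if isinstance(term, One):
        return 1.0
    if isinstance(term, Var):
        return env[term.name]
    return math.exp(eval_eml(term.t, env)) - math.log(eval_eml(term.s, env))

# E(x, 1) = exp(x) - log(1) = exp(x)
exp_x = Eml(Var("x"), One())
```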
Definition (Standard Elementary Function): The standard elementary functions $\mathcal{E}_n$ are the smallest class of multivalued analytic functions on domains in $\mathbb{C}^n$ containing the rational functions and closed under
- arithmetic operations and composition,
- exponentiation and logarithms,
- algebraic adjunctions: if $P(Y)\in K[Y]$ is a polynomial whose coefficients lie in a previously constructed class $K$, then any local branch of a solution of $P(Y)=0$ is admitted.
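To see concretely what algebraic adjunction buys, and what monodromy means, consider the simplest case: $\sqrt{z}$ is a local branch of a solution of $Y^2 - z = 0$. The numerical sketch below (an illustration, not part of the definitions) continues that branch along a loop around the branch point at $0$ and watches the two branches swap, so the monodromy group is $\mathbb{Z}/2$, which is solvable.

```python
import cmath
import math

def continue_sqrt_around_origin(steps=2000):
    # Continue a branch of sqrt(z) along the loop z(t) = e^{it}, t in [0, 2*pi],
    # at each step picking the square root closest to the previous value.
    w = 1 + 0j  # sqrt(1) on the starting branch
    for k in range(1, steps + 1):
        z = cmath.exp(2j * math.pi * k / steps)
        r = cmath.sqrt(z)
        w = r if abs(r - w) < abs(-r - w) else -r
    return w

# One loop around the branch point sends the branch sqrt(z) to -sqrt(z):
# the monodromy group of Y^2 - z is Z/2, which is solvable.
```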
What we will show is that the class of elementary functions defined this way is strictly larger than the class induced by EML terms.
Lemma: Every EML term has solvable monodromy group. In particular, if $f\in\mathcal T_n$ is algebraic over $\mathbb C(x_1,\dots,x_n)$, then its monodromy group is a finite solvable group.
Proof: We proceed by induction on the construction of EML terms. Constants and coordinate functions have trivial monodromy.
For the inductive step, suppose $f = \exp A-\log B$ with $A,B\in\mathcal T_n$, and assume that $\mathrm{Mon}(A)$ and $\mathrm{Mon}(B)$ are solvable. We argue in three steps.
Step 1: $\mathrm{Mon}(\exp A)$ is solvable. The germs of $\exp A$ are images under $\exp$ of the germs of $A$, with germs of $A$ differing by $2\pi i\mathbb Z$ collapsing to the same value. So there is a surjection $\mathrm{Mon}(A)\twoheadrightarrow\mathrm{Mon}(\exp A)$, and a quotient of a solvable group is solvable.
Step 2: $\mathrm{Mon}(\log B)$ is solvable. At a generic point $p$, germs of $\log B$ are parameterized by pairs $(b,k)$ where $b$ is a germ of $B$ at $p$ and $k\in\mathbb Z$ selects the branch of $\log$. A loop $\gamma$ acts by $$ (b,k)\mapsto\bigl(\rho_B(\gamma)(b), k+n(\gamma,b)\bigr), $$ where $\rho_B(\gamma)$ is the monodromy action of $\gamma$ on germs of $B$, and $n(\gamma,b)\in\mathbb Z$ is the winding number around $0$ of the analytic continuation of $b$ along $\gamma$. The projection $\mathrm{Mon}(\log B)\to\mathrm{Mon}(B)$ onto the first component is a surjective homomorphism. Its kernel consists of the elements of $\mathrm{Mon}(\log B)$ induced by loops $\gamma$ with $\rho_B(\gamma)=\mathrm{id}$, which then act only by integer shifts on the $k$-coordinate. Let $S_B$ be the set of germs of $B$ at $p$. For each $b\in S_B$, such a loop determines an integer shift $n(\gamma,b)$, so the kernel embeds in the direct product $\mathbb Z^{S_B}$. In particular, the kernel is abelian. Hence $\mathrm{Mon}(\log B)$ is an extension of $\mathrm{Mon}(B)$ by an abelian group, and extensions of solvable groups by abelian groups are solvable.
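The integer shift $n(\gamma,b)$ in Step 2 is just a winding number, and it can be watched numerically: continuing $\log$ along a loop around $0$ adds $2\pi i$ per circuit, and successive loops merely add their shifts, which is exactly why the kernel is abelian. (A numerical illustration, not part of the proof.)

```python
import cmath
import math

def continue_log_around_origin(loops=1, steps=4000):
    # Continue log(z) along z(t) = e^{it}, t in [0, 2*pi*loops],
    # accumulating the principal log of each small ratio z_k / z_{k-1}.
    w = 0j  # log(1) on the starting branch
    z_prev = 1 + 0j
    for k in range(1, steps + 1):
        z = cmath.exp(1j * 2 * math.pi * loops * k / steps)
        w += cmath.log(z / z_prev)  # increments are small, so the principal branch is safe
        z_prev = z
    return w

# Each loop around 0 shifts the branch by 2*pi*i: the action k -> k + n(gamma, b).
```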
Step 3: $\mathrm{Mon}(f)$ is solvable. At a generic point, a germ of $f=\exp A-\log B$ is obtained by subtraction from a pair (germ of $\exp A$, germ of $\log B$), and analytic continuation acts componentwise on such pairs. This gives a surjection of $\pi_1$ onto some subgroup $$ H \le \mathrm{Mon}(\exp A)\times\mathrm{Mon}(\log B), $$ and, since $f$ is obtained from the pair by subtraction, this descends to a surjection $H\twoheadrightarrow\mathrm{Mon}(f)$. So $\mathrm{Mon}(f)$ is a quotient of a subgroup of a direct product of solvable groups, hence solvable.
The second statement of the lemma follows: an algebraic function has finitely many branches, so its monodromy group is finite; by the first statement it is also solvable, hence a finite solvable group. ∎
Remark. This is the core of Khovanskii's topological Galois theory; see Topological Galois Theory: Solvability and Unsolvability of Equations in Finite Terms.
Theorem: $\mathcal T_n \subsetneq \mathcal E_n$.
Proof: $\mathcal E_n$ is closed under algebraic adjunction, so any local branch of an algebraic function is elementary. In particular, a branch of a root of the generic quintic $$ f^5+a_1f^4+a_2f^3+a_3f^2+a_4f+a_5=0 $$ is elementary.
Suppose for contradiction that at some point $p$ a germ of a branch of this root agrees with a germ of an EML term $T$. By uniqueness of analytic continuation, the Riemann surfaces obtained by maximally continuing these two germs coincide, so in particular their monodromy groups coincide. The monodromy group of the generic quintic is $S_5$, which is not solvable. But by the lemma, the monodromy group of any EML term is solvable. Contradiction.
Hence $\mathcal T_n$ is a strict subset of $\mathcal E_n$. ∎
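The one group-theoretic fact the proof leans on, that $S_5$ is not solvable, can be checked by brute force: compute the derived series and watch it stall at $A_5 \neq \{e\}$. The sketch below (plain Python, purely illustrative) represents permutations as tuples.

```python
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens, n):
    # Subgroup generated by gens, by breadth-first multiplication.
    group = {tuple(range(n))}
    frontier = list(group)
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                x = compose(s, g)
                if x not in group:
                    group.add(x)
                    nxt.append(x)
        frontier = nxt
    return group

def derived(group, n):
    # Subgroup generated by all commutators [a, b] = a b a^-1 b^-1.
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in group for b in group}
    return closure(comms, n)

s5 = set(permutations(range(5)))  # |S5| = 120
d1 = derived(s5, 5)               # commutator subgroup: A5, order 60
d2 = derived(d1, 5)               # A5 is perfect, so the series stalls here
# len(s5), len(d1), len(d2) are 120, 60, 60: the derived series never
# reaches the trivial group, so S5 is not solvable.
```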
Edit (15 April 2026): This article used to have an example proving that the real and complex absolute value cannot be expressed over their entire domain as EML terms under the conventional definition of $\log$. I wrote it to emphasize that Odrzywołek's approach required mathematical "patching" in order to work as intended. However, it ended up more distracting than illuminating, and was tangential to the point about the definition of "elementary", so it has been removed.
14 Apr 2026 12:00am GMT
13 Apr 2026
Planet Lisp
Scott L. Burson: FSet v2.4.2: CHAMP Bags, and v1.0 of my FSet book!
A couple of weeks ago I released FSet 2.4.0, which brought a CHAMP implementation of bags, filling out the suite of CHAMP types. 🚀 FSet users should have a look at the release page, as it also contained a number of bug fixes and minor changes.
I've since released v2.4.1 and v2.4.2, with some more bug fixes.
But the big news is the book! It brings together all the introductory material I have written, plus a lot more, along with a complete API Reference chapter.
FSet is now in the state I decided last summer I wanted to get it into: faster, better tested and debugged, more feature-complete, and much better documented than it has ever been in its nearly two decades of existence. I am, of course, very much hoping that these months of work have made the library more interesting and accessible to CL programmers who haven't tried it yet. I am even hoping that its existence helps attract newcomers to the CL community. Time will tell!
13 Apr 2026 6:21am GMT
29 Jan 2026
FOSDEM 2026
Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
29 Jan 2026 11:00pm GMT
26 Jan 2026
FOSDEM 2026
Guided sightseeing tours
If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.
26 Jan 2026 11:00pm GMT
Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend: food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
26 Jan 2026 11:00pm GMT