24 Apr 2026

feedPlanet Grep

Jeroen De Dauw: Boot Your Productivity With AI

My productivity has increased by at least 300% with AI assistance. You can get amazing results nowadays, if you use the tools right. Discover 4 key ingredients that make the tools work for you in this post.

Many people have only tried free AI via ChatGPT or similar web chatbots. It's easy to dismiss those tools, since they lack all 4 ingredients.

1. Context. If I ask you how I could improve my business, you won't be able to provide a good answer. You don't know all the details about my business that matter. All you can do is offer generic advice or make guesses (hallucinations). It's the same with AI tools. Don't rely on the knowledge baked into the LLM base models. Either provide this knowledge, or provide the tools to obtain that knowledge. You can include "all relevant knowledge" in the prompt, but this is labor-intensive. This is why you want an agentic tool.

2. Agentic tools. I've been using Claude Code, a CLI tool that provides agentic AI for knowledge tasks (not just coding). There is also Claude Cowork, a desktop tool, and alternatives from vendors besides Anthropic. These tools use a loop in which the AI determines whether it needs more information and then goes looking for it. You can give these tools a task or a question, and they will, if called for, run hundreds of searches and commands. They can look at your documents, codebases, and web resources. Tell these tools "Fix GitHub issue $link", and they'll look at the issue, anything referenced in the issue, and your codebase, make changes, run tests, make more changes, check results via the browser, fix some final issues, create a draft pull request, and provide you with a summary of what was done and possible next steps.
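
To make this concrete, here is a minimal sketch of what such a request can look like with the Claude Code CLI in non-interactive ("print") mode; the issue URL is a placeholder and the prompt wording is just an illustration, not a prescribed recipe:

# Hypothetical example: ask Claude Code (print mode) to handle an issue
# end to end. The URL below is a placeholder, not a real issue.
ISSUE_URL="https://github.com/example/project/issues/123"
claude -p "Fix the GitHub issue at $ISSUE_URL: read the issue and anything it
references, make the changes in this repository, run the tests, and prepare a
draft pull request with a summary of what was done."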

3. Feedback harness. When writing code, you often don't get everything correct the first time. Which is why automated tests are great. More generally, fast feedback loops are great, regardless of whether you're doing software development. For software development, you'll get much better results if the AI tools can actually run the code and run tests and other CI tools to verify everything is correct.

4. Model. AI capabilities are increasing at an incredible pace. If you're using the latest models, your experience will be worlds apart from those using 2-year-old models. For maximum quality, there are 3 metrics to max out: model size/capability, model version, and effort parameters. In other words, use the latest version of the biggest model with "max effort". At the time of writing this post, that is Claude Opus 4.7 with max-effort when using Anthropic, or GPT-5.4 Pro with heavy thinking when using OpenAI. These settings eat tokens, so you will quickly run into the subscription limits of the basic tiers. Then again, paying 200 USD a month for the higher tiers so you can 5x your productivity is quite the bargain.

Those 4 points provide a conceptual framework. There is more to learn, and the AI space is evolving quickly. Ask your favorite AI tool how you can improve your AI workflows, starting from this post, to get specifics.

Some more tips:

You can stay up to speed with AI capabilities development via Don't Worry About the Vase and Astral Codex Ten, both of which I highly recommend.

Shameless plug: my company provides an AI Assistant for MediaWiki, giving you AI capabilities on top of collaborative knowledge management, ideal for organizations.

The post Boot Your Productivity With AI appeared first on Blog of Jeroen De Dauw.

24 Apr 2026 9:14am GMT

Frank Goossens: The new Harstad, "Onder de kasseien, het strand", is almost here but not quite yet

I'm not much one for lists, but if under threat of torture I had to give a top 3 of books, "Max, Micha & het Tet-offensief" would certainly be on it. A new novel by author Johan Harstad already appeared in Norwegian in 2024 under the title "Under brosteinen, stranden!", and according to usually well-informed sources (I emailed the publisher), in the autumn of 2026 the…

Source

24 Apr 2026 9:14am GMT

Dries Buytaert: What does 'Buy European' even mean?

This post was co-authored with Nicholas Gates, senior policy advisor at OpenForum Europe. It was originally published on EUobserver, an independent online newspaper widely read by EU policymakers, journalists and advocacy groups. The article summarizes a series of posts I've been writing about digital sovereignty.

European digital assets have a habit of not staying European - a problem current discussions about sovereignty are overlooking.

For example, Skype had Swedish and Danish founders, Estonian engineers, a Luxembourg headquarters, and proprietary code.

Every sovereignty credential was correct on the day it would have been assessed - and meaningless after eBay acquired it, Microsoft bought it, and eventually shut it down in 2025.

This speaks to a core tension at the heart of Europe's digital sovereignty moment. The real story has to do with licensing, dependencies, and supply chains more than it has to do with ownership or operational control - both of which can (and often do) change in Europe.

The current conception of cloud sovereignty asks the right questions about where data is stored, where companies are headquartered, and whether supply chains are European.

What they don't yet ask is whether the sovereignty they are assessing is durable and resilient - for example, whether it will survive a change of ownership, a corporate acquisition, or a disruption in the infrastructure the software depends on.

The European Commission's Cloud Sovereignty Framework provides a non-legislative assessment tool designed to evaluate the digital independence of cloud services in Europe.

It enables public authorities to rank services based on factors such as immunity from non-EU laws, operational control, and data protection.

The forthcoming Cloud and AI Development Act (CAIDA) - expected at the end of May - will possibly go further.

That said, while both are serious and welcome efforts, they are likely to solve only part of the problem.

'Buy European' is a fragile concept

Europe's 'Buy European' strategy is being built on two fragile foundations it hasn't yet explicitly addressed, and this could have disastrous implications in the cloud domain in particular.

Proprietary software with a perfect sovereignty score today is one acquisition away from a different answer tomorrow. Open Source software means the question doesn't arise.

The legal right to fork changes the power dynamic entirely: it gives you leverage, lets a community step in, and means the technology cannot be held hostage.

This is the distinction the Cloud Sovereignty Framework currently misses.

When Oracle acquired Sun Microsystems in 2010, governments running MySQL faced an immediate question: what happens to this software now?

The answer turned on one thing - the licence. Because MySQL was GPL-licensed, the right to fork and maintain it independently was already being exercised before the acquisition even completed.

MySQL's creator, Monty Widenius, forked it in 2009 precisely because he saw the acquisition coming - that fork exists today as MariaDB. The licence didn't prevent Oracle from buying Sun. It meant the acquisition couldn't end the software, and anyone paying attention could act on that right before any harm materialised.

Getting the licence right is necessary, but it is not sufficient.

In 2024, a conflict between WordPress co-founder Matt Mullenweg and WP Engine disrupted updates for millions of websites.

The code was Open Source. The delivery infrastructure had a single point of control. Most programming language ecosystems rely on a single central package registry, and most of those registries are controlled by US companies.

In 2019, GitHub restricted access for developers in sanctioned countries; since GitHub also owns npm, the JavaScript ecosystem's delivery infrastructure became subject to the same trade controls. These aren't interchangeable download sites you can swap out.

Sovereign software on fragile infrastructure is not sovereign. It is software waiting for a supply chain to break.

Both fragility problems point to the same conclusion: a 'Buy European' label is not a sovereignty guarantee unless it embraces licensing as a tool and helps to safeguard the supply chains the software depends on.

Consider two scenarios. A government running proprietary software on a European cloud has jurisdiction, but no exit if the provider is acquired - replacing the software could take years.

A government running Open Source software on Amazon Web Services (AWS) in Europe can move the same software to a European provider whenever it wants. Neither is ideal, but they are not equal.

Europe's sovereignty frameworks need to internalise this asymmetry. Structural sovereignty - the kind that survives change - requires open foundations that flow from licensing through the critical supply chains on which that software depends.

A call-to-action for the Cloud and AI Development Act

CAIDA should not make the same mistakes as the Cloud Sovereignty Framework. It would be a mistake to simply extend a 'Buy European' checklist. The legislation should instead define what makes sovereignty durable.

Two concrete steps would make an immediate difference.

First, it should make Open Source licensing a pass/fail gate for mission-critical procurement under the Cloud Sovereignty Framework - a condition of eligibility at the highest assurance levels, not a weighted factor in a composite score.

Second, it should require supply chain resilience assessments that distinguish between dependencies switchable in weeks and those that would take an entire language community years to replicate, with federated or mirrored European alternatives required where no fallback exists.

Yes, requiring Open Source for mission-critical systems narrows the field in the short term.

But the providers you lose are the ones whose sovereignty credentials don't survive change.

In the longer term, these requirements push European companies toward Open Source software - technology that no one can take away.

24 Apr 2026 9:14am GMT

23 Apr 2026

feedPlanet Debian

Dirk Eddelbuettel: dtts 0.1.4 on CRAN: Maintenance

Leonardo and I are happy to announce another maintenance release 0.1.4 of our dtts package which has been on CRAN for four years now. dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) to the immense power of data.table while supporting the highest nanosecond resolution.

This release, not unlike yesterday's release of nanotime, is driven by recent changes in the bit64 package which underlies it. Michael, who now maintains it, had sent in two PRs to prepare for these changes. I updated continuous integration, and switched to Authors@R, and that pretty much is the release. The short list of changes follows.

Changes in version 0.1.4 (2026-04-23)

  • Continuous integration has received some routine updates

  • Adapt align() column names with changes in 'data.table' (Michael Chirico in #20)

  • Narrow imports to functions used for packages 'bit64', 'data.table' and 'nanotime' (Michael Chirico in #21)

Courtesy of my CRANberries, there is also a diffstat report for this release. Questions, comments, issue tickets can be brought to the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

23 Apr 2026 6:58pm GMT

Sergio Talens-Oliag: Developing a Git Worktree Helper with Copilot

Over the past few weeks I've been developing and using a personal command-line tool called gwt (Git Worktree) to manage Git repositories using worktrees. This article explains what the tool does, how it evolved, and how I used GitHub Copilot CLI to develop it (in fact the idea of building the script was also to test the tool).

The Problem: Managing Multiple Branches

I was working on a project with multiple active branches, including orphans; the regular branches are for fixes or features, while the orphans are used to keep copies of remote documents or store processed versions of those documents.

The project also uses a special orphan branch that contains the scripts and the CI/CD configuration to store and process the external documents (it is on a separate branch to avoid mixing its operation with the main project code).

The plan is to trigger a pipeline against the special branch from remote projects to create or update the doc branch for that project in our git repository, retrieving artifacts from the remote projects to get the files and put them on an orphan branch (initially I added new commits after each update, but I changed the system to use force pushes and keep only one commit, as the history is not really needed).

The original documents have to be changed, so, after ingesting them, we run a script that modifies them and adds or updates another branch with the processed version; the contents of that branch are used by the main branch build process (there we use git fetch and git archive to retrieve its contents).

When working on the scripts to manage the orphan branches I discovered the worktree feature of git, a functionality that allows me to keep multiple branches checked out in parallel using a single .git folder, removing the need to use git switch and git stash when changing between branches (until now I've been a heavy user of those commands).

Reading about it I found that a lot of people use worktrees with the help of a wrapper script to simplify the management. After looking at one or two posts and the related scripts I decided to create my own using a specific directory structure to simplify things.

That's how I started to work on the gwt script; as I also wanted to test copilot, I decided to build it with its help (I have a pro license at work and wanted to play with the CLI version instead of the one integrated into an editor, as I didn't want to learn a lot of new keyboard shortcuts).

The gwt Philosophy: Opinionated and Transparent

gwt enforces a simple, filesystem-visible model:

  • Exactly one bare repository named bare.git (treated as an implementation detail)
  • One worktree directory per branch where the directory name matches the branch name
  • Single responsibility: gwt doesn't try to be a general git wrapper; it only handles operations that map cleanly to this layout

The repository structure looks like this:

my-repo/
+-- bare.git/           # the Git repository (internal)
+-- main/               # worktree for branch "main"
+-- feature/api/        # worktree for branch "feature/api"
+-- fix/docs/           # worktree for branch "fix/docs"
+-- orphan-history/     # worktree for the "orphan-history" branch

The tool follows five core design principles:

  1. Explicit over clever: Git commands are not hidden or reinterpreted
  2. Transparent execution: Every operation is printed before it happens
  3. Safe, preview-first operations: Destructive commands default to preview, confirmation, then apply
  4. Shell-agnostic core: The script never changes the caller's working directory (shell wrappers handle that)
  5. Opinionated but minimal: Only commands that fit the layout model are included

Core Commands

The script provides these essential commands:

  • gwt init <url> - Clone a repository and set up the gwt layout
  • gwt convert <dir> - Convert an existing Git checkout to the gwt layout
  • gwt add [--orphan] <branch> [<base>] - Create a new worktree (optionally orphaned)
  • gwt remove <branch> - Remove a worktree and unregister it (asks the user to remove the local branch too, useful when removing already merged branches)
  • gwt rename <old> <new> - Rename a branch AND its worktree directory
  • gwt list - List all worktrees
  • gwt default [<branch>] - Get or set the default branch
  • gwt current - Print the current worktree or branch name

Except for init and convert, all of the commands work inside a directory structure that follows the gwt layout; the tool looks for the bare.git folder to find the root folder of the structure.
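
As an illustration, that discovery can be as simple as walking up from the current directory until a bare.git folder appears; a sketch of the idea, not gwt's exact code:

# Sketch: find the gwt root by walking up until a 'bare.git' directory is found.
find_gwt_root() {
  dir="$PWD"
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/bare.git" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir="$(dirname "$dir")"
  done
  return 1   # not inside a gwt layout
}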

As I don't want to hide which commands are really used by the wrapper, all git and filesystem operations pass through a single run shell function that prints each command before executing it. This gives complete visibility into what the tool is doing.

Also, destructive operations (remove, rename) default to preview mode:

$ gwt remove feature-old --dry-run

+ git -C bare.git branch -d feature-old
+ git -C bare.git worktree remove feature-old/

Apply these changes? [y/N]:

The user sees exactly what will happen, can verify it's correct, and only then confirm execution.
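
A minimal version of such a run helper could look like this (a sketch of the idea, not the exact implementation in gwt):

# Sketch: print every command before executing it, so the user always sees
# the real git and filesystem operations being performed.
run() {
  printf '+ %s\n' "$*"
  "$@"
}

# Example: run git -C bare.git worktree list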

Incremental Development with Copilot

The gwt script has grown from 597 lines in its original version (git-wt) to 1,111 lines when writing the first draft of this post.

This growth happened through incremental, test-driven development, with each feature being refined based on real usage patterns.

What follows is a little history of the script evolution written with the help of git log.

Initial version

First I wrote a design document and asked copilot to create the initial version of the git-wt script with the original core commands.

I started to use the tool with a remote repository (in some cases I made copies of the branches to avoid losing work) and fixed bugs (trivial ones with neovim, larger ones by asking copilot to fix the issues for me, so I had less typing to do).

First command update

One of the first commands I had to enhance was rename:

  • as I normally use branches with / in their name, and the tool checks out worktrees using the branch name as the path inside the gwt root folder (e.g. a fix/rename branch creates a fix directory and checks out the branch inside the fix/rename folder), the rename command had to clean up the empty parent directories left behind
  • when renaming a worktree we move the folders and fix the references using the worktree repair command to make things work locally, but the rename also affects the remote branch reference; to avoid surprises the command unsets the remote branch reference so the branch can be pushed again under its new name (of course, the user is responsible for managing the old remote branch, as gwt can't guess what it should do with it).

Integration with the shell

As I use zsh with the Powerlevel10k theme, I asked copilot to help me add visual elements to the prompt when working with gwt folders, something I would never have tried without help, as it would have required a lot of digging on my part to figure out how to do it.

The initial version of the code was in an independent file that I sourced from my .zshrc file. It prints on the right part of the prompt when we are inside a gwt folder (note that if the folder is a worktree we see the existing git integration text right before it, so we keep the previous behavior and also see that it is a gwt-friendly repo), and if we are on the root folder or the bare.git folder we see gwt or bare (I added that text because there are no git prompts on those folders).
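
For reference, a Powerlevel10k segment is just a shell function named prompt_<name> that calls p10k segment; a minimal sketch of what the gwt segment could look like (the generated code is more detailed, and the segment name must also be added to POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS):

# Sketch of a Powerlevel10k right-prompt segment for gwt layouts.
function prompt_gwt() {
  if [ -d bare.git ]; then
    p10k segment -t 'gwt'            # at the root of a gwt layout
  elif [ "${PWD:t}" = "bare.git" ]; then
    p10k segment -t 'bare'           # inside the bare repository
  fi
}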

I also asked copilot to create zsh autocompletion functions (I only use zsh, so I didn't add autocompletion for other shells). The good thing here is that I wouldn't have done that manually, as it would have required some reading to get it right, but the output of copilot worked and I can update things using it or manually if I need to.
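
The generated completion is longer, but the heart of a zsh completion function for a command like gwt is small; a sketch assuming only the subcommand names listed above:

# Sketch: complete gwt subcommands (the real completion also handles
# branch names and per-command options).
_gwt() {
  local -a subcmds
  subcmds=(init convert add remove rename list default current)
  if (( CURRENT == 2 )); then
    _describe 'gwt command' subcmds
  fi
}
compdef _gwt gwt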

One thing I was missing from the script was the possibility of changing the working directory easily, so I wrote a gwt wrapper function for zsh that intercepts commands that require shell cooperation (changing the working directory) and delegates everything else to the core script.

Currently the function supports the following enhanced commands:

  • cd [<branch>]: change into a worktree or the default one if missing
  • convert <dir>: convert a checkout, then cd into the initial worktree
  • add [--orphan] <branch> [<base>]: create a worktree, then cd into it on success
  • rename <old> <new>: rename a worktree, then cd into it if we were inside it

Note that the cd command will not work on other shells or if the user does not load my wrapper, but the rest will still work without the working directory changes.
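
Conceptually the wrapper looks something like the sketch below: commands that need shell cooperation are handled in the function, everything else is delegated to the core script (this is a simplified illustration, and path-of is a hypothetical helper standing in for the real path resolution):

# Sketch: zsh wrapper that intercepts 'cd' and delegates the rest.
gwt() {
  case "$1" in
    cd)
      local branch dir
      branch="${2:-$(command gwt default)}"
      dir="$(command gwt path-of "$branch")" || return 1   # path-of is hypothetical
      builtin cd "$dir"
      ;;
    *)
      command gwt "$@"
      ;;
  esac
}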

Renaming the command

As I felt that git-wt was a long name, I renamed the tool to gwt. I could have done it by hand, but using copilot I didn't have to review all the files myself, and it did it right (note that I have it configured to always ask me before making changes, as it sometimes tries to do something I don't want and I like to check its changes; as I have the files in git repos, I manually add the files when I like the status, and if the CLI output is not clear I allow it to apply the change and check the effects with git diff so I can validate or revert what was done).

The convert command

After playing with one repo I added the convert subcommand for migrating existing checkouts. It seemed a simple task at first, but it took multiple iterations to get it right, as I found multiple issues while testing (in fact I made copies of the existing checkouts to be able to re-test each update, as some of the iterations broke them).

The version of the function when this post was first edited had the following comment explaining what it does:

# ---------------------------------------------------------------------------
# convert - convert an existing checkout into the gwt layout
# ---------------------------------------------------------------------------
#
# Must be run from the parent directory of <dir>.
#
# Steps:
#   1. Read branch from the checkout's HEAD
#   2. Rename <dir> to <dir>.wt.tmp (sibling, same filesystem)
#   3. Create <dir>/ as the new gwt root
#   4. Move <dir>.wt.tmp/.git to <dir>/bare.git; set core.bare = true
#   5. Fix fetch refspec (bare clone default maps refs directly, no remotes/)
#   6. Add a --no-checkout worktree so git wires up the metadata and
#      creates <dir>/<branch>/.git (the only file in that dir)
#   7. Move that .git file into the real working tree (<dir>.wt.tmp)
#   8. Remove the now-empty placeholder directory
#   9. Move the real working tree into place as <dir>/<branch>
#  10. Reset the index to HEAD so git status is clean
#      (--no-checkout leaves the index empty)
#  11. Create <dir>/.git -> bare.git symlink so plain git commands work
#      from the root without --git-dir
#
# The .git file ends up at the same absolute path git recorded in step 5,
# so no worktree repair is needed. Working tree files are never modified.

The .git link was added when I noticed that I could run commands that don't need the checked-out files from the root of the gwt structure, which is handy sometimes (e.g. a git fetch, or a git log that shows the log of the branch marked as default).

After playing with commands that used the bare.git folder I updated the init and convert commands to keep the origin refs, ensuring that the remote tracking works correctly.
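
The refspec fix mentioned in step 5 amounts to giving the bare repository the fetch refspec a normal clone would have, so that fetching keeps the origin refs up to date; roughly (a sketch):

# Sketch: make the bare repository track origin like a normal clone would.
git -C bare.git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
git -C bare.git fetch origin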

Improving the add command

While playing with the tool on more repos I noticed that I also had to enhance the add command to better handle worktree creation, depending on my needs.

Right now the tool supports the following use cases (a simplified sketch of the decision logic follows the list):

  • if the branch exists locally or on origin, it just checks it out.
  • if the branch does not exist, we create it using the given base branch or, if no base is given, the current worktree (if we are in the root folder or bare.git the command fails).
  • as I needed it for my project, I added an --orphan option to be able to create orphan branches directly.
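
A simplified sketch of that decision logic (run from the gwt root, error handling omitted; the new-branch case falls back to HEAD as the base to keep the sketch short, while gwt itself uses the current worktree's branch):

# Sketch of the branch-existence checks behind 'gwt add <branch> [<base>]'.
branch="$1"; base="$2"
if git -C bare.git show-ref --verify --quiet "refs/heads/$branch" ||
   git -C bare.git show-ref --verify --quiet "refs/remotes/origin/$branch"; then
  # The branch already exists locally or on origin: just check it out.
  git -C bare.git worktree add "../$branch" "$branch"
else
  # New branch: create it from the given base (or HEAD in this sketch).
  git -C bare.git worktree add -b "$branch" "../$branch" "${base:-HEAD}"
fi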

Moving to a single file

Eventually I decided to make the tool self-contained; I removed the design document (I moved its content to comments at the top of the script and the details to comments on each function definition) and added a pair of commands to print the code to source for the p10k and zsh integration (autocompletion & functions), leaving everything in a single file.

Now my .zshrc file adds the following to source both things:

# After loading the p10k configuration
if type gwt >/dev/null 2>&1; then
  source <(gwt p10k)
fi
[...]
# After loading autocompletion
if type gwt >/dev/null 2>&1; then
  source <(gwt zsh)
fi

Versioning

As I modified the script I found it useful to adopt CalVer-based versioning (the version variable has the format YYYY.mm.dd-r#), so I added a subcommand to show its value or bump it, using the current date and computing the right revision number.
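
Computing the next version is a small date comparison; a sketch of the bump logic, assuming the current value is held in a version variable:

# Sketch: bump a CalVer version of the form YYYY.mm.dd-r<N>.
version="2026.04.20-r2"                # current value (example)
today="$(date +%Y.%m.%d)"
if [ "${version%-r*}" = "$today" ]; then
  rev=$(( ${version##*-r} + 1 ))       # same day: increment the revision
else
  rev=1                                # new day: restart at revision 1
fi
version="$today-r$rev"
echo "$version"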

About the use of copilot

Although I've never been a fan of AI tools I have to admit that the copilot CLI has been very useful for building the tool:

  • Rapid prototyping: Each commit represented a small feature or fix that I could implement, test immediately in my actual workflow, and iterate on based on the result
  • Edge case handling: Rather than trying to anticipate every scenario upfront, I could ask Copilot how to handle edge cases as they appeared in real usage
  • Script refinement: Questions like "how do I clean up empty directories after a rename" or "how do I detect if I'm inside a specific worktree" were quickly answered with working code
  • Shell integration: The Zsh wrapper and completion system grew from simple prototypes to sophisticated features, with each iteration informed by how I actually used the tool

For example, the convert command started as a simple rename operation, but evolved to also create a .git symlink and intelligently handle various migration scenarios, all because I used it repeatedly and refined the implementation each time.

Self-Contained and Opinionated

gwt is deliberately opinionated:

  • Zsh & Powerlevel10k Integration: The tool includes built-in Zsh shell integration, accessed via source <(gwt zsh) and supports adding a prompt segment when using p10k, as described earlier.
  • Directory Structure: The bare.git directory name is non-negotiable. This is how gwt discovers the repository root from any subdirectory, and how the tool knows whether a directory is a gwt repository. The simplicity of this marker means the discovery mechanism is foolproof and requires no configuration.
  • No Configuration Files: gwt deliberately has no configuration. There are no .gwtrc files or config directories. This makes it portable; the tool works the same way everywhere, and repositories can be shared across systems without synchronizing configuration.

From Script to System

What started as a small helper script for managing worktrees has become a complete system:

  1. Core script (gwt): 1,111 lines of pure shell, no external dependencies
  2. Shell integration: Zsh functions and completions
  3. Prompt integration: Powerlevel10k segment
  4. Documentation: Built-in help and design philosophy documentation

The script is self-contained: everything needed for the tool to work is in a single file.

This makes it trivial to update (just replace the script) or audit (no hidden dependencies).

Development with AI support

Developing gwt with copilot taught me some things:

  • Incremental refinement works well for small tools: Each iteration informed the next, resulting in a tool that handles real use cases elegantly
  • Transparency is a feature: Making operations visible builds confidence and is easier to debug
  • Opinionated tools can be powerful: By constraining the problem space (one bare repo, one worktree per branch), the solution becomes simpler and more robust
  • Shell integration matters: The same core commands are easier to use when they can automatically change directories and provide completions
  • Real-world testing is essential: I wouldn't have discovered the need for automatic directory cleanup or context-aware cd behavior without actually using the tool daily

What was next?

The tool is stable and handles my daily workflow well, so my guess is that I would keep using it and fixing issues if or when I found them, but I do not plan to include additional features unless I find a use case that justifies it (i.e. I never added support for some of the worktree subcommands, as it is easier to use the git versions if I ever needed them).

What really happened

While editing this post I discovered that I needed to add another command to it and fixed a bug (see below).

With those changes and the inclusion of a license and copyright notice (just in case I distribute it at some point) now the script is 1,217 lines long instead of the 1,111 it had when I started to write this entry.

Submodule Support

When I converted this blog repository to the gwt format and tried to preview the post using docker compose, it failed because the worktree I was on didn't have the Git submodule initialized.

My blog theme is included on the repository as a submodule, and when I used gwt to check out different branches in worktrees, the submodule was not initialized in the new worktrees.

This led me to add a new internal function and a gwt submodule command to handle submodule initialization; the internal function is called from convert and add (when converting a repo or adding a worktree), and the public command is useful to update the submodules on existing branches.
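
The core of that internal function boils down to a submodule initialization run inside the worktree; roughly (a sketch):

# Sketch: make sure submodules are initialized in a given worktree.
worktree_dir="$1"
git -C "$worktree_dir" submodule update --init --recursive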

Path Handling with Branch Names Containing Slashes

The second discovery was a bug in how the tool handled branch names containing slashes (e.g., feature/new-api, docs/user-guide). The worktree directories are created with the branch name as the path, so a branch like feature/new-api creates two nested folders (feature, with new-api inside it).

However, there was a mismatch in how the zsh wrapper function resolved worktree paths (initially it used shell parameter expansion, i.e. rel="${cwd#"$REPO_ROOT"/}"), versus how the core script calculated them, causing the cd command to fail or navigate to the wrong location when branch names contained slashes.

The fix involved ensuring consistent path resolution throughout the script and wrapper (now it uses a function that processes the git worktree list output), so that gwt cd feature/new-api correctly navigates to the worktree directory regardless of path depth.
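
Resolving a branch name to its worktree path from the porcelain output looks roughly like this (a sketch of the approach, not the exact function; paths containing spaces would need extra care):

# Sketch: map a branch name to its worktree directory using the porcelain
# output, which pairs 'worktree <path>' and 'branch refs/heads/<name>' lines.
worktree_path_for_branch() {
  branch="$1"
  git -C bare.git worktree list --porcelain |
    awk -v want="refs/heads/$branch" '
      $1 == "worktree" { path = $2 }
      $1 == "branch" && $2 == want { print path; exit }
    '
}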

Conclusion

gwt is a tool that solves a real problem: managing multiple Git branches simultaneously without context-switching overhead.

I'm sure I'll keep using it for my projects, as it simplifies some workflows; I'll still use git switch and git stash in some cases, but I like working with multiple worktrees in parallel.

In fact, I converted this blog repository checkout to the gwt format to work on a separate branch, as it felt like the right approach even if I'm the only one using the repo now, and it helped me improve the tool, as explained before.

Also, it was a good example of how to use AI tools like copilot to develop a simple tool and keep it evolving while using it.

In any case, although I find copilot useful and it has saved me time, I don't trust it to work without supervision: it worked well overall, but it got stuck a few times and on multiple occasions didn't do things the way I wanted.

I also have an additional problem now: I've been reading about it, but I don't really know which models to use or how the premium requests are computed (I've only been playing with it since last month, and I ran out of requests on the last day of the month on purpose, just to see what happened … it stops working ;).

On my work machine I've been using a specific user account with a GitHub Copilot Business subscription, where I only used the Anthropic Claude Sonnet 4.6 model. With my personal account I configured the Anthropic Claude Haiku 4.5 model, but I've only used that to create the initial draft of this post (I ended up rewriting most of it manually anyway) and to review the final version (I'm not a native speaker and it was useful for finding typos and improving the style in some parts).

I guess I'll try other models with copilot in the future and check other command line tools like aider or claude-code, but probably only using free accounts unless I get a paid account at work, as I have with GitHub Copilot.

To be fair, what I would really love is to be able to use local models (aider can do it), but the machines I have are not powerful enough. I tried to run a simple test and it felt really slow, but when I have the time or the need I'll try again, just in case.

23 Apr 2026 5:40pm GMT

22 Apr 2026

feedPlanet Debian

Dirk Eddelbuettel: nanotime 0.3.14 on CRAN: Upstream Maintenance

Another minor update 0.3.14 for our nanotime package is now on CRAN, and has compiled for r2u (and will have to wait to be uploaded to Debian until dependency bit64 has been updated there). nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release has been driven almost entirely by Michael, who took over as bit64 maintainer and has been making changes there that have an effect on us 'downstream'. He reached out with a number of PRs which (following occasional refinement and smoothing) have all been integrated. There are no user-facing changes, or behavioural changes or enhancements, in this release.

The NEWS snippet below has the fuller details.

Changes in version 0.3.14 (2026-04-22)

  • Tests were refactored to use NA_integer64_ (Michael Chirico in #149 and Dirk in #156)

  • nanoduration was updated for changes in bit64 4.8.0 (Michael Chirico in #152 fixing #151)

  • Use of as.integer64(keep.names=TRUE) has been refactored (Michael Chirico in #154 fixing #153)

  • In tests, nanotime is attached after bit64; this still needs a better fix (Michael Chirico in #155)

  • The package now has a hard dependency on the just released bit64 version 4.8.0 (or later)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository - and all documentation is provided at the nanotime documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

22 Apr 2026 8:34pm GMT

16 Apr 2026

feedPlanet Lisp

Tim Bradshaw: Structures of arrays

Or, second system.

A while ago, I decided that I'd like to test my intuition that Lisp (specifically implementations of Common Lisp) was not, in fact, bad at floating-point code and that the ease of designing languages in Lisp could make traditional Fortran-style array-bashing numerical code pretty pleasant to write.

I used an intentionally naïve numerical solution to a gravitating many-body system as a benchmark, so I could easily compare Lisp & C versions. The brief result is that the Lisp code is a little slower than C, but not much: Lisp is not, in fact, slow. Who knew?

The point here though, is that I wanted to dress up the array-bashing code so it looked a lot more structured. To do this I wrote a macro which hid what was in fact an array of (for instance) double floats behind a bunch of syntax which made it look like an array of structures. That macro took a couple of hours.

This was fine and pretty simple, but it only dealt with a single type for each conceptual array of objects, there was no inheritance and it was restricted in various other ways. In particular it really was syntactic sugar on a vector: there was no distinct implementational type at all. So I thought well, I could make it more general and nicer.

Big mistake.

The second system

Here is an example of what I wanted to be able to do (this is in fact the current syntax):

(define-soa-class example ()
  ((x :array t :type double-float)
   (y :array t :type double-float)
   (p :array t :type double-float :group pq)
   (q :array t :type double-float :group pq)
   (r :array t :type fixnum)
   (s)))

This defines a class, instances of which have five array slots and one scalar slot. Of the array slots:

  • x and y are double-floats, and end up stored together in one underlying array
  • p and q are double-floats explicitly grouped together into the pq group, so they share another array
  • r is a fixnum, stored in an array of its own

The s slot, which has no :array option, is an ordinary scalar slot.

The implementation will tell you this:

> (describe (make-instance 'example :dimensions '(2 2)))
#<example 8010059EEB> is an example
[...]
dimensions      (2 2)
total-size      4
rank            2
tick            1
its class example has a valid layout
it has 3 arrays:
 index 0, element type double-float, 2 slots
 index 1, element type (signed-byte 64), 1 slot
 index 2, element type double-float, 2 slots
it has 5 array slots:
 name x, index 0 offset 0
 name y, index 0 offset 1
 name r, index 1 offset 0
 name p, index 2 offset 0
 name q, index 2 offset 1

This is already too complicated: the ability to control sharing via groups is almost certainly never going to be useful: it's only even there because I thought of it quite early on and never removed it.

The class definition macro then needs to arrange life so that enough information is available so that a macro can be written which turns indexed slot access into indexed array access of the underlying arrays which are secretly stored in instances, inserting declarations to make this as fast as possible: anything slower than explicit array access is not acceptable. This might (and does) look like this, for example:

(with-array-slots (x y) (thing example)
  (for* ((i ...) (j ...))
    (setf (x i j) (- (y i j) (y j i)))))

As you can see from this, the resulting objects should be allowed to have rank other than 1. Inheritance should also work, including for array slots. Redefinition should be supported and obsolete macro expansions and instances at least detected.

In other words there are exactly two things I should have aimed at achieving: the ability to define fields of various types and have them grouped into (generally fewer) underlying arrays, and an implementational type to hold these things. Everything else was just unnecessary baggage which made the implementation much more complicated than it needed to be.

I had not finished making mistakes. The system needs to store some metadata about how slots map onto the underlying arrays, element types and so on, so the macro can use this to compile efficient code. There are two obvious ways to do this: use the property list of the class name, or subclass standard-class and store the metadata in the class. The first approach is simple, portable, has clear semantics, but it's 'hacky'; the second is more complicated, not portable, has unclear semantics [1], but it's The Right Thing [2]. Another wrong decision I made without even trying.

The only thing that saved me was that the nature of software is that you can only make a finite number of bad decisions in a finite time.

More bad decisions

I was not done. Early on, I thought that, well, I could make this whole thing be a shim around defstruct: single inheritance was more than enough, and obviously I could store metadata on the property list of the type name as described above. And there's no nausea with multiple accessors or any of that nonsense.

But, somehow, I found writing a thing which would process the (structure-name ...) case of defstruct too painful, so I decided to go for the shim-around-defclass version instead. I even have a partly-complete version of the defstructy code which I abandoned. Another mistake.

I also decided that The Right Thing was to have the system support objects of rank 0. That constrains the underlying array representation (it needs to use rank \(n+1\) arrays for an object of rank \(n\)) in a way which I thought for a long time might limit performance.

Things I already knew

At any point during the implementation of this I could have told you that it was too general and the implementation was going to be too complicated for no real gain. I don't know why I made so many bad choices.

The whole process took weeks and I nearly just gave up several times.

The light at the end of the tunnel

Or: all-up testing.

Eventually, I had a thing I thought might work. The macro syntax was a bit ugly (that macro still exists, with a different name) but it seemed to work. But since the whole purpose of the thing was performance, that needed to be checked. I wasn't optimistic.

What I did was to write a version of my naïve gravitational many-body system using the new code, based closely on the previous one. The function that updates the state of the particles looks like this:

(defun/quickly step-pvs (source destination from below dt G &aux
                                (n (particle-vector-length source)))
  ;; Step a source particle vector into a destination one.
  ;;
  ;; Operation count:
  ;;  3
  ;;  + (below - from) * (n - 1) * (3 + 8 + 9)
  ;;  + (below - from) * (12 + 6)
  ;;  = (below - from) * (20 * (n - 1) + 18) + 3
  (declare (type particle-vector source destination)
           (type vector-index from)
           (type vector-dimension below)
           (type fpv dt G)
           (type vector-dimension n))
  (when (eq source destination)
    (error "botch"))
  (let*/fpv ((Gdt (* G dt))
             (Gdt^2/2 (/ (* Gdt dt) (fpv 2.0))))
    (binding-array-slots (((source particle-vector :check nil :rank 1 :suffix _s)
                           m x y z vx vy vz)
                          ((destination particle-vector :check nil :rank 1 :suffix _d)
                           m x y z vx vy vz))
      (for ((i1 (in-naturals :initially from :bound below :fixnum t)))
        (let/fpv ((ax/G zero.fpv)
                  (ay/G zero.fpv)
                  (az/G zero.fpv)
                  (x1 (x_s i1))
                  (y1 (y_s i1))
                  (z1 (z_s i1))
                  (vx1 (vx_s i1))
                  (vy1 (vy_s i1))
                  (vz1 (vz_s i1)))
          (for ((i2 (in-naturals n t)))
            (when (= i1 i2) (next))
            (let/fpv ((m2 (m_s i2))
                      (x2 (x_s i2))
                      (y2 (y_s i2))
                      (z2 (z_s i2)))
              (let/fpv ((rx (- x2 x1))
                        (ry (- y2 y1))
                        (rz (- z2 z1)))
                (let/fpv ((r^3 (let* ((r^2 (+ (* rx rx) (* ry ry) (* rz rz)))
                                      (r (sqrt r^2)))
                                 (declare (type nonnegative-fpv r^2 r))
                                 (* r r r))))
                  (incf ax/G (/ (* rx m2) r^3))
                  (incf ay/G (/ (* ry m2) r^3))
                  (incf az/G (/ (* rz m2) r^3))))))
          (setf (x_d i1) (+ x1 (* vx1 dt) (* ax/G Gdt^2/2))
                (y_d i1) (+ y1 (* vy1 dt) (* ay/G Gdt^2/2))
                (z_d i1) (+ z1 (* vz1 dt) (* az/G Gdt^2/2)))
          (setf (vx_d i1) (+ vx1 (* ax/G Gdt))
                (vy_d i1) (+ vy1 (* ay/G Gdt))
                (vz_d i1) (+ vz1 (* az/G Gdt)))))))
  destination)

And it not only worked, the performance was very close to the previous version, straight out of the gate. The syntax is not as nice as that of the initial, quick-and-dirty version, but it is much more general, so I think that's worth it on the whole.

There have been problems since then: in particular the dependency on when classes get defined. It will never be as portable as I'd like because of the unnecessary MOP dependencies [3], but it is usable and quick [4].

Was it worth it? Maybe, but it should have been simpler.


  1. When exactly do classes get defined? Right.

  2. Nothing that uses the AMOP MOP is ever The Right Thing, because the whole thing was designed by people who were extremely smart, but still not as smart as they needed to be and thought they were. It's unclear if any MOP for CLOS can ever be satisfactory, in part because CLOS itself suffers from the same smart-but-not-smart-enough problem to a large extent, not helped by being dropped wholesale into CL at the last minute: by the time CL was standardised people had written large systems in it, but almost nobody had written anything significant using CLOS, let alone the AMOP MOP.

  3. A mistake I somehow managed to avoid was using the whole slot-definition mechanism the MOP wants you to use.

  4. I will make it available at some point.

16 Apr 2026 11:01am GMT

14 Apr 2026

feedPlanet Lisp

Robert Smith: Not all elementary functions can be expressed with exp-minus-log

By Robert Smith

All Elementary Functions from a Single Operator is a paper by Andrzej Odrzywołek that has been making rounds on the internet lately, being called everything from a "breakthrough" to "groundbreaking". Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be re-built as a result of this. The paper says that the function

$$ E(x,y) := \exp x - \log y $$

together with variables and the constant $1$, which we will call EML terms, are sufficient to express all elementary functions, and proceeds to give constructions for many constants and functions, from addition to $\pi$ to hyperbolic trigonometry.

I think the result is neat and thought-provoking. Odrzywołek is explicit about his definition of "elementary function". His Table 1 fixes "elementary" as 36 specific symbols, and under that definition his theorem is correct and clever, so long as we accept some of his modifications to the conventional $\log$ function and do arithmetic with infinities.

My concern is that the word "elementary" in the title carries a much broader meaning in standard mathematical usage. Odrzywołek recognizes this, saying little more than "[t]hat generality is not needed here" and that his work takes "the ordinary scientific-calculator point of view". He does not offer further commentary.

What is this more general setting, and does his claim still hold? In modern pure mathematics, dating back to the 19th century, the definition of "elementary function" has been well established. We'll get to a definition shortly, but to cut to the chase, the titular result does not hold in this setting. As such, in layman's terms, I do not consider the "Exp-Minus-Log" function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.

The rough TL;DR is this: Elementary functions typically include arbitrary polynomial root functions, and EML terms cannot express them. Below, I'll give a relatively technical argument that EML terms are not sufficient to express what I consider standard elementary functions.

To avoid any confusion, the purpose of this blog post is manifold:

  1. To elucidate what many mathematicians consider to be an "elementary function", which is the foundation for a variety of rich and interesting math (especially if you like computer science).
  2. To prove a result about EML terms using topological Galois theory.
  3. To demonstrate how this result may be used to show an elementary function not expressible by EML terms.

This blog post is not a refutation of Odrzywołek's work, though the title might be considered just as clickbait (and accurate) as his, depending on where you sit in the hall of mathematics and computation.

Disclaimer: I audited graduate-level mathematics courses almost 20 years ago, and I am not a professional mathematician. Please email me if my statements are clumsy or incorrect.

The 19th century is where all modern understanding of elementary functions was developed, Liouville being one of the big names with countless theorems of analysis and algebra named after him. One such result is about integration: do the outputs of integrals look the same as their inputs? Well, what does "input" and "look the same" mean? Liouville defined a class of functions called elementary functions, and said that the integral of an elementary function will sometimes be elementary, and when it is, it will always resemble the input in a specific way, plus potential extra logarithmic factors.

Since then, elementary functions have been defined by starting with rational functions and closing under arithmetic operations, composition, exponentiation, logarithms, and polynomial roots. While EML terms are quite expressive, they are unable to capture the "polynomial roots" in full generality. We will show this by using Khovanskii's topological Galois theory: the monodromy group of a function built from rational functions by composition with $\exp$ and $\log$ is solvable. For anybody that has studied Galois theory in an algebra course, this will be familiar, as the destination here is effectively the same, but with more powerful intermediate tooling to wrangle exponentials and logarithms.

First, let's be more precise by what we mean by an EML term and by a standard elementary function.

Definition (EML Term): An EML term in the variables $x_1,\dots,x_n$ is any expression obtained recursively, starting from $\{1, x_1,\dots,x_n\}$, by the rule $$ T,S \mapsto \exp T-\log S. $$ Each such term, evaluated at a point where all the $\log$ arguments are nonzero, determines an analytic germ; we take $\mathcal T_n$ to be the class of germs representable this way, together with their maximal analytic continuations.
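
For concreteness, two immediate consequences of the definition: since $\log 1 = 0$, the constant $e$ and the exponential function are themselves EML terms,

$$ E(1,1) = \exp 1 - \log 1 = e, \qquad E(x,1) = \exp x - \log 1 = \exp x. $$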

Definition (Standard Elementary Function): The standard elementary functions $\mathcal{E}_n$ are the smallest class of multivalued analytic functions on domains in $\mathbb{C}^n$ containing the rational functions and closed under

  • the arithmetic (field) operations and composition,
  • exponentiation and logarithm, and
  • taking roots of polynomial equations whose coefficients lie in the class (algebraic adjunction).

What we will show is that the class of elementary functions defined this way is strictly larger than the class induced by EML terms.

Lemma: Every EML term has solvable monodromy group. In particular, if $f\in\mathcal T_n$ is algebraic over $\mathbb C(x_1,\dots,x_n)$, then its monodromy group is a finite solvable group.

Proof: We prove by induction on EML term construction. Constants and coordinate functions have trivial monodromy.

For the inductive step, suppose $f = \exp A-\log B$ with $A,B\in\mathcal T_n$, and assume that $\mathrm{Mon}(A)$ and $\mathrm{Mon}(B)$ are solvable. We argue in three steps.

Step 1: $\mathrm{Mon}(\exp A)$ is solvable. The germs of $\exp A$ are images under $\exp$ of the germs of $A$, with germs of $A$ differing by $2\pi i\mathbb Z$ collapsing to the same value. So there is a surjection $\mathrm{Mon}(A)\twoheadrightarrow\mathrm{Mon}(\exp A)$, and a quotient of a solvable group is solvable.

Step 2: $\mathrm{Mon}(\log B)$ is solvable. At a generic point $p$, germs of $\log B$ are parameterized by pairs $(b,k)$ where $b$ is a germ of $B$ at $p$ and $k\in\mathbb Z$ selects the branch of $\log$. A loop $\gamma$ acts by $$ (b,k)\mapsto\bigl(\rho_B(\gamma)(b), k+n(\gamma,b)\bigr), $$ where $\rho_B(\gamma)$ is the monodromy action of $\gamma$ on germs of $B$, and $n(\gamma,b)\in\mathbb Z$ is the winding number around $0$ of the analytic continuation of $b$ along $\gamma$. The projection $\mathrm{Mon}(\log B)\to\mathrm{Mon}(B)$ onto the first component is a surjective homomorphism. Its kernel consists of the elements of $\mathrm{Mon}(\log B)$ induced by loops $\gamma$ with $\rho_B(\gamma)=\mathrm{id}$, which then act only by integer shifts on the $k$-coordinate. Let $S_B$ be the set of germs of $B$ at $p$. For each $b\in S_B$, such a loop determines an integer shift $n(\gamma,b)$, so the kernel embeds in the direct product $\mathbb Z^{S_B}$. In particular, the kernel is abelian. Hence $\mathrm{Mon}(\log B)$ is an extension of $\mathrm{Mon}(B)$ by an abelian group, and extensions of solvable groups by abelian groups are solvable.

Step 3: $\mathrm{Mon}(f)$ is solvable. At a generic point, a germ of $f=\exp A-\log B$ is obtained by subtraction from a pair (germ of $\exp A$, germ of $\log B$), and analytic continuation acts componentwise on such pairs. This gives a surjection of $\pi_1$ onto some subgroup $$ H \le \mathrm{Mon}(\exp A)\times\mathrm{Mon}(\log B), $$ and, since $f$ is obtained from the pair by subtraction, this descends to a surjection $H\twoheadrightarrow\mathrm{Mon}(f)$. So $\mathrm{Mon}(f)$ is a quotient of a subgroup of a direct product of solvable groups, hence solvable.

The second statement of the lemma follows: an algebraic function has finitely many branches, so its monodromy group is finite; a solvable group that is finite is, well, finite and solvable. ∎

Remark. This is the core of Khovanskii's topological Galois theory; see Topological Galois Theory: Solvability and Unsolvability of Equations in Finite Terms.

Theorem: $\mathcal T_n \subsetneq \mathcal E_n$.

Proof: $\mathcal E_n$ is closed under algebraic adjunction, so any local branch of an algebraic function is elementary. In particular, a branch of a root of the generic quintic $$ f^5+a_1f^4+a_2f^3+a_3f^2+a_4f+a_5=0 $$ is elementary.

Suppose for contradiction that at some point $p$ a germ of a branch of this root agrees with a germ of an EML term $T$. By uniqueness of analytic continuation, the Riemann surfaces obtained by maximally continuing these two germs coincide, so in particular their monodromy groups coincide. The monodromy group of the generic quintic is $S_5$, which is not solvable. But by the lemma, the monodromy group of any EML term is solvable. Contradiction.

Hence $\mathcal T_n$ is a strict subset of $\mathcal E_n$. ∎

Edit (15 April 2026): This article used to have an example proving that the real and complex absolute value cannot be expressed over their entire domain as EML terms under the conventional definition of $\log$. I wrote it to emphasize that Odrzywołek's approach required mathematical "patching" in order to work as intended. However, it ended up more distracting than illuminating, and was tangential to the point about the definition of "elementary", so it has been removed.

14 Apr 2026 12:00am GMT

13 Apr 2026

feedPlanet Lisp

Scott L. Burson: FSet v2.4.2: CHAMP Bags, and v1.0 of my FSet book!

A couple of weeks ago I released FSet 2.4.0, which brought a CHAMP implementation of bags, filling out the suite of CHAMP types. 🚀 FSet users should have a look at the release page, as it also contained a number of bug fixes and minor changes.

I've since released v2.4.1 and v2.4.2, with some more bug fixes.

But the big news is the book! It brings together all the introductory material I have written, plus a lot more, along with a complete API Reference chapter.

FSet is now in the state I decided last summer I wanted to get it into: faster, better tested and debugged, more feature-complete, and much better documented than it has ever been in its nearly two decades of existence. I am, of course, very much hoping that these months of work have made the library more interesting and accessible to CL programmers who haven't tried it yet. I am even hoping that its existence helps attract newcomers to the CL community. Time will tell!

13 Apr 2026 6:21am GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

29 Jan 2026 11:00pm GMT

26 Jan 2026

feedFOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?

26 Jan 2026 11:00pm GMT