12 Apr 2026
Planet Grep
Lionel Dricot: Having nothing to think

Having nothing to think
After "I have nothing to hide", here comes the era of "I have nothing to think".
Taking us for idiots, because it works
Google announces it: there are more people in the world with an Android smartphone than people with access to clean water and sanitation.
This implies, still according to Google, that these people need more AI.
No, seriously, I am not joking. This is really what Google's people go around saying in universities, at events that look a bit like what cigarette vendors might organize in sports clubs to train young people to smoke by offering a year of free cigarettes.
And they hammer the point home: in any case, nobody gets a choice about whether to use AI or not. That's just how it is. Exactly what Anthropic said: "Whether you like it or not, get ready for this stupid world!"
My cigarette-vendor example may sound exaggerated, but I have just witnessed, in my university town of Louvain-La-Neuve, a competition that consisted of running around the lake while drinking four 33 cl beers. The race was sponsored by... a beer brand, of course. The university seems to have given the event its blessing, and plenty of students are naive enough to find it cool...
I am a great naïf myself. I believed that people were, for the most part, morally "good": that they often have a negative impact when they work to maximize a company's profit, but simply do not realize it.
But that is wrong. We now know that people like Mark Zuckerberg are quite simply morally inhuman, and that everyone involved knows very well what they are doing and why they are doing it. Meta's products are specifically tuned to make teenagers as addicted as possible, to disrupt them during their school years. That is not a side effect; it is the product's primary purpose. The incessant distraction is not an unanticipated effect; it is literally what Facebook's engineers are trying to achieve.
And to think that most teachers are in "we have to live with it, we have to learn to use it reasonably" mode.
No. That is wrong, and it is completely stupid. It is like giving teenagers training sessions, sponsored by Philip Morris, where they would learn to smoke "without inhaling". Or telling them it's cool to run while drinking more beer than your stomach can hold.
The truth is that most teachers are completely addicted to their smartphones, and that it is more reassuring to teach your addiction as something positive than to question yourself.
Advertising takes us for idiots. It takes politicians for idiots. And, experimentally speaking, it is quite right: we are! It works even better than expected because, as a result, we go on to prove them right and support the very people who are making fools of us!
Look at the GDPR and the cookie banners that annoy everyone and for which "Europe" gets the blame.
Contrary to popular belief, the annoying cookie banners on websites are not the GDPR's fault. In fact, in the vast majority of cases, those banners are illegal. Gee explains it very well in a comic.
But there is worse: if those banners are annoying, it is because they were explicitly designed to be. Yes, to lower public support for the GDPR. It is pure political manipulation, deliberate and conscious, by the advertising industry. They know exactly what they are doing: making our lives miserable to discredit political institutions, so they can push more ads on us.
The end of intellectualism
An important article on the return of orality and the decline of reading. Orality means emotion instead of information, charisma instead of truth, manipulation instead of rationality. It also means the disappearance of long-term effort.
That may sound alarmist but, factually, when scientific researchers, supposedly the world's intellectual elite, are reduced to generating papers that cite papers that do not exist, it does raise some questions.
Yes, it is the end of the world, the end of a world!
But ChatGPT is only the cherry on the cake. The real reason is that we have been devaluing intellectual work for decades. We celebrate the CEO who makes random decisions in five minutes. We ask everyone to dig holes and fill them back in to "keep the economy going". We live in a world where Julius climbs the ladder!
In short, we are merely steering the world toward its most logical destination given the indicators we use to optimize it. It is entirely normal. It is entirely expected. We will never reduce CO₂ emissions as long as we try to maximize a country's GDP. Keeping the economy going means maximizing work, and therefore consuming as many joules as possible; joules that have to be produced by emitting CO₂. So-called "renewable" energies are merely a way of emitting "less CO₂ per joule". That is a good thing in itself, but it does not solve the underlying problem, which is precisely that we seek to consume as many joules as possible. Indeed, the result of the success of renewables is plain to see: we simply consume more joules.
We are living through the end of intellectualism just as we lived through the end of privacy. No, it is not really the end. It is just that intellectualism, like privacy before it, has lost its status as a fundamental value and become an underground pursuit, valued only by a few circles increasingly regarded as marginal, including, above all, within the most prestigious academic institutions.
"I have nothing to hide" has subtly turned into "I have nothing to think".
From smartphones to ChatGPT by way of streaming series, the tech giants have joined forces to convince us to stop thinking: that thinking is passé, that it is tiring, that it serves no purpose. You don't need a PhD in political science to understand that this suits a lot of people.
My defense: the library effect
At bottom, chatbots merely increase the availability of information, including false information. That availability reduces cognitive engagement and therefore the brain's development. The effect was already visible, and studied, in 2011 as "the Google effect": if we know a piece of information is available online, we no longer try to remember it, we look it up. (How many times have you picked up your phone because you couldn't remember the name of an actor in a film?)
The amusing thing is that, well before reading these studies, I had instinctively adopted the opposite stance a few years ago: I refuse to look up a piece of information immediately. My motivation was to avoid interrupting an ongoing conversation (I also discourage whoever I am talking with from pulling out their phone) or my ongoing work (I know myself: if I look something up, thirty minutes later I am reading the Wikipedia page on the biography of Henri IV or on some rare species of jellyfish in New Caledonia).
One could argue that a library works the same way. But I see fundamental differences.
First, there is the physical component: when I look up information in a book, I move around, I search a shelf. My brain associates the movement with the memorization. My library may be fluid and shifting, but it keeps a structure. Over time, remembering a piece of information comes down to remembering the movements required to go and fetch the book.
Second, the information in books is stable and fixed. It may be wrong, but I know it was not generated to improve the book's SEO or to harvest likes. It does not suddenly turn into a 404 error.
That stability reassures my brain. It is not stuck in "perception", the attempt to make sense of a changing environment, which is a source of stress. It is instead on familiar ground and can afford to extrapolate, to imagine, to make unexpected connections.
In short, I give my brain the chance to be creative; I offer it a stable space where it can experiment with movement and change in what it creates: words, stories. It is no accident that I write only on a typewriter, or from my terminal in an editor that has barely changed in forty years (Vim). I want to free up mental space to create and to think.
If you have ever gone to a library just to be somewhere quiet and think, you know exactly what I mean.
In short, I am an old-fashioned technopunk... but you already knew that!
About the author:
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
12 Apr 2026 8:28pm GMT
Frederic Descamps: A response to Percona’s 2026 MySQL ecosystem benchmark: useful data, but not a realistic MariaDB comparison
Percona's new 2026 benchmark report is interesting because it puts several MySQL-family releases on the same graphs and shares a public repository for the test harness. That openness is welcome. But after reading both the article and the published scripts, I do not think the post supports broad conclusions about "ecosystem performance," and I especially […]
12 Apr 2026 8:28pm GMT
Dries Buytaert: Introducing headers.dev
My HTTP Header Analyzer started as a small tool on my blog six years ago. It makes HTTP headers visible and explains what they do. You give it a URL, it fetches the response headers, and it breaks down what is present, what is missing, and what is possibly misconfigured.
It has been used more than 5 million times, despite being buried at https://dri.es/headers. So last week I finally registered headers.dev and gave it a proper home.
While I was at it, I also audited the analyzer against OWASP's recommendations for HTTP headers. I found a few gaps worth fixing. A site could have a Content Security Policy that included unsafe-inline and unsafe-eval, and the analyzer would describe each directive without mentioning that those two keywords effectively disable XSS protection. Or you could set HSTS with preload but forget includeSubDomains, which means your preload submission gets silently rejected. These are the kinds of issues a human reviewer might miss but an automated tool should catch. I fixed those and more, so if you've used the analyzer before, your scores might look different now.
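For illustration, here is roughly what those two situations look like as raw response headers (hypothetical values, not output from the analyzer). The first pair would now be flagged; the second passes both checks:
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'
Strict-Transport-Security: max-age=31536000; preload
Content-Security-Policy: default-src 'self'; script-src 'self'
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload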
The analyzer also learned about dozens of new headers. Speculation-Rules, for example, tells browsers to prerender pages a user is likely to visit next. Cache-Status replaces the patchwork of vendor-specific X-Cache headers with a single structured format that can describe multiple cache layers in one value. And Reporting-Endpoints is the modern replacement for Report-To, using a simpler key-value syntax for telling browsers where to send security violation reports.
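For a rough idea of their shapes, example values (illustrative only; the URLs and endpoint names are made up) might look like:
Speculation-Rules: "/speculation-rules.json"
Cache-Status: ExampleCDN; hit; ttl=3600
Reporting-Endpoints: csp-endpoint="https://reports.example.com/csp"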
Try it at headers.dev. It now explains over 150 headers and catches misconfigurations that it used to miss. The Open Web is better when more people check their HTTP headers.
12 Apr 2026 8:28pm GMT
Planet Debian
Dirk Eddelbuettel: littler 0.3.23 on CRAN: Mostly Internal Fixes


The twenty-fourth release of littler as a CRAN package landed on CRAN just now, continuing the now twenty-one-year history (!!) of a package (initially not on CRAN) started by Jeff in 2006 and joined by me a few weeks later.
littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only began to do in later years.
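Both styles in a minimal sketch (illustrative only: r here is littler's binary, and -e evaluates an expression much as Rscript's -e does):
# piping: shout a line of text arriving on stdin
echo "littler is neat" | r -e 'cat(toupper(readLines("stdin")), "\n")'
# shebang scripting: a hypothetical two-line script, made executable
#!/usr/bin/env r
cat("hello from littler\n")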
littler lives on Linux and Unix, has its difficulties on macOS due to some braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet: the build system could be extended, see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.
This release, which comes just two months after the previous 0.3.22 release that brought a few new features, is mostly internal. (The previous release erroneously had 0.3.23 in its blog and social media posts; it really was 0.3.22, and this one now is 0.3.23.) Mattias Ellert addressed a nag (when building for a distribution) about one example file with a shebang not having executable mode set. I accommodated the ever-changing interface of the C API of R (within about twelve hours of being notified). A few other smaller changes were made as well, polishing a script or two as usual; see below for more.
The full change description follows.
Changes in littler version 0.3.23 (2026-04-12)
Changes in examples scripts
- Correct spelling in installGithub.r to lower-case h
- The r2u.r script now recognises 'resolute' aka 26.06
- installRub.r can install (more easily) from r-multiverse
- A file permission was corrected (Mattias Ellert in #131)
Changes in package
Update script count and examples in README.md
Continuous integration scripts received minor updates
The C level access to the R API was updated to reflect most recent standards (Dirk in #132)
My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
12 Apr 2026 2:47pm GMT
Colin Watson: Free software activity in March 2026

My Debian contributions this month were all sponsored by Freexian.
You can also support my work directly via Liberapay or GitHub Sponsors.
OpenSSH
I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSS-API exchange algorithms.
I'm looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won't be in packages that nearly everyone has installed.
Python packaging
New upstream versions:
- dill
- django-modeltranslation
- isort
- langtable
- pathos
- pendulum
- pox
- ppft
- pydantic-extra-types
- pytango
- python-asyncssh
- python-datamodel-code-generator
- python-evalidate
- python-packaging (including fixes for python-hatch-requirements-txt and python-pyproject-examples)
- python-zxcvbn-rs-py
- rpds-py
- smart-open
- trove-classifiers
I packaged pybind11-stubgen, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn't generate imports in a stable order; I contributed a fix for that upstream.
I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)
In trixie-backports, I updated pytest-django to 4.12.0.
I fixed a number of packages to support building with pyo3 0.28:
- pendulum
- pydantic-core
- python-jellyfish
- python-zxcvbn-rs-py
- rpds-py
Other build/test failures:
- python-bcrypt: Upcoming rust-getrandom update
- python-cotengrust: FTBFS: error[E0432]: unresolved import rand::rngs::OsRng
- austin: FTBFS: E ModuleNotFoundError: No module named 'pycparser.plyparser' (contributed upstream)
- taurus: FTBFS: dh_auto_build: error: pybuild --build -i python{version} -p "3.14 3.13" returned exit code 13
- python-datamodel-code-generator: Depends: python3-isort (< 8) but 8.0.0-1 is to be installed (contributed upstream)
Rust packaging
New upstream versions:
- rust-rpds
Other bits and pieces
I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2.
Code reviews
- python-backports.zstd: Obsolete with Python 3.14 (sponsored partial fix from YOKOTA Hiroshi)
12 Apr 2026 10:13am GMT
Vasudev Kamath: Hardening the Unpackageable: A systemd-run Sandbox for Third-Party Binaries

The Shift in Software Consumption
Historically, I have been a "distribution-first" user. Sticking to tools packaged within the Debian archives provides a layer of trust; maintainers validate licenses, audit code, and ensure the entire dependency chain is verified. However, the rapid pace of development in the Generative AI space, specifically with new tools like Gemini-CLI, has made this traditional approach difficult to sustain.
Many modern CLI tools are built within the npm or Python ecosystems. For a distribution packager, these are a nightmare; packaging a single tool often requires packaging a massive, shifting dependency chain. Consequently, I found myself forced to use third-party binaries, bypassing the safety of the Debian archive.
The Supply Chain Risk
Recent supply chain attacks affecting widely used packages like axios and LiteLLM have made it clear: running unvetted binaries on a personal system is a significant risk. These scripts often have full access to your $HOME directory, SSH keys, and the system D-Bus.
After discussing these concerns with a colleague, I was inspired by his approach of using a Flatpak-style sandbox for even basic applications like Google Chrome. I decided to build a generalized version of this using OpenCode and Qwen 3.6 Fast (which was available for free use at the time) to create a robust, transient sandbox utility.
The Solution: safe-run-binary
My script, safe-run-binary, leverages systemd-run to execute binaries within an isolated scope. It implements strict filesystem masking and resource control to ensure that even if a dependency is compromised, the "blast radius" is contained.
Key Technical Features
- 1. Virtualized Home Directory (tmpfs): Instead of exposing my real home directory, the script mounts a tmpfs over $HOME. It then selectively creates and bind-mounts only the necessary subdirectories (like .cache or .config) into a virtual structure. This prevents the application from ever "seeing" sensitive files like ~/.ssh or ~/.gnupg.
- 2. D-Bus Isolation via xdg-dbus-proxy: For GUI applications, providing raw access to the D-Bus is a security hole. The script uses xdg-dbus-proxy to sit between the application and the system bus. By using the --filter and --talk=org.freedesktop.portal.* flags, the app can only communicate with necessary portals (like the file picker) rather than sniffing the entire bus.
- 3. Linux Namespace Restrictions: The sandbox utilizes several systemd execution properties to harden the process:
  - RestrictNamespaces=yes: For CLI tools, this prevents the app from creating its own nested namespaces.
  - PrivateTmp=yes: Ensures a private /tmp space that isn't shared with the host.
  - NoNewPrivileges=yes: Prevents the binary from gaining elevated permissions through SUID/SGID bits.
- 4. GPU and Audio Passthrough: The script intelligently detects and binds Wayland, PipeWire, and NVIDIA/DRI device nodes. This allows browsers like Firefox to run with full hardware acceleration and audio support while remaining locked out of the rest of the filesystem.
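The core of the approach can be sketched with a bare systemd-run invocation (a simplified illustration, not the actual script: the property names are real systemd ones, but the bind path is hypothetical, the D-Bus and GPU plumbing is omitted, and it runs a transient service with a pty so that the execution properties apply):
systemd-run --user --pty --collect \
  -p PrivateTmp=yes \
  -p NoNewPrivileges=yes \
  -p RestrictNamespaces=yes \
  -p TemporaryFileSystem="%h" \
  -p BindPaths="%h/.gemini-config" \
  -- npx @google/gemini-cli
Here TemporaryFileSystem= hides the real home behind a tmpfs and BindPaths= re-exposes only the chosen directory, which is exactly the virtualized-home pattern described above.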
Usage
To run a CLI tool like Gemini-CLI with access only to a specific directory:
safe-run-binary -b ~/.gemini-config -- npx @google/gemini-cli
For a GUI application like Firefox:
safe-run-binary --gui -b ~/.mozilla -b ~/.cache/mozilla -b ~/Downloads -- firefox
Conclusion
While it is not always possible to escape the need for third-party software, it is possible to control the environment in which it operates. By leveraging native Linux primitives like systemd and namespaces, high-grade isolation is achievable.
PS: If you spot any issues or have suggestions for improving the script, feel free to raise a PR on the repo.
12 Apr 2026 7:23am GMT
08 Apr 2026
Planet Lisp
Tim Bradshaw: Rules for Lisp programs
Some very serious rules. Very serious.
The essential rule. If you are not building languages in Lisp, why are you even here?
The lesser rules.
- If you write a program which uses defclass you are probably making a mistake.
- If you write a program which uses the CLOS MOP you are making a mistake.
- If you write a program which uses LOOP for any purpose other than creating a better iteration construct you are making a mistake.
- If you write a program which uses LOOP only to create a better iteration construct you are probably making a mistake.
- If you write a program which uses explicit package-qualified names more than very infrequently you will be cast into the outer darkness along with your program.
I will not be taking questions.
08 Apr 2026 10:48am GMT
06 Apr 2026
Planet Lisp
Patrick Stein: Nomic Coding Game
About 30 years ago, I had an idea for a coding game inspired by Nomic. It occurred to me last month that all of the tools I need are readily available now.
Pen-and-paper Nomic
The pen-and-paper game of Nomic (by Peter Suber) has an initial ruleset which describes how one proposes changes to the rules, how one gets those changes ratified, a way to award points when someone's rule change is ratified, and a rule declaring that the winner is the first player to amass 100 points. Some of the rules are mutable and some are immutable and there are rules about turning mutable rules into immutable ones and vice-versa.
The game was meant to show some of the paradoxes of self-amendment. It was meant to lead people into situations where it was clear that certain actions were both legal (or even mandatory) and illegal.
A drastically simplified starting set of rules might look like:
- There are these players: Alice, Bob, Carol, David, and Mel.
- Any of the players can propose a change to these rules at any time when there is not already an outstanding proposal.
- When a player makes a proposal, all players (including the player making the proposal) must immediately vote: Yay or Nay.
- If a proposal garners more Yay than Nay votes, it takes effect immediately. Otherwise, the proposal is rejected.
- The winner is the first person to score 100 points.
Nomic in Code
So, 30 years ago, I had the idea that it would be fabulous to write some code to referee a Nomic game. However, because interpretation of the rules is so horrendously human, it felt impossible. Today, in 2026, it seems one could maybe get Claude, Gemini, or some other LLM to referee. But this doesn't much interest me either, really. I cannot get any of them to keep track of something that I made them write down. I cannot imagine that I would be happy with their interpretation of whether my move is legal given the current state of the rules, nor with how they would amend the rules if my move were legal.
What felt slightly more attainable 30 years ago would be to make it a battle in code:
- The players propose deltas to the current code.
- The players vote on which deltas to approve.
- If the resulting code declares you the winner, you win.
This was nice and all, but it was also too static. The rules about who can vote and how votes are tallied and such wouldn't be subject to change.
Nomic in Code in 2026
Fast-forward to last month, when I realized that, with the GitHub API, I could implement a very Nomic-ish pull request battle game. I can:
- Gather information about all of the open pull requests on a repository,
- Check out a copy of the current main branch of that same repository,
- Run the code on the main branch of that repository and give it the information that I collected about the open pull requests, and
- Have the code on the main branch tell me which open pull requests (if any) to accept or reject.
To be truly in Nomic's full spirit, it would be nice to allow the code in the repository to interact with the GitHub API on its own. Alas, that would immediately let the players vote in changes that expose my GitHub tokens, so it would be a gaping security hole: not only because it would let users impersonate me, but because it would let them end-run around the actual code in the repository to make changes to the main branch in the repository.
So, as it is, I have a supervisor written in Common Lisp which handles all of the interaction with GitHub and various game repositories (one to play in Common Lisp, one to play in JavaScript, and one to play in Python). The supervisor:
- fetches all of the open pull requests;
- annotates each pull request with:
- all of the reviews on the pull request,
- all of the comments on the pull request, and
- all of the commits on the pull request;
- clones the main branch of the game repository;
- runs the game code from that main branch, giving it the annotated list of open pull requests encoded as JSON on standard input;
- reads the JSON-encoded output from the game code; and
- acts accordingly.
The game code, given a list of open pull requests, can reply with one of the following messages:
{
  "decision": "winner",
  "name": name-of-winner,
  "message": optional-reason-for-decision
}
{
  "decision": "accept",
  "id": id-number-of-pull-request-to-accept,
  "message": optional-reason-for-decision
}
{
  "decision": "reject",
  "id": id-number-of-pull-request-to-reject,
  "message": optional-reason-for-decision
}
{
  "decision": "defer"
}
The "defer" decision means that there is not enough information at the moment. Maybe, in the future, with other pull requests or other comments or reviews we will be able to make some move.
If the game code replies with anything that isn't one of the four types of replies shown above, the supervisor assumes the latest merge broke the code and reverts the change.
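To make the contract concrete, here is a minimal, hypothetical game program for this protocol in Python (one of the three game languages). It assumes the supervisor's standard input is a JSON array of pull-request objects, each carrying a numeric "id", and implements the dumbest possible starting rule: accept everything, oldest first.
#!/usr/bin/env python3
import json
import sys

def decide(pull_requests):
    """Return one protocol message for the supervisor."""
    if not pull_requests:
        # Nothing to act on yet; ask the supervisor to wait.
        return {"decision": "defer"}
    # Accept the open pull request with the lowest id.
    oldest = min(pull_requests, key=lambda pr: pr["id"])
    return {
        "decision": "accept",
        "id": oldest["id"],
        "message": "initial rules accept every proposal",
    }

if __name__ == "__main__":
    print(json.dumps(decide(json.load(sys.stdin))))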
The Ask
I haven't been able to drum up enough players for a game in any of my regular haunts. So, I am looking for tolerant players who will help me give it a test run or two to work out the kinks in the supervisor. Some areas where I foresee potential issues:
- There may be scenarios that cause the game to reach an impasse.
- There are probably some GitHub responses that the supervisor doesn't do the right thing with (in fact, I think I just thought of something a malicious player could do if they are a collaborator rather than coming in through forked repos).
- There might be special issues related to pull requests coming in from forks rather than within the repo which I cannot test without making myself a second GitHub account.
- Who can say what the optimal number of players is, at this point?
So, if you're tolerant of some bumps in the process, have a GitHub account (or will make one), and are interested in a Common Lisp battle of pull requests, let me know so we can get a game going.
The post Nomic Coding Game first appeared on nklein software.
06 Apr 2026 4:09am GMT
03 Apr 2026
Planet Lisp
Marco Antoniotti: An Update on MK-DEFSYSTEM
There are still a few of us (at least two) who are using MK:DEFSYSTEM. The venerable system construction tool has accumulated a lot of ancient cruft, some of it quite convoluted.
Recently I went back to MK:DEFSYSTEM and "cleaned up" some of the code, especially regarding the pathname construction for each component. I also used some simpler hierarchical tricks using defstruct only.
The result should be more solid and clearer in the steps that comprise some "macro tasks". Of course, a rewrite using CLOS would change the coding style, but the choice has been made to keep the MK:DEFSYSTEM code base quite... retro (and somewhat simple).
Why did I go back to MK:DEFSYSTEM? As usual, it is because of a rabbit hole I fell into: I will blog about it later on (hint: HEΛP).
MK-DEFSYSTEM quick history as of March 2026
MK-DEFSYSTEM (or MK:DEFSYSTEM, or MAKE:DEFSYSTEM) was originally written by Mark Kantrowitz as part of the original "CMU Lisp Utilities" collection, an early "public" set of Common Lisp code and utilities that, in the writer's opinion, forms part of the basis of most Common Lisp written to date.
As stated (by M. Kantrowitz himself) in this file header, the original version of MK-DEFSYSTEM was inspired by the Symbolics DEFSYSTEM (or DEFSYS) tool. Yet, MK-DEFSYSTEM differs significantly from it.
In its original form, MK-DEFSYSTEM was built in the CLtL1 era; it accommodated a lot of variance among filesystems and CL implementations, and it still bears those idiosyncrasies. CLtL2 (1990) first, and ANSI (1994) next, started reshaping the code base.
MK-DEFSYSTEM was originally distributed under a license agreement that made redistribution tricky. In 1999, the writer (that'd be me, Marco Antoniotti) contacted Mark Kantrowitz offering to become a maintainer while reworking the distribution license to hammer some FOSS into it. Mark Kantrowitz graciously agreed and, after that, the writer got literally and physically hugged by a few Common Lisp developers because they could use MK-DEFSYSTEM more freely.
Of course, ASDF came along and it solved the same problems that Symbolics (and Kent Pitman's) DEFSYS and MK-DEFSYSTEM solve, plus much more.
Yet, MK-DEFSYSTEM has some nice features (in the eye of the beholder).
MK-DEFSYSTEM still ships in one file, defsystem.lisp, that you can LOAD in your Common Lisp init file. Of course, a big chunk of its current code base is "backward compatibility" and new ok-we-miss-UIOP-and-or-at-least-CL-FAD functionality, plus an ever-growing ongoing commentary like this one.
Given this background, the writer has been maintaining MK-DEFSYSTEM for a long time; more recently, since 2008, Madhu has made significant changes (and maintains a fork of his own with some extra bells and whistles).
Of course, many other contributors helped over the years, and are acknowledged in the early Change Log and in comments in the code.
In early 2026, the writer cleaned up the code and reworked some of the logic by factoring out some code from the main functions. In particular, the interplay among CREATE-COMPONENT-PATHNAMES, GENERATE-COMPONENT-PATHNAMES, COMPONENT-FULL-PATHNAME, and COMPONENT-FULL-NAMESTRING is better organized; in addition, new structures leveraging the DEFSTRUCT :INCLUDE feature have been introduced, rendering the code TYPECASE-able.
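As a toy illustration of that pattern (the names here are invented for the example, not MK-DEFSYSTEM's actual internals), :INCLUDE makes each substructure a proper subtype of its parent, so dispatching code can use TYPECASE instead of manual tag slots:
(defstruct component
  name)                                 ; slots shared by every component

(defstruct (file-component (:include component))
  source-pathname)                      ; inherits NAME, adds its own slot

(defstruct (module-component (:include component))
  (components ()))                      ; a module holds sub-components

(defun component-kind (c)
  ;; Substructures created with :INCLUDE are subtypes of COMPONENT,
  ;; so TYPECASE dispatches on them directly.
  (typecase c
    (file-component :file)
    (module-component :module)
    (component :other)))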
MK-DEFSYSTEM is old, but it works. It is quirky, but it works (at least for the two or three known users, which, in 2026, is already a big chunk of the Common Lisp users' community). Moreover, it has, at least in the eye of the beholder, a more user-friendly API for most use cases, especially for plain Common Lisp code.
The current MK-DEFSYSTEM repository is at https://gitlab.common-lisp.net/mantoniotti/mk-defsystem
(*) It is assumed that the reader knows about all the acronyms, tools and systems referred to in the text.
'(cheers)
03 Apr 2026 1:04am GMT
29 Jan 2026
FOSDEM 2026
Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
29 Jan 2026 11:00pm GMT
26 Jan 2026
FOSDEM 2026
Guided sightseeing tours
If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.
26 Jan 2026 11:00pm GMT
Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
26 Jan 2026 11:00pm GMT