11 Apr 2026

Planet Grep

Lionel Dricot: Nothing to Think

After "I have nothing to hide", here comes the era of "I have nothing to think".

Being taken for idiots, because it works

Google announces it: there are more people in the world with an Android smartphone than people with access to clean water and sewerage.

This implies, still according to Google, that these people need more AI.

No, seriously, I am not joking. This is really what Google's people go around saying in universities, at events that look a bit like what cigarette vendors might organise in sports clubs to train young people to smoke by offering a year of free cigarettes.

And they drive the point home: nobody has a choice about using AI or not anyway. That's just how it is. Exactly what Anthropic said: "Whether you like it or not, get ready for this stupid world!"

My cigarette-vendor example may sound exaggerated, but I have just witnessed, in my university town of Louvain-La-Neuve, a competition that consisted of running around the lake while drinking four 33cl beers. The race was sponsored by… a beer brand, of course. The university seems to have given its blessing to the event, and plenty of students are naive enough to find it cool…

I am quite naive myself. I used to believe that people were mostly morally "good": they often produce a negative impact when they work to maximise a company's profit, but only because they don't realise it.

But that is wrong. We now know that people like Mark Zuckerberg are simply morally inhuman, and that everyone involved knows perfectly well what they are doing and why. Meta's products are specifically tweaked to make teenagers as addicted as possible, to disrupt them during their school years. That is not a side effect; it is the product's primary purpose. The incessant distraction is not an unsuspected consequence; it is literally what Facebook's engineers are trying to achieve.

And to think that most teachers are in "We have to live with it, we have to learn to use it reasonably" mode.

No. That is wrong, and it is completely stupid. It is like giving teenagers training sessions, sponsored by Philip Morris, where they would learn to smoke "without inhaling". Or telling them it is cool to run while drinking more beer than your stomach can hold.

The truth is that most teachers are completely addicted to their smartphones, and that it is more reassuring to teach your addiction as something positive than to question yourself.

Advertising takes us for idiots. It takes politicians for idiots. And, experimentally speaking, it is quite right: we are! It works even better than expected because, as a result, we prove them right and support the very people who are laughing at us.

Look at the GDPR and the cookie banners that annoy everyone and for which "Europe" gets the blame.

Contrary to popular belief, those annoying cookie banners on websites are not the GDPR's fault. In fact, in the vast majority of cases, these banners are illegal. Gee explains it very well in comic form:

But there is worse: if these banners are annoying, it is because they were explicitly designed to be. Yes, in order to lower public support for the GDPR. It is pure, deliberate, conscious political manipulation by the advertising industry. They know exactly what they are doing: making our lives miserable to discredit political institutions, the better to push ads on us.

The end of intellectualism

An important article on the return of orality and the decline of reading. Orality means emotion instead of information, charisma instead of truth, manipulation instead of rationality. It also means the disappearance of long-term effort.

This sounds alarmist but, factually, when scientific researchers, supposedly the world's intellectual elite, are reduced to generating papers that cite papers that do not exist, it does raise some questions.

Yes, it is the end of the world. The end of a world!

But ChatGPT is only the cherry on the cake. The real reason is that we have been devaluing intellectual work for decades. We celebrate the CEO who makes random decisions in five minutes. We ask everyone to dig holes and fill them back in to "keep the economy going". We live in a world where Julius climbs the ladder!

In short, we are merely leading the world to its most logical destination given the indicators we use to optimise it. It is entirely normal. It is entirely expected. We will never reduce CO₂ emissions as long as we try to maximise a country's GDP. Keeping the economy going means maximising work, and therefore consuming as many joules as possible. Joules that have to be produced by emitting CO₂. So-called "renewable" energies are merely a way of emitting "less CO₂ per joule". Which is a good thing in itself, but does not solve the underlying problem: that we are trying, precisely, to consume as many joules as possible. Indeed, the result of the success of renewable energy is obvious: we simply consume more joules.

We are living through the end of intellectual life the way we lived through the end of privacy. No, it is not really the end. It is just that intellectual life, like privacy before it, has lost its status as a fundamental value and become an underground thing, valued only by a few circles increasingly regarded as marginal, including, above all, within the most prestigious academic institutions.

"I have nothing to hide" has subtly turned into "I have nothing to think".

From smartphones to ChatGPT by way of streaming series, the tech giants have joined forces to convince us to stop thinking: thinking is passé, it is tiring, it is useless. You do not need a doctorate in political science to understand whom that suits.

My defence: the library effect

At bottom, chatbots merely increase the availability of information, including false information. That availability reduces cognitive engagement, and therefore the brain's development. The effect was already visible and studied in 2011 as the "Google effect": if we know a piece of information is available online, we no longer try to remember it, we look it up (how many times have you picked up your phone because you could not remember the name of an actor in a film?).

What is amusing is that, well before reading these studies, I instinctively adopted the opposite posture a few years ago. I refuse to look up a piece of information immediately. My motivation was not to interrupt an ongoing conversation (I also discourage whoever I am talking to from pulling out their phone) and not to interrupt my ongoing work (I know myself: if I look up the information, thirty minutes later I am reading the Wikipedia page on the biography of Henri IV or on a rare species of jellyfish in New Caledonia).

One could argue that the same applies to a library. But I see fundamental differences.

First, there is the physical component: when I look up a piece of information in a book, I move around, I search a shelf. My brain associates the movement with memorisation. My library may be fluid and ever-shifting, but it keeps a structure. Over time, remembering a piece of information comes down to remembering the trip required to fetch the book.

Second, the information in books is stable and fixed. It may be wrong, but I know it was not generated to improve the book's SEO or to harvest likes. It does not suddenly turn into a 404 error.

This stability reassures my brain. The brain is then not stuck in "perception", the attempt to make sense of a changing environment, which is a source of stress. It is instead on familiar ground and can afford to extrapolate, to imagine, to make unexpected connections.

In short, I give my brain the chance to be creative. I offer it a stable space in which it can experiment with movement and change in what it creates: words, stories. It is no accident that I write only on a typewriter, or from my terminal in an editor that has barely changed in forty years (Vim). I want to free up mental space to create and to think.

If you have ever gone to a library just to enjoy the quiet and think, you know exactly what I mean.

In short, I am an old-fashioned technopunk… But you knew that already!

About the author:

I am Ploum and I have just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

11 Apr 2026 6:51am GMT

Frederic Descamps: A response to Percona’s 2026 MySQL ecosystem benchmark: useful data, but not a realistic MariaDB comparison

Percona's new 2026 benchmark report is interesting because it puts several MySQL-family releases on the same graphs and shares a public repository for the test harness. That openness is welcome. But after reading both the article and the published scripts, I do not think the post supports broad conclusions about "ecosystem performance," and I especially […]

11 Apr 2026 6:51am GMT

Dries Buytaert: Introducing headers.dev

My HTTP Header Analyzer started as a small tool on my blog six years ago. It makes HTTP headers visible and explains what they do. You give it a URL, it fetches the response headers, and it breaks down what is present, what is missing, and what is possibly misconfigured.

It has been used more than 5 million times, despite being buried at https://dri.es/headers. So last week I finally registered headers.dev and gave it a proper home.

While I was at it, I also audited the analyzer against OWASP's recommendations for HTTP headers. I found a few gaps worth fixing. A site could have a Content Security Policy that included unsafe-inline and unsafe-eval, and the analyzer would describe each directive without mentioning that those two keywords effectively disable XSS protection. Or you could set HSTS with preload but forget includeSubDomains, which means your preload submission gets silently rejected. These are the kinds of issues a human reviewer might miss but an automated tool should catch. I fixed those and more, so if you've used the analyzer before, your scores might look different now.
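The two checks described above can be sketched in a few lines. This is a hypothetical illustration of the logic the paragraph implies, not the analyzer's actual code:

```python
# Illustrative re-implementation of two of the audit checks described
# above (hypothetical code, not from headers.dev itself).

def csp_disables_xss_protection(csp: str) -> bool:
    """True if a Content-Security-Policy value contains the two keywords
    that effectively disable its XSS protection."""
    lowered = csp.lower()
    return "'unsafe-inline'" in lowered or "'unsafe-eval'" in lowered

def hsts_preload_is_valid(hsts: str) -> bool:
    """A Strict-Transport-Security value asking for `preload` is rejected
    by the preload list unless includeSubDomains is also present."""
    directives = {d.strip().lower() for d in hsts.split(";")}
    if "preload" not in directives:
        return True  # not requesting preload, nothing to reject
    return "includesubdomains" in directives

print(csp_disables_xss_protection("script-src 'self' 'unsafe-inline'"))  # True
print(hsts_preload_is_valid("max-age=31536000; preload"))                # False
```

(Real preload submission also requires a sufficiently long max-age; the sketch checks only the directive the paragraph mentions.)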

The analyzer also learned about dozens of new headers. Speculation-Rules, for example, tells browsers to prerender pages a user is likely to visit next. Cache-Status replaces the patchwork of vendor-specific X-Cache headers with a single structured format that can describe multiple cache layers in one value. And Reporting-Endpoints is the modern replacement for Report-To, using a simpler key-value syntax for telling browsers where to send security violation reports.
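For illustration, example values for the three headers just mentioned (syntax taken from the relevant specifications, not from the analyzer's output; the URLs are placeholders):

```http
Speculation-Rules: "/rules/prefetch.json"
Cache-Status: OriginCache; hit; ttl=1100, "CDN Company Here"; hit; ttl=545
Reporting-Endpoints: default="https://reports.example/endpoint"
```

Note how a single Cache-Status value describes two cache layers, comma-separated, where the old X-Cache headers needed one vendor-specific header per layer.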

Try it at headers.dev. It now explains over 150 headers and catches misconfigurations that it used to miss. The Open Web is better when more people check their HTTP headers.

11 Apr 2026 6:51am GMT

10 Apr 2026

Planet Debian

Reproducible Builds: Reproducible Builds in March 2026

Welcome to the March 2026 report from the Reproducible Builds project!

These reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. Linux kernel hash-based integrity checking proposed
  2. Distribution work
  3. Tool development
  4. Upstream patches
  5. Documentation updates
  6. Two new academic papers
  7. Misc news

Linux kernel hash-based integrity checking proposed

Eric Biggers posted to the Linux Kernel Mailing List in response to a patch series posted by Thomas Weißschuh to introduce a calculated hash-based system of integrity checking to complement the existing signature-based approach. Thomas' original post mentions:

The current signature-based module integrity checking has some drawbacks in combination with reproducible builds. Either the module signing key is generated at build time, which makes the build unreproducible, or a static signing key is used, which precludes rebuilds by third parties and makes the whole build and packaging process much more complicated.

However, Eric's followup message goes further:

I think this actually undersells the feature. It's also much simpler than the signature-based module authentication. The latter relies on PKCS#7, X.509, ASN.1, OID registry, crypto_sig API, etc in addition to the implementations of the actual signature algorithm (RSA / ECDSA / ML-DSA) and at least one hash algorithm.


Distribution work

In Debian this month,

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Tool development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 314 and 315 to Debian.

In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 315.


rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there; it powers, amongst other things, reproduce.debian.net.

A new version, 0.26.0, was released this month, with the following improvements:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Documentation updates

Once again, there were a number of improvements made to our website this month including:


Two new academic papers

Marc Ohm, Timo Pohl, Ben Swierzy and Michael Meier published a paper on the threat of cache poisoning in the Python ecosystem:

Attacks on software supply chains are on the rise, and attackers are becoming increasingly creative in how they inject malicious code into software components. This paper is the first to investigate Python cache poisoning, which manipulates bytecode cache files to execute malicious code without altering the human-readable source code. We demonstrate a proof of concept, showing that an attacker can inject malicious bytecode into a cache file without failing the Python interpreter's integrity checks. In a large-scale analysis of the Python Package Index, we find that about 12,500 packages are distributed with cache files. Through manual investigation of cache files that cannot be reproduced automatically from the corresponding source files, we identify classes of reasons for irreproducibility to locate malicious cache files. While we did not identify any malware leveraging this attack vector, we demonstrate that several widespread package managers are vulnerable to such attacks.

A PDF of the paper is available online.
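The cache files in question are Python's compiled-bytecode (.pyc) files. As a minimal, standard-library-only sketch of the mechanism that such reproducibility checks build on, PEP 552 hash-based invalidation embeds a hash of the source in the .pyc header, so a cache file can be checked against its source:

```python
import importlib.util
import os
import py_compile
import tempfile

# Compile a trivial module with hash-based invalidation (PEP 552):
# the .pyc then embeds a hash of the source instead of a timestamp.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    pyc = py_compile.compile(
        src,
        invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
    )
    # .pyc header: 4-byte magic, 4-byte flags, then (for hash-based
    # pycs, flag bit 0 set) an 8-byte hash of the source.
    with open(pyc, "rb") as f:
        header = f.read(16)
    flags = int.from_bytes(header[4:8], "little")
    embedded = header[8:16]
    with open(src, "rb") as f:
        expected = importlib.util.source_hash(f.read())
    print(bool(flags & 1), embedded == expected)
```

A cache file whose embedded hash cannot be reproduced from the shipped source is exactly the kind of anomaly the paper's analysis flags for manual inspection.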


Mario Lins of the University of Linz, Austria, has published their PhD thesis on the topic of software supply chain transparency:

We begin by examining threats to the software distribution stage - the point at which artifacts (e.g., mobile apps) are delivered to end users - with an emphasis on mobile ecosystems [and] we next focus on the operating system on mobile devices, with an emphasis on mitigating bootloader-targeted attacks. We demonstrate how to compensate lost security guarantees on devices with an unlocked bootloader. This allows users to flash custom operating systems on devices that no longer receive security updates from the original manufacturer without compromising security. We then move to the source code stage. [Also,] we introduce a new architecture to ensure strong source-to-binary correspondence by leveraging the security guarantees of Confidential Computing technology. Finally, we present The Supply Chain Game, an organizational security approach that enhances standard risk-management methods. We demonstrate how game-theoretic techniques, combined with common risk management practices, can derive new criteria to better support decision makers.

A PDF of the paper is available online.


Misc news

On our mailing list this month:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 Apr 2026 4:13pm GMT

Jamie McClelland: AI Hacking the Planet

A colleague asked me if we should move all our money to our pillow cases after reading the latest AI editorial from Thomas Friedman. The article reads like a press release from Anthropic, repeating the claim that their latest AI model is so good at finding software vulnerabilities that it is a danger to the world.

I think I now know what it's like to be a doctor who is forced to watch Gray's Anatomy.

By now every journalist should be able to recognize the AI publicity playbook:

Step 1: Start with a wildly unsubstantiated claim about how dangerous your product is:

AI will cause human extinction before we have a chance to colonize Mars (remember that one? Even Kim Stanley Robinson, author of perhaps the most compelling science fiction about colonizing Mars, calls bullshit on it).

AI will eliminate all of our jobs (this one was extremely effective at providing cover for software companies laying off staff, but it has quickly dawned on people that the companies that did this are living in chaos, not humming along happily with functional robots).

AI will discover massive software vulnerabilities allowing bad actors to "hack pretty much every major software system in the world". (Did Friedman pull that directly from Anthropic's press release or was that his contribution?)

Step 2: To help stave off human collapse, only release the new version to a vetted group of software companies and developers, preferably ones with big social media followings

Step 3: Wait for the limited-release developers to spew unbridled enthusiasm and shocking examples that seem to suggest this new AI product is truly unbelievable

Step 4: Watch stock prices and valuations soar

Step 5: Release to the world, and experience a steady stream of mockery as people discover how wrong you are

Step 6: Start over

Even if Friedman missed the textbook example of the playbook, I have to ask: if you think bad actors compromising software, resulting in massive loss of private data, major outages and wasted resources, needs to be reported on, then where have you been for the last 10 years? This literally happens on a daily basis due to the fundamentally flawed way capitalism has been writing software even before the invention of AI. A small part of me wonders: maybe AI writing software is not so bad, because how could it be any worse than it is now?

Also, let's keep in mind that AI's super ability at finding vulnerable software depends on having access to the software's source code, which most companies keep locked up tight. That means the owners of the software can use AI to find vulnerabilities and fix them but bad actors can't.

Oh, but wait, what if a company is so incompetent that it accidentally releases its proprietary software to the Internet?

Surely that would allow AI bots to discover its vulnerabilities and destroy the company, right? I'm not sure if anyone has discovered world-ending vulnerabilities in Anthropic's Claude Code since it was accidentally released, but it is fun to watch people mock software that is clearly written by AI (and, spoiler alert, it seems way worse than software written now).

Well… we probably should all be keeping our money in a pillow case anyway.

10 Apr 2026 12:27pm GMT

Reproducible Builds (diffoscope): diffoscope 317 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 317. This version includes the following changes:

[ Chris Lamb ]
* Limit python3-guestfs Build-Dependency to !i386. (Closes: #1132974)
* Try to fix PYPI_ID_TOKEN debugging.

[ Holger Levsen ]
* Add ppc64el to the list of architectures for python3-guestfs.

You can find out more by visiting the project homepage.

10 Apr 2026 12:00am GMT

08 Apr 2026

Planet Lisp

Tim Bradshaw: Rules for Lisp programs

Some very serious rules. Very serious.

The essential rule. If you are not building languages in Lisp, why are you even here?

The lesser rules.

  1. If you write a program which uses defclass you are probably making a mistake.
  2. If you write a program which uses the CLOS MOP you are making a mistake.
  3. If you write a program which uses LOOP for any purpose other than creating a better iteration construct you are making a mistake.
  4. If you write a program which uses LOOP only to create a better iteration construct you are probably making a mistake.
  5. If you write a program which uses explicit package-qualified names more than very infrequently you will be cast into the outer darkness along with your program.

I will not be taking questions.

08 Apr 2026 10:48am GMT

06 Apr 2026

Planet Lisp

Patrick Stein: Nomic Coding Game

About 30 years ago, I had an idea for a coding game inspired by Nomic. It occurred to me last month that all of the tools I need are readily available now.

Pen-and-paper Nomic

The pen-and-paper game of Nomic (by Peter Suber) has an initial ruleset which describes how one proposes changes to the rules, how one gets those changes ratified, a way to award points when someone's rule change is ratified, and a rule declaring that the winner is the first player to amass 100 points. Some of the rules are mutable and some are immutable and there are rules about turning mutable rules into immutable ones and vice-versa.

The game was meant to show some of the paradoxes of self-amendment. It was meant to lead people into situations where it was clear that certain actions were both legal (or even mandatory) and illegal.

A drastically simplified starting set of rules might look like:

Nomic in Code

So, 30 years ago, I had the idea that it would be fabulous to write some code to referee a Nomic game. However, because interpretation of the rules is so horrendously human, it felt impossible. Today, in 2026, it seems one could maybe get Claude, Gemini, or some other LLM to referee. But this doesn't much interest me either, really. I cannot get any of them to keep track of something that I made them write down. I cannot imagine that I would be happy with their interpretation of whether my move is legal given the current state of the rules, nor with how they would amend the rules if my move is legal.

What felt slightly more attainable 30 years ago would be to make it a battle in code:

This was nice and all, but it was also too static. The rules about who can vote and how votes are tallied and such wouldn't be subject to change.

Nomic in Code in 2026

Fast-forward to last month, when I realized that, with the GitHub API, I could implement a very Nomic-ish pull request battle game. I can:

To be truly in Nomic's full spirit, it would be nice to allow the code in the repository to interact with the GitHub API on its own. Alas, that would immediately let the players vote in changes that expose my GitHub tokens, so it would be a gaping security hole: not only because it would let users impersonate me, but because it would let them do an end-run around the actual code in the repository to make changes to the main branch in the repository.

So, as it is, I have a supervisor written in Common Lisp which handles all of the interaction with GitHub and various game repositories (one to play in Common Lisp, one to play in JavaScript, and one to play in Python). The supervisor:

The game code, given a list of open pull requests, can reply with one of the following messages:

{
  "decision": "winner",
  "name": name-of-winner,
  "message": optional-reason-for-decision
}

{
  "decision": "accept",
  "id": id-number-of-pull-request-to-accept,
  "message": optional-reason-for-decision
}

{
  "decision": "reject",
  "id": id-number-of-pull-request-to-reject,
  "message": optional-reason-for-decision
}

{
  "decision": "defer"
}

The "defer" decision means that there is not enough information at the moment. Maybe, in the future, with other pull requests or other comments or reviews we will be able to make some move.

If the game code replies with anything that isn't one of the four types of replies shown above, the supervisor assumes the latest merge broke the code and reverts the change.
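A supervisor-side check along those lines could look like the following sketch (hypothetical code; only the four message shapes above are taken from the post, everything else is assumed):

```python
import json

# Required extra fields for each valid decision type; "message" is
# optional everywhere, so it is not listed.
REQUIRED = {
    "winner": {"name"},
    "accept": {"id"},
    "reject": {"id"},
    "defer": set(),
}

def classify_reply(raw: str) -> str:
    """Return the decision name, or 'broken' if the reply is not one of
    the four valid message types (which would trigger a revert)."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return "broken"
    decision = msg.get("decision") if isinstance(msg, dict) else None
    if decision not in REQUIRED:
        return "broken"
    if not REQUIRED[decision] <= msg.keys():
        return "broken"
    return decision

print(classify_reply('{"decision": "accept", "id": 7}'))  # accept
print(classify_reply('{"decision": "defer"}'))            # defer
print(classify_reply('not json at all'))                  # broken
```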

The Ask

I haven't been able to drum up enough players for a game in any of my regular haunts. So, I am looking for tolerant players who will help me give it a test run or two to work out the kinks in the supervisor. Some areas where I foresee potential issues:

So, if you're tolerant of some bumps in the process, have a GitHub account (or will make one), and are interested in a Common Lisp battle of pull requests, let me know so we can get a game going.

The post Nomic Coding Game first appeared on nklein software.

06 Apr 2026 4:09am GMT

03 Apr 2026

Planet Lisp

Marco Antoniotti: An Update on MK-DEFSYSTEM

There are still a few of us (at least two) who are using MK:DEFSYSTEM. The venerable system construction tool has accumulated a lot of ancient cruft, some of it quite convoluted.

Recently I went back to MK:DEFSYSTEM and "cleaned up" some of the code, especially regarding the pathname construction for each component. I also used some simpler hierarchical tricks using defstruct only.

The result should be more solid and clearer in the steps that comprise some "macro tasks". Of course, a rewrite using CLOS would change the coding style, but the choice has been made to keep the MK:DEFSYSTEM code base quite... retro (and somewhat simple).

Why did I go back to MK:DEFSYSTEM? As usual, it is because of a rabbit hole I fell into: I will blog about it later on (hint: HEΛP).

MK-DEFSYSTEM quick history as of March 2026

MK-DEFSYSTEM (or MK:DEFSYSTEM, or MAKE:DEFSYSTEM) was originally written by Mark Kantrowitz as part of the original "CMU Lisp Utilities" collection, an early "public" set of Common Lisp code and utilities that, in the writer's opinion, forms one of the foundations of most Common Lisp writing to date.

As stated (by M. Kantrowitz himself) in this file header, the original version of MK-DEFSYSTEM was inspired by the Symbolics DEFSYSTEM (or DEFSYS) tool. Yet, MK-DEFSYSTEM differs significantly from it.

In its original form, MK-DEFSYSTEM was built in the CLtL1 era; it accommodated a lot of variance among filesystems and CL implementations, and it still bears those idiosyncrasies. CLtL2 (1990) first and ANSI (1994) next started reshaping the code base.

MK-DEFSYSTEM was originally distributed under a license agreement that made redistribution tricky. In 1999, the writer (that'd be me, Marco Antoniotti) contacted Mark Kantrowitz, offering to become a maintainer while reworking the distribution license to hammer some FOSS into it. Mark Kantrowitz graciously agreed and, after that, the writer got literally and physically hugged by a few Common Lisp developers because they could use MK-DEFSYSTEM more freely.

Of course, ASDF came along and it solved the same problems that Symbolics (and Kent Pitman's) DEFSYS and MK-DEFSYSTEM solve, plus much more.

Yet, MK-DEFSYSTEM has some nice features (in the eye of the beholder).

MK-DEFSYSTEM still ships in one file, defsystem.lisp, that you can LOAD in your Common Lisp init file. Of course, a big chunk of its current code base is "backward compatibility" and new ok-we-miss-UIOP-and-or-at-least-CL-FAD functionality, plus an ever-growing running commentary like this one.

Given this background, the writer has been maintaining MK-DEFSYSTEM for a long time; more recently, since 2008, Madhu has made significant changes (and himself maintains a fork with some bells and whistles of his own).

Of course, many other contributors helped over the years, and are acknowledged in the early Change Log and in comments in the code.

In early 2026, the writer cleaned up the code and reworked some of the logic, by factoring out some code from main functions. In particular, the CREATE-COMPONENT-PATHNAMES, GENERATE-COMPONENT-PATHNAMES, COMPONENT-FULL-PATHNAME, COMPONENT-FULL-NAMESTRING interplay is better organized; plus new structures, leveraging DEFSTRUCT :INCLUDE feature have been introduced, rendering the code TYPECASE-able.

MK-DEFSYSTEM is old, but it works. It is quirky, but it works (at least for the two or three known users, which, in 2026, is already a big chunk of the Common Lisp users' community). Moreover, it does have, at least in the eye of the beholder, a more user-friendly API for most use cases, especially for plain Common Lisp code.

The current MK-DEFSYSTEM repository is at https://gitlab.common-lisp.net/mantoniotti/mk-defsystem

(*) It is assumed that the reader knows about all the acronyms, tools and systems referred to in the text.


'(cheers)

03 Apr 2026 1:04am GMT

29 Jan 2026

FOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

29 Jan 2026 11:00pm GMT

26 Jan 2026

FOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?

26 Jan 2026 11:00pm GMT