13 Apr 2026

Planet Grep

Lionel Dricot: Having nothing to think

Having nothing to think

After "I have nothing to hide", the era of "I have nothing to think" has arrived

Being taken for idiots because it works

Google announces it: there are more people in the world with an Android smartphone than people with access to clean water and sewers.

This implies, still according to Google, that these people need more AI.

No, seriously, I'm not kidding. This is really what Google's people go around saying at universities, at events that look a bit like what cigarette vendors might organize in sports clubs to train young people to smoke by offering a year of free cigarettes.

And they drive the point home: in any case, nobody has a choice about using AI or not. That's just how it is. Exactly what Anthropic said: "Whether you like it or not, get ready for this stupid world!"

My cigarette-vendor example may seem exaggerated, but I have just witnessed, in my university town of Louvain-La-Neuve, a competition that consisted of running around the lake while drinking four 33cl beers. The race was sponsored by… a beer brand, of course. The university seems to have given its blessing to the event, and many students are naive enough to find it cool…

I am quite the naïf myself. I used to believe that people were mostly morally "good": that they often have a negative impact when working to maximize a company's profit, but only because they don't realize it.

But that is wrong. We now know that people like Mark Zuckerberg are quite simply morally inhuman, and that everyone involved knows very well what they are doing and why they are doing it. Meta's products are specifically tuned to make teenagers as addicted as possible, to disrupt them throughout their schooling. This is not a side effect; it is the product's primary purpose. The incessant distraction is not an unanticipated effect; it is literally what Facebook's engineers are trying to achieve.

And to think that most teachers are in "we have to live with it, we have to learn to use it sensibly" mode.

No. That is wrong, and it is completely stupid. It is like giving teenagers training sessions, sponsored by Philip Morris, where they would learn to smoke "without inhaling". Or telling them it's cool to run while drinking more beer than your stomach can hold.

The truth is that most teachers are completely addicted to their smartphones, and it is more reassuring to teach your addiction as something positive than to question yourself.

Advertising takes us for idiots. It takes politicians for idiots. And, experimentally speaking, it is quite right: we are! It works even better than expected because, as a result, we prove them right and support the very people making fools of us!

Look at the GDPR and the cookie banners that annoy everyone and for which "Europe" gets the blame.

Contrary to popular belief, the annoying cookie banners on websites are not the GDPR's fault. In fact, in the vast majority of cases, these banners are illegal. Gee explains it very well in comic form:

But it gets worse: if these banners are annoying, it is because they were explicitly designed to be. Yes, designed to lower public support for the GDPR. It is pure, deliberate, conscious political manipulation by the advertising industry. They know exactly what they are doing: making our lives miserable in order to discredit political institutions so they can push more ads on us.

The end of intellectualism

An important article on the return to orality and the decline of reading. Orality means emotion instead of information, charisma instead of truth, manipulation instead of rationality. It also means the disappearance of long-term effort.

This sounds alarmist but, factually, when scientific researchers, supposedly the world's intellectual elite, are reduced to generating papers that cite papers that do not exist, it does raise some questions.

Yes, it is the end of the world, the end of a world!

But ChatGPT is only the cherry on top. The real reason is that we have been devaluing intellectual work for decades. We celebrate the CEO who makes random decisions in five minutes. We ask everyone to dig holes and fill them back in to "keep the economy going". We live in a world where Julius climbs the ladder!

In short, we are merely steering the world toward its most logical destination given the indicators we use to optimize it. It is entirely normal. It is entirely expected. We will never reduce CO₂ emissions as long as we try to maximize a country's GDP. Keeping the economy going means maximizing work, and therefore consuming as many joules as possible. Joules that have to be produced by emitting CO₂. So-called "renewable" energies are merely a way of emitting "less CO₂ per joule". Which is a good thing in itself, but it does not solve the underlying problem, namely that we are trying to consume as many joules as possible. The outcome of the success of renewables is obvious, in fact: we simply consume more joules.

We are living through the end of intellectuality just as we lived through the end of privacy. No, it is not really the end. It is just that intellectuality, like privacy before it, has lost its status as a fundamental value and become an underground thing, valued only by a few circles increasingly regarded as marginal, including, above all, within the most prestigious academic institutions.

"I have nothing to hide" has subtly turned into "I have nothing to think".

From smartphones to ChatGPT by way of streaming series, the tech giants have joined forces to convince us to stop thinking, to convince us that thinking is passé, that it is tiring, that it is useless. You don't need a PhD in political science to understand that this suits a lot of people.

My defense: the library effect

At bottom, chatbots merely increase the availability of information, including false information. This availability reduces cognitive engagement and therefore the brain's development. The effect was already visible and studied in 2011 as the "Google effect". If we know that a piece of information is available online, we no longer try to remember it, we look it up (how many times have you picked up your phone because you couldn't remember the name of an actor in a film?).

What is amusing is that, well before reading these studies, I had instinctively adopted the opposite stance a few years ago. I refuse to look up a piece of information immediately. My motivation was to avoid interrupting an ongoing conversation (I also discourage whoever I'm talking to from pulling out their phone) or my ongoing work (I know myself: if I go look something up, thirty minutes later I'm reading the Wikipedia page on the biography of Henri IV or on a rare species of jellyfish in New Caledonia).

You could argue that the same is true of a library. But I see fundamental differences.

First, there is the physical component: when I look for information in a book, I move around, I search through a shelf. My brain associates the movement with memorization. My library may be fluid and shifting, but it keeps a structure. Over time, remembering a piece of information comes down to remembering the movements required to go and fetch the book.

Second, the information in books is stable and fixed. It may be wrong, but I know it was not generated to improve the book's SEO or to collect likes. It does not suddenly turn into a 404 error.

This stability reassures my brain. It is not stuck in "perception", the attempt to make sense of a changing environment, which is a source of stress. It is instead on familiar ground and can afford to extrapolate, to imagine, to make unexpected connections.

In short, I give my brain the chance to be creative; I offer it a stable space where it can experiment with movement and change in what it creates: words, stories. It is no accident that I write only on a typewriter, or from my terminal in an editor that has barely changed in 40 years (Vim). I want to free up mental space to create and to think.

If you have ever gone to a library just to enjoy the quiet and think, you know exactly what I mean.

In short, I'm a has-been technopunk… But you knew that already!

About the author:

I'm Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

13 Apr 2026 8:56pm GMT

Frederic Descamps: A response to Percona’s 2026 MySQL ecosystem benchmark: useful data, but not a realistic MariaDB comparison

Percona's new 2026 benchmark report is interesting because it puts several MySQL-family releases on the same graphs and shares a public repository for the test harness. That openness is welcome. But after reading both the article and the published scripts, I do not think the post supports broad conclusions about "ecosystem performance," and I especially […]

13 Apr 2026 8:56pm GMT

Dries Buytaert: Introducing headers.dev

My HTTP Header Analyzer started as a small tool on my blog six years ago. It makes HTTP headers visible and explains what they do. You give it a URL, it fetches the response headers, and it breaks down what is present, what is missing, and what is possibly misconfigured.
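The raw material the analyzer works from is nothing more than a plain HEAD request, along these lines (example.com is just a stand-in):

curl -sI https://example.com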

It has been used more than 5 million times, despite being buried at https://dri.es/headers. So last week I finally registered headers.dev and gave it a proper home.

While I was at it, I also audited the analyzer against OWASP's recommendations for HTTP headers. I found a few gaps worth fixing. A site could have a Content Security Policy that included unsafe-inline and unsafe-eval, and the analyzer would describe each directive without mentioning that those two keywords effectively disable XSS protection. Or you could set HSTS with preload but forget includeSubDomains, which means your preload submission gets silently rejected. These are the kinds of issues a human reviewer might miss but an automated tool should catch. I fixed those and more, so if you've used the analyzer before, your scores might look different now.
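To illustrate those two gaps, here are hypothetical header values of the kind in question (not taken from any real site):

# Each directive parses fine, but unsafe-inline and unsafe-eval gut the XSS protection
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'

# Rejected by the preload list: includeSubDomains is missing
Strict-Transport-Security: max-age=31536000; preload

# Accepted form
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload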

The analyzer also learned about dozens of new headers. Speculation-Rules, for example, tells browsers to prerender pages a user is likely to visit next. Cache-Status replaces the patchwork of vendor-specific X-Cache headers with a single structured format that can describe multiple cache layers in one value. And Reporting-Endpoints is the modern replacement for Report-To, using a simpler key-value syntax for telling browsers where to send security violation reports.
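For a rough idea of what these three look like on the wire (values illustrative, not prescriptive):

Speculation-Rules: "/speculation-rules.json"
Cache-Status: ExampleOrigin; hit; ttl=1100, "Example CDN"; hit
Reporting-Endpoints: default="https://reports.example.com/main"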

Try it at headers.dev. It now explains over 150 headers and catches misconfigurations that it used to miss. The Open Web is better when more people check their HTTP headers.

13 Apr 2026 8:56pm GMT

Planet Lisp

Scott L. Burson: FSet v2.4.2: CHAMP Bags, and v1.0 of my FSet book!

A couple of weeks ago I released FSet 2.4.0, which brought a CHAMP implementation of bags, filling out the suite of CHAMP types. 🚀 FSet users should have a look at the release page, as it also contained a number of bug fixes and minor changes.

I've since released v2.4.1 and v2.4.2, with some more bug fixes.

But the big news is the book! It brings together all the introductory material I have written, plus a lot more, along with a complete API Reference chapter.

FSet is now in the state I decided last summer I wanted to get it into: faster, better tested and debugged, more feature-complete, and much better documented than it has ever been in its nearly two decades of existence. I am, of course, very much hoping that these months of work have made the library more interesting and accessible to CL programmers who haven't tried it yet. I am even hoping that its existence helps attract newcomers to the CL community. Time will tell!

13 Apr 2026 6:21am GMT

12 Apr 2026

Planet Debian

Dirk Eddelbuettel: littler 0.3.23 on CRAN: Mostly Internal Fixes


The twenty-fourth release of littler as a CRAN package just landed on CRAN, continuing the now twenty-one year history (!!) of a package started (initially outside of CRAN) by Jeff in 2006 and joined by me a few weeks later.

littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It has also always loaded the methods package, which Rscript only began to do in later years.
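The canonical quick demo shows the piping in one line; any one-line R expression works the same way:

echo 'cat(pi^2, "\n")' | r

The equivalent shebang form, saved as an executable script:

#!/usr/bin/env r
cat(pi^2, "\n")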

littler lives on Linux and Unix, has its difficulties on macOS due to some braindeadedness there (whoever thought case-insensitive filesystems were a good default?), and simply does not exist on Windows (yet: the build system could be extended, see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.

This release, which comes just two months after the previous 0.3.22 release that brought a few new features, is mostly internal. (The previous release erroneously had 0.3.23 in its blog and social media posts; it really was 0.3.22, and this one now is 0.3.23.) Mattias Ellert addressed a nag (when building for a distribution) about one example file with a shebang not having its executable mode set. I accommodated the ever-changing C API of R (within about twelve hours of being notified). A few other smaller changes were made as well, polishing a script or two as usual; see below for more.

The full change description follows.

Changes in littler version 0.3.23 (2026-04-12)

  • Changes in examples scripts

    • Correct spelling in installGithub.r to lower-case h

    • The r2u.r now recognises 'resolute' aka 26.06

    • installRub.r can install (more easily) from r-multiverse

    • A file permission was corrected (Mattias Ellert in #131)

  • Changes in package

    • Update script count and examples in README.md

    • Continuous integration scripts received minor updates

    • The C level access to the R API was updated to reflect most recent standards (Dirk in #132)

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

12 Apr 2026 2:47pm GMT

Colin Watson: Free software activity in March 2026

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSS-API exchange algorithms.

I'm looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won't be in packages that nearly everyone has installed.

Python packaging

New upstream versions:

I packaged pybind11-stubgen, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn't generate imports in a stable order; I contributed a fix for that upstream.

I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)

In trixie-backports, I updated pytest-django to 4.12.0.

I fixed a number of packages to support building with pyo3 0.28:

Other build/test failures:

Rust packaging

New upstream versions:

Other bits and pieces

I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2.

Code reviews

12 Apr 2026 10:13am GMT

Vasudev Kamath: Hardening the Unpackageable: A systemd-run Sandbox for Third-Party Binaries

The Shift in Software Consumption

Historically, I have been a "distribution-first" user. Sticking to tools packaged within the Debian archive provides a layer of trust: maintainers validate licenses, audit code, and ensure the entire dependency chain is verified. However, the rapid pace of development in the Generative AI space, specifically with new tools like Gemini-CLI, has made this traditional approach difficult to sustain.

Many modern CLI tools are built within the npm or Python ecosystems. For a distribution packager, these are a nightmare; packaging a single tool often requires packaging a massive, shifting dependency chain. Consequently, I found myself forced to use third-party binaries, bypassing the safety of the Debian archive.

The Supply Chain Risk

Recent supply chain attacks affecting widely used packages like axios and LiteLLM have made it clear: running unvetted binaries on a personal system is a significant risk. These scripts often have full access to your $HOME directory, SSH keys, and the system D-Bus.

After discussing these concerns with a colleague, I was inspired by his approach of using a Flatpak-style sandbox for even basic applications like Google Chrome. I decided to build a generalized version of this, using OpenCode and Qwen 3.6 Fast (which was available for free use at the time), to create a robust, transient sandbox utility.

The Solution: safe-run-binary

My script, safe-run-binary, leverages systemd-run to execute binaries within an isolated scope. It implements strict filesystem masking and resource control to ensure that even if a dependency is compromised, the "blast radius" is contained.

Key Technical Features

1. Virtualized Home Directory (tmpfs)
Instead of exposing my real home directory, the script mounts a tmpfs over $HOME. It then selectively creates and bind-mounts only the necessary subdirectories (like .cache or .config) into a virtual structure. This prevents the application from ever "seeing" sensitive files like ~/.ssh or ~/.gnupg.
2. D-Bus Isolation via xdg-dbus-proxy
For GUI applications, providing raw access to the D-Bus is a security hole. The script uses xdg-dbus-proxy to sit between the application and the system bus. By using the --filter and --talk=org.freedesktop.portal.* flags, the app can only communicate with necessary portals (like the file picker) rather than sniffing the entire bus.
3. Linux Namespace Restrictions

The sandbox utilizes several systemd execution properties to harden the process:

  • RestrictNamespaces=yes: For CLI tools, this prevents the app from creating its own nested namespaces.
  • PrivateTmp=yes: Ensures a private /tmp space that isn't shared with the host.
  • NoNewPrivileges=yes: Prevents the binary from gaining elevated permissions through SUID/SGID bits.
4. GPU and Audio Passthrough
The script intelligently detects and binds Wayland, PipeWire, and NVIDIA/DRI device nodes. This allows browsers like Firefox to run with full hardware acceleration and audio support while remaining locked out of the rest of the filesystem.
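The post does not include the script itself, so here is a minimal sketch of how these pieces can be combined with systemd-run; the paths and the binary are placeholders, and the real safe-run-binary does considerably more (environment detection, GPU and audio passthrough, cleanup):

# Transient user service: tmpfs over $HOME, with selected dirs bound back in
systemd-run --user --pipe --wait \
  -p TemporaryFileSystem="$HOME" \
  -p BindPaths="$HOME/.cache $HOME/.config" \
  -p PrivateTmp=yes \
  -p NoNewPrivileges=yes \
  -p RestrictNamespaces=yes \
  -- /path/to/untrusted-binary

# For GUI apps, a filtered session bus along the lines described above:
xdg-dbus-proxy "$DBUS_SESSION_BUS_ADDRESS" /run/user/1000/sandbox-bus \
  --filter --talk='org.freedesktop.portal.*'
# ...then point DBUS_SESSION_BUS_ADDRESS at the proxy socket inside the sandbox.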

Usage

To run a CLI tool like Gemini-CLI with access only to a specific directory:

safe-run-binary -b ~/.gemini-config -- npx @google/gemini-cli

For a GUI application like Firefox:

safe-run-binary --gui -b ~/.mozilla -b ~/.cache/mozilla -b ~/Downloads -- firefox

Conclusion

While it is not always possible to escape the need for third-party software, it is possible to control the environment in which it operates. By leveraging native Linux primitives like systemd and namespaces, high-grade isolation is achievable.

PS: If you spot any issues or have suggestions for improving the script, feel free to raise a PR on the repo.

12 Apr 2026 7:23am GMT

Planet Lisp

Wimpie Nortje: Dependency hell revisited, updating my Qlot workflow.

I wrote on this topic before but the landscape has changed a lot since then.

Skip to the new Qlot workflow.

When you work on projects that become even slightly complex it is a matter of time until you run into problems where the specific version of a particular library becomes important. This happens in most, if not all, programming languages.

In the Common Lisp environment Quicklisp has become the de facto standard for loading libraries, including fetching and loading their dependencies. Quicklisp distributes libraries in "distributions" which are point-in-time snapshots of all the known and working libraries at the time of distribution creation.

An advantage of this approach is that you are guaranteed that all libraries available in the distribution can be loaded with any of the others. Some disadvantages are that 1) if a library was included in an older distribution but no longer loads cleanly, it gets removed from the distribution, and 2) libraries are only added or updated when a new distribution is cut.

Though libraries can be put in ~/quicklisp/local-projects/ in order to supplement or override those in the distribution, Quicklisp does not provide any mechanism for pinning the state of ~/quicklisp/local-projects/.
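Concretely, an override is nothing more than a checkout in the right place, which is exactly why it cannot be pinned (repository chosen purely for illustration):

# This clone now silently shadows the distribution's copy of clack
git clone https://github.com/fukamachi/clack.git ~/quicklisp/local-projects/clack
# Quicklisp keeps no record of which commit, or even which repo, this came from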

Some Quicklisp attributes:

  1. All libraries in a distribution can be loaded with any of the others. Those that no longer do are removed from the distribution.
  2. Libraries are only added or updated at distribution cutting time.
  3. You specify a single distribution version, not individual library versions. The distribution is used globally, across all projects and across all lisp implementations.
  4. Libraries can be loaded from ~/quicklisp/local-projects/. Anything there will override the version in the distribution.
  5. Nothing about the ~/quicklisp/local-projects/ content can be specified using Quicklisp. It needs to be managed outside of Quicklisp.
  6. Libraries are loaded from the current Quicklisp distribution. There is no way to specify which distribution version a particular project must use.
  7. Quicklisp can create a bundle of all the loaded systems that can be committed with the project code and used from there rather than the distribution, i.e. vendoring.
  8. The source code for libraries included in a distribution is stored in a dedicated location for Quicklisp; it is not read from the original source repo.

Depending on your situation these attributes may be positive, negative or irrelevant.

There are projects like Ultralisp that have different philosophies regarding the distribution content but they still depend on Quicklisp for all other aspects. Thus they share most of the above attributes.

Since my previous post on this topic much has changed in the library version arena. There are now many projects that address different aspects of the above list; the topic of vendoring has gained momentum; and Qlot has changed a lot, to the extent that some code samples in older posts no longer work.

Vendoring is the idea that all the libraries your project depends on are actually part of your project and as such should be committed as part of the project code. Both Quicklisp and Qlot support this with QL:bundle and QLOT:bundle, and the Vend tool is entirely focused on vendoring.

The significant changes in Qlot broke my development workflow. Since I now had to spend time fixing this, it was a good opportunity to evaluate some of the other library versioning tools. The issues that made me hesitant to adapt to the new Qlot without considering other options are:

I evaluated many of the other version management tools and came to the conclusion that Qlot is the closest to what I wanted, and I set off trying to find a workflow that would adhere to my requirements, listed here:

After some fiddling with Qlot I learned that:

  1. Qlot installs a complete and independent Quicklisp inside your project directory. This Quicklisp has no dependence on, relation to, or knowledge of the global Quicklisp.
  2. $ qlot exec ccl mostly arranges things such that Quicklisp is loaded from the project-local installation. If you can arrange for that to happen without using Qlot, then you can start your lisp normally.
  3. When the project-local Quicklisp is loaded it doesn't know about the global Quicklisp or any other project-local Quicklisps. This gives 100% certainty about where libraries were loaded from.
  4. When the project local Quicklisp is loaded you use it exactly like you would the global one. That is, QL:quickload and not QLOT:quickload, nor do you use any other Qlot wrappers.
  5. Qlot does not need to be present in your lisp image at all.
  6. Loading Qlot inside your current REPL doesn't work well because:

    1. The Qlot REPL API is still in development.
    2. Doing this loads Qlot from the global Quicklisp instead of the project local one. Distribution version mismatches between the two Quicklisps could trigger issues with Qlot itself. It also voids the certainty of library location.
  7. Having Qlot as a standalone executable outside of your lisp image puts it on the same plane as other tools used for development such as make or git. It then doesn't matter which implementation it prefers because it doesn't affect your choices.
  8. Qlot has gained the ability to bundle libraries in case you want to go the vendoring route.
  9. Roswell is not needed for any of the above.

New workflow

Combining my requirements and my new understanding of Qlot, I modified my workflow for pinning library versions to be:

  1. The Qlot executable must be available in your path.
  2. Specify the distribution and library versions in qlfile (a sketch follows this list).
  3. qlot install at the CLI.
  4. Load lisp without executing init scripts, e.g. ccl -n.
  5. Load Quicklisp from the project local installation.

    (load "PROJECT-PATH/.qlot/setup.lisp")

  6. Use Quicklisp as before.

    (ql:quickload :alexandria)
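For step 2, a minimal qlfile might look like the following; the dist date, library, and ref are placeholders (check the Qlot documentation for the full source syntax):

ql :all 2026-01-15
github clack fukamachi/clack :ref a1b2c3d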

If you would like to vendor your libraries then:

  1. Specify the distribution and library versions in qlfile.
  2. qlot install
  3. qlot bundle
  4. git commit
  5. ccl
  6. (load "PROJECT-PATH/.bundle-libs/setup.lisp")

Qlot tasks

All Qlot-related tasks, such as initialising a project, installing libraries, or upgrading libraries, must be performed at the CLI using the Qlot executable. These happen relatively infrequently, and inside the REPL Qlot does not feature at all.
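My mental map of those CLI tasks, assuming a recent Qlot (subcommand names may differ between versions; see qlot --help):

qlot init      # set up Qlot files in the current project
qlot install   # install exactly what qlfile/qlfile.lock specify into .qlot/
qlot update    # upgrade libraries and rewrite qlfile.lock
qlot bundle    # vendor the installed libraries for committing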

12 Apr 2026 12:00am GMT

08 Apr 2026

Planet Lisp

Tim Bradshaw: Rules for Lisp programs

Some very serious rules. Very serious.

The essential rule. If you are not building languages in Lisp why are you even here?

The lesser rules.

  1. If you write a program which uses defclass you are probably making a mistake.
  2. If you write a program which uses the CLOS MOP you are making a mistake.
  3. If you write a program which uses LOOP for any purpose other than creating a better iteration construct you are making a mistake.
  4. If you write a program which uses LOOP only to create a better iteration construct you are probably making a mistake.
  5. If you write a program which uses explicit package-qualified names more than very infrequently you will be cast into the outer darkness along with your program.

I will not be taking questions.

08 Apr 2026 10:48am GMT

29 Jan 2026

FOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

29 Jan 2026 11:00pm GMT

26 Jan 2026

FOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…

26 Jan 2026 11:00pm GMT