18 Apr 2026
Planet Grep
Lionel Dricot: Having Nothing to Think

Having Nothing to Think
After "I have nothing to hide", here comes the era of "I have nothing to think"
Taking us for idiots because it works
Google announces it: there are more people in the world with an Android smartphone than people with access to clean water and sewage.
This implies, still according to Google, that these people need more AI.
No, seriously, I'm not kidding. This is really what Google's people go and tell universities, at events that look a bit like what cigarette vendors might organize in sports clubs to train the young to smoke by offering a year of free cigarettes.
And they drive the point home: in any case, nobody gets to choose whether or not to use AI. That's just how it is. Exactly what Anthropic said: "Whether you like it or not, get ready for this stupid world!"
My cigarette-vendor example may seem exaggerated, but I have just witnessed, in my university town of Louvain-La-Neuve, a competition that consisted of running around the lake while drinking four 33 cl beers. The race was sponsored by… a beer brand, of course. The university seems to have given the event its blessing, and plenty of students are naive enough to find it cool…
I'm quite naive myself. I used to believe that people were mostly morally "good": they often have a negative impact when working to maximize a company's profit, but only because they don't realize it.
But that's wrong. We now know that people like Mark Zuckerberg are quite simply morally inhuman, and that everyone involved knows very well what they are doing and why they are doing it. Meta's products are specifically tweaked to make teenagers as addicted as possible, to disrupt them during their school years. That is not a side effect; it is the product's primary goal. The incessant distraction is not an unintended consequence; it is literally what Facebook's engineers are trying to achieve.
And to think that most teachers are in "We have to live with it, we have to learn to use it reasonably" mode.
No. That is wrong, and it is completely stupid. It's like giving teenagers training sessions, sponsored by Philip Morris, where they would learn to smoke "without inhaling". Or telling them it's cool to run while drinking more beers than your stomach can handle.
The truth is that most teachers are completely addicted to their smartphones, and that it is more reassuring to teach your addiction as something positive than to question yourself.
Advertising takes us for idiots. It takes politicians for idiots. And, experimentally speaking, it is quite right: we are! It works even better than expected because, as a result, we go on to prove them right and support the very people who are making fools of us!
Look at the GDPR and the cookie banners that annoy everyone and for which "Europe" gets the blame.
Contrary to popular belief, the annoying cookie banners on websites are not the GDPR's fault. In fact, in the vast majority of cases, these banners are illegal. Gee explains it very well in a comic strip:
But there is worse: if these banners are annoying, it is because they were explicitly designed to be. Yes, in order to lower public support for the GDPR. It is pure, deliberate, conscious political manipulation by the advertising industry. They know exactly what they are doing: making our lives miserable to discredit political institutions so they can push more ads on us.
The end of intellectualism
An important article on the return to orality and the decline of reading. Orality means emotion instead of information, charisma instead of truth, manipulation instead of rationality. It also means the disappearance of long-term effort.
This sounds alarmist but, factually, when scientific researchers, supposed to represent the world's intellectual elite, are reduced to generating papers that cite papers that do not exist, it does raise some questions.
Yes, it is the end of the world. The end of a world, at least!
But ChatGPT is only the cherry on the cake. The real reason is that we have been devaluing intellectual life for decades. We celebrate the CEO who makes random decisions in five minutes. We ask everyone to dig holes and fill them back in to "keep the economy going". We live in a world where Julius climbs the corporate ladder!
In short, we are merely leading the world toward its most logical destination given the indicators we use to optimize it. It is entirely normal. It is entirely expected. We will never reduce CO₂ emissions as long as we try to maximize a country's GDP. Keeping the economy going means maximizing work, and therefore consuming as many joules as possible. Joules that have to be produced by emitting CO₂. So-called "renewable" energies are merely a way of emitting "less CO₂ per joule". Which is a good thing in itself, but does not solve the underlying problem: we are actively trying to consume as many joules as possible. The outcome of the success of renewables is obvious, by the way: we simply consume more joules.
We are living through the end of intellectualism just as we lived through the end of privacy. No, it is not really the end. It is just that intellectualism, like privacy before it, has lost its status as a fundamental value and become an underground thing, valued only by a few circles increasingly regarded as marginal, including, above all, within the most prestigious academic institutions.
"I have nothing to hide" has subtly turned into "I have nothing to think".
From smartphones to ChatGPT by way of streaming series, the tech giants have joined forces to convince us to stop thinking: thinking is has-been, it is tiring, it is useless. You don't need a PhD in political science to understand that this suits a lot of people.
My defense: the library effect
Chatbots, at bottom, only increase the availability of information, including false information. That availability reduces cognitive engagement, and therefore the development of the brain. This effect was already visible and studied in 2011 as "the Google effect": if we know a piece of information is available online, we no longer try to remember it, we look it up (how many times have you grabbed your phone because you couldn't remember the name of an actor in a film?).
The amusing thing is that, well before reading these studies, I had instinctively adopted the opposite stance a few years ago. I refuse to look up a piece of information immediately. My motivation was to avoid interrupting an ongoing conversation (I also discourage whoever I'm talking to from pulling out their phone) or interrupting my ongoing work (I know myself: if I look up the info, thirty minutes later I'll be deep in the Wikipedia page on the biography of Henri IV or on a rare species of jellyfish in New Caledonia).
You could argue that the same applies to a library. But I see fundamental differences.
First, there is the physical component: when I look for information in a book, I move around, I search through a shelf. My brain associates the movement with the memorization. My library may be fluid and ever-shifting, but it keeps a structure. Over time, remembering a piece of information comes down to remembering the movements needed to go and fetch the book.
Second, the information in books is stable and fixed. It may be wrong, but I know it was not generated to improve the book's SEO or to harvest likes. It does not suddenly turn into a 404 error.
This stability reassures my brain. It is no longer stuck in "perception", the attempt to make sense of a changing environment, which is a source of stress. It is instead on familiar ground and can afford to extrapolate, to imagine, to make unexpected connections.
In short, I give my brain the chance to be creative; I offer it a stable space where it can experience movement and change in what it creates: words, stories. It is no accident that I only write on a typewriter or from my terminal, in an editor that has barely changed in 40 years (Vim). I want to free up mental space to create and to think.
If you have ever gone to a library just to enjoy the quiet and think, you know exactly what I mean.
In short, I'm an old-fashioned technopunk… But you already knew that!
About the author:
I'm Ploum and I have just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
18 Apr 2026 2:06am GMT
Lionel Dricot: The First Madame Pipi of Deep Space

The First Madame Pipi of Deep Space
Besides Eddy Merckx's first Tour de France victory, 1969 was a year marked by three very important events.
1. The first man on the Moon
2. The invention of the UNIX system
3. The birth of Linus Torvalds.
Two of these events still have a daily impact on your life today.
As a teenager, I was fascinated by the footage of Neil Armstrong climbing down the ladder of the lunar module and taking a few steps. I tried to imagine what I would have felt had I lived through that moment. I even hoped to go to space myself one day.
Twenty years ago, I followed the discovery of Titan by the Huygens probe almost live.
Today, for the first time since 1972, four humans have left low Earth orbit and are on their way to the Moon. This should be incredibly exciting. But as Kevin Boone puts it very well, nobody seems to care.
Kevin offers several explanations: the climate catastrophe and the wars make us far less enthusiastic about technology. But, above all, our attention is too fragmented for us to grasp the feat, to take an interest in it.
Among the many crimes that can be attributed to Google and the other tech giants, perhaps the worst is that they've created a world in which a Moon landing is unexciting.
Still, four humans are going to circle the Moon for the first time in my lifetime. Three men and one woman, Christina Koch, who is therefore already the woman who has traveled farthest from Earth in the history of humanity.
And guess what Christina's role is on board, given that she is the most experienced astronaut of the four?
You'll never guess!
She is in charge of the toilets!
I'm not making this up; I read it on Wikipedia, in the "Mission Specialist 1" section.
Christina Koch is thus the first Madame Pipi, as we call a restroom attendant in Belgium, of deep space!
It sounds terribly sexist but, in reality, toilets really are critical in space. The astronauts of the Apollo missions defecated in plastic bags of dubious sealing, and floating turds were not rare. I seem to recall reading that a case of diarrhea nearly caused one of the missions to be aborted, because it got everywhere.
In "Stagiaire au spatioport Omega 3000", I joked about the fact that female astronauts were not prepared to leave the responsibility of being Madame Pipi to a man.
We will see whether, like my hero Nathan Pasavan, Christina Koch receives on landing the emblematic pink dust coat and the coin saucer, the historic insignia of this honorary function…
In short, while they circle up there, I invite you to (re)read that story and all the others in the collection, including "Les filons chocolatifères de la Lune", which also takes place on our satellite.
- Presentation of Stagiaire au spatioport Omega 3000 (ploum.net)
- Order Stagiaire au spatioport Omega 3000 (pvh-editions.com)
18 Apr 2026 2:06am GMT
Jeroen De Dauw: Boost Your Productivity With AI
My productivity has increased by at least 300% with AI assistance. You can get amazing results nowadays. If you use the tools right. Discover 4 key ingredients that make the tools work for you in this post.
Many people have only tried free AI via ChatGPT or similar web chatbots. It's easy to dismiss those tools, since they lack all 4 ingredients.
1. Context. If I ask you how I could improve my business, you won't be able to provide a good answer. You don't know all the details about my business that matter. All you can do is offer generic advice or make guesses (hallucinations). It's the same with AI tools. Don't rely on the knowledge baked into the LLM base models. Either provide this knowledge, or provide the tools to obtain that knowledge. You can include "all relevant knowledge" in the prompt, but this is labor-intensive. This is why you want an agentic tool.
2. Agentic tools. I've been using Claude Code, a CLI tool that provides agentic AI for knowledge tasks (not just coding). There is also Claude Cowork, a desktop tool, and alternatives from vendors besides Anthropic. These tools use a loop in which the AI determines whether it needs more information and then goes looking for it. You can give these tools a task or a question, and they will, if called for, run hundreds of searches and commands. They can look at your documents, codebases, and web resources. Tell these tools "Fix GitHub issue $link", and they'll look at the issue, anything referenced in the issue, and your codebase, make changes, run tests, make more changes, check the results via the browser, fix some final issues, create a draft pull request, and provide you with a summary of what was done and possible next steps.
3. Feedback harness. When writing code, you often don't get everything correct the first time. Which is why automated tests are great. More generally, fast feedback loops are great, regardless of whether you're doing software development. For software development, you'll get much better results if the AI tools can actually run the code and run tests and other CI tools to verify everything is correct.
4. Model. AI capabilities are increasing at an incredible pace. If you're using the latest models, your experience will be worlds apart from those using 2-year-old models. For maximum quality, there are 3 metrics to max out: model size/capability, model version, and effort parameters. In other words, use the latest version of the biggest model with "max effort". At the time of writing this post, that is Claude Opus 4.7 with max effort when using Anthropic, or GPT-5.4 Pro with heavy thinking when using OpenAI. These settings eat tokens, so you will quickly run into the subscription limits of the basic tiers. Then again, paying 200 USD a month for the higher tiers so you can 5x your productivity is quite the bargain.
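The agentic loop from point 2 can be sketched in a few lines. This is a hypothetical toy, not how Claude Code is actually implemented; the `decide` function stands in for the model, and `grep` for a tool such as code search:

```python
# Toy sketch of an agentic loop: the "model" repeatedly decides whether it
# needs more information, gathers it with a tool, and only answers once it
# has enough context. All names here are illustrative.

def run_agent(task, decide, tools, max_steps=10):
    """decide(task, context) returns either ("answer", text)
    or ("tool", tool_name, argument)."""
    context = []
    for _ in range(max_steps):
        action = decide(task, context)
        if action[0] == "answer":
            return action[1]
        _, tool_name, arg = action
        # Run the requested tool (e.g. a search or a shell command)
        # and feed the observation back into the context.
        context.append(tools[tool_name](arg))
    return "gave up after too many steps"

# Stand-in "model": look something up in the codebase before answering.
def decide(task, context):
    if not context:
        return ("tool", "grep", "TODO")
    return ("answer", f"Found {len(context)} observation(s) for: {task}")

tools = {"grep": lambda pattern: f"3 matches for '{pattern}'"}
print(run_agent("Fix GitHub issue #42", decide, tools))
```

Real tools run this loop with an LLM in place of `decide` and with file, shell, and web tools, but the control flow is this simple: act, observe, repeat.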
Those 4 points provide a conceptual framework. There is more to learn, and the AI space is evolving quickly. Ask your favorite AI tool how you can improve your AI workflows, starting from this post, to get specifics.
Some more tips:
- Know how to use CLAUDE.md / AGENTS.md
- Create sandboxed environments so you can let agents run autonomously for longer periods of time
- Mind current models' tendency toward sycophancy when prompting. If you ask, "Here is my idea, is it good?", LLMs will often say yes even when there are issues. Adding "Be brutally honest" to your prompt or CLAUDE.md helps. It takes some practice to build up an understanding of how to prompt and of the ways in which responses should be distrusted. As a starting point, treat current LLMs as overeager sycophantic juniors with an inhuman, jagged skill profile, who tirelessly work at superhuman speeds.
- Claude Code (or similar) plus local files in a text format works well. I've been enjoying Obsidian + Claude Code for personal knowledge management.
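To make the CLAUDE.md and sycophancy tips concrete, here is a hypothetical fragment (the project details and section names are invented; there is no prescribed schema, the file is just instructions the agent reads at startup):

```markdown
# CLAUDE.md

## Project context
- This repository is a TypeScript web service; the API lives in src/api/.
- Run `npm test` after every change and report failures verbatim.

## Working style
- Be brutally honest: point out flaws in my ideas instead of agreeing.
- When unsure, say so and list what extra information you need.
```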
You can stay up to speed with AI capability developments via Don't Worry About the Vase and Astral Codex Ten, both of which I highly recommend.
Shameless plug: my company provides an AI Assistant for MediaWiki, giving you AI capabilities on top of collaborative knowledge management, ideal for organizations.
The post Boost Your Productivity With AI appeared first on Blog of Jeroen De Dauw.
18 Apr 2026 2:06am GMT
Jan De Luyck: A new theme
My static blog to-do list kept growing, so I decided to do something about it.
I've switched from Minimal Mistakes to Chirpy as a theme, because it offers built-in dark/light theme support and has share-to-Mastodon functionality. At the same time I've done some spring cleaning in my posts - moved some really old stuff off to the side.
In the end it was more work than I had anticipated:
- Re-categorising everything since I had conflated tags and categories with Minimal Mistakes - it is a lot more forgiving and you can add as many categories as you want, but Chirpy doesn't work that way
- Figuring out how to add a custom font to have my Codeberg icon
- Reworking the feed.xml file so the output is more like what Minimal Mistakes generates, to avoid RSS readers marking the old posts as new
- Cleaning out the front matter, which contained left-overs from my WordPress to Jekyll migration
- Adjusting the markdown heading level so it shows up properly in the table of contents
- Linted everything using rumdl
- Implemented some of Chirpy's quality-of-life viewing features
- Replaced a bunch of dead links with links to the Internet Archive Wayback Machine
- Adjusted a bunch of images to be usable in a dark-theme environment
- Probably some other stuff that I forgot about
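For illustration, the re-categorising step boils down to a front-matter change along these lines (the post title and values here are invented; as I understand it, Chirpy expects at most two hierarchical categories plus free-form tags, while Minimal Mistakes tolerated any number of categories):

```yaml
# Before, with Minimal Mistakes (any number of categories went):
---
title: "An invented example post"
categories: [homelab, virtualization, linux, sysadmin]

# After, for Chirpy (up to two hierarchical categories, the rest as tags):
---
title: "An invented example post"
categories: [Homelab, Virtualization]
tags: [linux, sysadmin]
```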
I'm happy with the end result. If you're reading this via RSS, you shouldn't notice much ;)
18 Apr 2026 2:06am GMT
Frederic Descamps: What Our Survey Says About MariaDB Preview Releases
Preview releases are among the clearest ways an open-source community can shape the future of a database before it becomes a production reality. They give users early access to new features, a chance to validate upgrade paths, and an opportunity to catch issues while the change is still inexpensive. In our recent survey, we asked […]
18 Apr 2026 2:06am GMT
Frederic Descamps: MariaDB observability – results from the poll: the community has clearly chosen its default stack
Before I share my takeaway from this MariaDB observability poll, I would like to thank all participants and highlight that these recent polls are very popular, and your participation makes us happy. That said, we recently asked the MariaDB community the following question: Which observability tools do you use for MariaDB? I like polls like […]
18 Apr 2026 2:06am GMT
Frederic Descamps: MariaDB Keeps Climbing: Community, Adoption, and Momentum
If you've been around the MariaDB community for a while, you can probably feel it already: things are moving in the right direction. And no, I'm not talking about one vanity metric, one lucky spike, or one noisy social post. I'm talking about a broader trend. The latest Adoption Index data shows something I really […]
18 Apr 2026 2:06am GMT
Frederic Descamps: Know a MariaDB champion? Submit a nomination
One of the things I really like about open source is that a project is never only about the software. Yes, code is important. Very important. But a project like MariaDB exists and grows because of people. People who contribute code, of course, but also people who help users, review bugs, write blog posts, speak […]
18 Apr 2026 2:06am GMT
Frederic Descamps: A response to Percona’s 2026 MySQL ecosystem benchmark: useful data, but not a realistic MariaDB comparison
Percona's new 2026 benchmark report is interesting because it puts several MySQL-family releases on the same graphs and shares a public repository for the test harness. That openness is welcome. But after reading both the article and the published scripts, I do not think the post supports broad conclusions about "ecosystem performance," and I especially […]
18 Apr 2026 2:06am GMT
Frank Goossens: Selah Sue via Vicky Canals Beatles X Radiohead
Back in the sixties Paul McCartney wrote and recorded Blackbird. The song was partly based on a Bach piece, features great guitar playing, and talks about hope, empowerment and freedom. Unrelated, Thom Yorke, suffering from post-"OK Computer" depression, wrote "Everything in Its Right Place" on piano and recorded it with Radiohead in 1999, the song initiating their breakout of the…
18 Apr 2026 2:06am GMT
Frank Goossens: The new Harstad "Onder de kasseien, het strand" is almost here, but not quite yet
I'm not much of a list person, but if I had to give a top 3 of books under threat of torture, "Max, Mischa & het Tet-offensief" would certainly be on it. A new novel by author Johan Harstad already appeared in Norwegian in 2024 under the title "Under brosteinen, stranden!", and according to usually well-informed sources (I emailed the publisher) in the autumn of 2026 the…
18 Apr 2026 2:06am GMT
Dries Buytaert: What does 'Buy European' even mean?
This post was co-authored with Nicholas Gates, senior policy advisor at OpenForum Europe. It was originally published on EUobserver, an independent online newspaper widely read by EU policymakers, journalists and advocacy groups. The article summarizes a series of posts I've been writing about digital sovereignty.
European digital assets have a habit of not staying European - a problem current discussions about sovereignty are overlooking.
For example, Skype had Swedish and Danish founders, Estonian engineers, a Luxembourg headquarters, and proprietary code.
Every sovereignty credential was correct on the day it would have been assessed - and meaningless after eBay acquired it, Microsoft bought it, and eventually shut it down in 2025.
This speaks to a core tension at the heart of Europe's digital sovereignty moment. The real story has to do with licensing, dependencies, and supply chains more than it has to do with ownership or operational control - both of which can (and often do) change in Europe.
The current conception of cloud sovereignty asks the right questions about where data is stored, where companies are headquartered, and whether supply chains are European.
What it doesn't yet ask is whether the sovereignty being assessed is durable and resilient - for example, whether it will survive a change of ownership, a corporate acquisition, or a disruption in the infrastructure the software depends on.
The European Commission's Cloud Sovereignty Framework provides a non-legislative assessment tool designed to evaluate the digital independence of cloud services in Europe.
It enables public authorities to rank services based on factors such as immunity from non-EU laws, operational control, and data protection.
The forthcoming Cloud and AI Development Act (CAIDA) - expected at the end of May - will possibly go further.
That said, while both are serious and welcome efforts, they are likely to solve only part of the problem.
'Buy European' is a fragile concept
Europe's 'Buy European' strategy is being built on two fragile foundations it hasn't yet explicitly addressed, and this could have disastrous implications in the cloud domain in particular.
Proprietary software with a perfect sovereignty score today is one acquisition away from a different answer tomorrow. Open Source software means the question doesn't arise.
The legal right to fork changes the power dynamic entirely: it gives you leverage, lets a community step in, and means the technology cannot be held hostage.
This is the distinction the Cloud Sovereignty Framework currently misses.
When Oracle acquired Sun Microsystems in 2010, governments running MySQL faced an immediate question: what happens to this software now?
The answer turned on one thing - the licence. Because MySQL was GPL-licensed, the right to fork and maintain it independently was already being exercised before the acquisition even completed.
MySQL's creator, Monty Widenius, forked it in 2009 precisely because he saw the acquisition coming - that fork exists today as MariaDB. The licence didn't prevent Oracle from buying Sun. It meant the acquisition couldn't end the software, and anyone paying attention could act on that right before any harm materialised.
Getting the licence right is necessary, but it is not sufficient.
In 2024, a conflict between WordPress co-founder Matt Mullenweg and WP Engine disrupted updates for millions of websites.
The code was Open Source. The delivery infrastructure had a single point of control. Most programming languages rely on a single central package registry, and most of those registries are controlled by US companies.
In 2019, GitHub restricted access for developers in sanctioned countries; since GitHub also owns npm, the JavaScript ecosystem's delivery infrastructure became subject to the same trade controls. These aren't interchangeable download sites you can swap out.
Sovereign software on fragile infrastructure is not sovereign. It is software waiting for a supply chain to break.
Both fragility problems point to the same conclusion: a 'Buy European' label is not a sovereignty guarantee unless it embraces licensing as a tool and helps to safeguard the supply chains the software depends on.
Consider two scenarios. A government running proprietary software on a European cloud has jurisdiction, but no exit if the provider is acquired - replacing the software could take years.
A government running Open Source software on Amazon Web Services (AWS) in Europe can move the same software to a European provider whenever it wants. Neither is ideal, but they are not equal.
Europe's sovereignty frameworks need to internalise this asymmetry. Structural sovereignty - the kind that survives change - requires open foundations that flow from licensing through the critical supply chains on which that software depends.
A call-to-action for the Cloud and AI Development Act
CAIDA should not make the same mistakes as the Cloud Sovereignty Framework. It would be a mistake to simply extend a 'Buy European' checklist. The legislation should instead define what makes sovereignty durable.
Two concrete steps would make an immediate difference.
First, it should make Open Source licensing a pass/fail gate for mission-critical procurement under the Cloud Sovereignty Framework - a condition of eligibility at the highest assurance levels, not a weighted factor in a composite score.
Second, it should require supply chain resilience assessments that distinguish between dependencies switchable in weeks and those that would take an entire language community years to replicate, with federated or mirrored European alternatives required where no fallback exists.
Yes, requiring Open Source for mission-critical systems narrows the field in the short term.
But the providers you lose are the ones whose sovereignty credentials don't survive change.
In the longer term, these requirements push European companies toward Open Source software - technology that no one can take away.
18 Apr 2026 2:06am GMT
Dries Buytaert: The Sovereignty Prerequisite

Procurement frameworks aren't the most exciting topic. But the European Commission is about to propose the Cloud and AI Development Act (CADA), and how it treats Open Source will affect every Open Source project and Open Source business operating in Europe. This is one of those moments where the details matter.
Last month, I proposed a Software Sovereignty Scale that grades software from A to E based on how easily your rights can be taken away. My core argument: if you want sovereignty that lasts, Open Source matters more than buying European proprietary software.
I submitted the Software Sovereignty Scale as feedback to the European Commission, recommending that Open Source carry more weight in the Cloud Sovereignty Framework, the tool EU institutions like the Commission and Parliament use to evaluate cloud providers when purchasing cloud services for their own operations.
The Cloud Sovereignty Framework only applies to how EU institutions buy their own cloud services. The Cloud and AI Development Act, which is expected to build on its approach, would set rules for the entire EU cloud market, across all 27 member states. The difference in scale is enormous, and the time to get this right is now.
My original recommendation was to give Open Source more weight in the Cloud Sovereignty Framework's scoring. I've since realized that isn't enough. Licensing shouldn't be in the sovereignty score at all. It should be a prerequisite.
Open Source is not a rounding error
The Cloud Sovereignty Framework evaluates providers across eight sovereignty objectives, each weighted into a composite score, as shown in the screenshot below. Contracting authorities use that score to rank and compare providers when selecting software and cloud services.
Screenshot of how the European Commission computes its composite sovereignty score. Technology Sovereignty (SOV-6), which covers open licensing, accounts for 15% of the total. Source: Cloud Sovereignty Framework, version 1.2.1, October 2025.
Technology Sovereignty (SOV-6), the objective that covers Open Source, accounts for 15% of the total. Within it, open licensing is one of four contributing factors. That means software being Open Source can contribute roughly 4% to a provider's final sovereignty score.
Does that feel right to you? The one thing that guarantees sovereignty long-term is worth ~4%.
A framework designed to measure sovereignty treats the one factor that makes sovereignty permanent as a rounding error. I could argue the percentage should be higher, or that Open Source supports other objectives, but even at 40%, licensing would still be in the wrong place.
Licensing is fundamentally different from every other objective in the framework. Skype checked every sovereignty box until eBay acquired it in 2005. Every credential was valid before the acquisition and meaningless after.
Had Skype been Open Source, no one could have taken the code away. You would still retain the right to use, modify, and fork it regardless of who acquired the company. That right is permanent, but a European headquarters is not.
That makes licensing a prerequisite, not something to average into a score. Scores compare trade-offs. Prerequisites define what is non-negotiable.
The gate already exists
Beyond the composite score, the framework defines Sovereign Effectiveness Assurance Levels, or SEAL levels. These range from SEAL-0 (no sovereignty at all) to SEAL-4 (full EU control with no critical non-EU dependencies).
For each of the eight sovereignty objectives, the contracting authority sets a minimum SEAL level. Any provider that falls below the minimum is rejected outright. These minimums work as pass/fail gates.
My proposal: licensing belongs in the gate, not in the score. Make Open Source a minimum requirement for the highest SEAL levels.
The Software Sovereignty Scale could map onto SEAL levels like this:
| SEAL level | Framework definition | Proposed licensing gate | What it means in practice |
|---|---|---|---|
| SEAL-3 or above | Digital Resilience / Full Digital Sovereignty | Grade A, B, or C (Open Source) | Software can be forked and maintained independently. Sovereignty survives acquisition. |
| SEAL-2 | Data Sovereignty | Grade D or above (including European proprietary software) | European jurisdiction, but structurally vulnerable to acquisition or relicensing. |
| SEAL-1 | Jurisdictional Sovereignty | No licensing gate | Minimal sovereignty assurance. |
Under this proposal, mission-critical software with high switching costs would require a minimum of SEAL-3, making Open Source a requirement. For lower-risk procurement where the software is easy to replace, SEAL-2 would allow proprietary providers to compete.
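The difference between averaging licensing into a score and gating on it can be sketched in a few lines. The weights, the provider data, and the second objective name below are illustrative, not taken from the framework:

```python
# Sketch of a pass/fail gate applied before a weighted composite score.
# All weights, levels, and provider data here are illustrative.

def evaluate(provider, gates, weights):
    """Reject outright if any objective is below its minimum SEAL level;
    otherwise return a weighted composite score."""
    for objective, minimum in gates.items():
        if provider["seal"][objective] < minimum:
            return None  # fails the gate: no score is computed at all
    return sum(weights[o] * provider["scores"][o] for o in weights)

# Mission-critical procurement: SOV-6 (licensing) gated at SEAL-3.
gates = {"SOV-6": 3}
weights = {"SOV-6": 0.15, "SOV-1": 0.85}  # illustrative weights only

proprietary = {"seal": {"SOV-6": 2}, "scores": {"SOV-6": 0.5, "SOV-1": 0.9}}
open_source = {"seal": {"SOV-6": 3}, "scores": {"SOV-6": 1.0, "SOV-1": 0.7}}

print(evaluate(proprietary, gates, weights))  # None: rejected at the gate
print(evaluate(open_source, gates, weights))  # 0.745
```

The point of the sketch is that a high composite score cannot rescue a provider that fails the licensing gate, which is exactly what a prerequisite means.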
Won't this exclude many proprietary providers? Yes, it would. But we have to be honest: proprietary software doesn't give you sovereignty that lasts.
I support the push to buy homegrown technology ("Buy European"). It keeps investment in Europe. But it doesn't solve the underlying problem.
Which government is sovereign?
Consider two scenarios. In the first, a government runs proprietary software on a sovereign European cloud. The provider gets acquired by a non-EU company, and the government can't migrate without replacing the software entirely. It has jurisdiction but ultimately no control. It's not very sovereign.
In the second, a government runs Open Source software on Amazon Web Services (AWS), a US-owned cloud provider with data centers in Europe. If AWS becomes a problem because of the CLOUD Act, policy changes, or geopolitics, the government can move the same software to a European cloud provider. Switching cloud providers can be hard, but switching software is much harder.
It may seem counterintuitive, but the second government is in a stronger position. Open Source on a non-European cloud gives you more sovereignty than proprietary software on a European one, because you can always change the infrastructure. You can't fix the licensing.
This doesn't make the second scenario risk-free. The ideal solution would be Open Source on a sovereign European cloud.
People overestimate jurisdiction and underestimate licensing. Licensing is not one sovereignty factor among many. It's the sovereignty prerequisite.
Special thanks to Tiffany Farriss and Sachiko Muto for their review of this blog post.
18 Apr 2026 2:06am GMT
Dries Buytaert: State of Drupal presentation (March 2026)
This year, Drupal turned 25. DrupalCon Chicago felt like the right place to mark that milestone. My keynote was part celebration and part wake-up call. I talked about Drupal's foundations, how AI is putting pressure on them, and why I believe we can rebuild them stronger than before.
If you missed the keynote, you can watch the video below or download my slides (32.6 MB).
It will be interesting to rewatch this keynote in 10 years, when AI is fully mainstream and has reshaped how we work, including our agencies, our craft, and how we collaborate in Open Source. It feels like a snapshot of an industry in transition.
Site templates and the marketplace
About a year ago at DrupalCon Atlanta, I introduced the idea of site templates and a marketplace to go with them. By DrupalCon Vienna, we had one site template, but no marketplace.
In Chicago, I showed eleven site templates available in a basic marketplace at marketplace.drupal.org. All eleven can be installed directly from the Drupal CMS installer.
AI for site building
For more than 20 years, Drupal's ecosystem has rested on a stable triangle: the platform itself, digital agencies who bring Drupal into the real world, and the community that builds and maintains it. That triangle has proven remarkably resilient through many waves of new technologies.
But what happens when AI disrupts all three sides at the same time? In my keynote, I showed how Drupal is responding.
I started by showing a demo of a workflow I believe will become common for Drupal agencies. You quickly prototype a website with AI, then turn it into a Drupal site with the help of AI and a skilled developer, all within hours.
AI gets you to a prototype fast. Drupal gives it the foundations that last.
I believe Drupal has a unique advantage in this new world. Organizations will always need real workflows, permissions, security, scalability, integrations, compliance, and governance. Because Drupal has provided exactly that for decades, it is very well suited for AI-driven workflows.
The demo worked because Drupal CMS ships with Drupal Canvas, which includes both CLI tools and AI skills. But the real strength comes from Drupal's foundations: its APIs, reusable building blocks, and mature architecture, refined over 25 years. This is the accidental AI advantage I have written about before. This is what makes Drupal one of the best platforms for AI-driven development.

AI for content management
At DrupalCon Vienna, I introduced the Context Control Center as a rough prototype. Since then, we have added many features. It is now nearly production-ready.
The idea is straightforward: AI agents need good context to help manage tasks in Drupal. With the Context Control Center, teams define their brand voice, target audiences, key messages, product details, and editorial guidelines in one place. Then every AI agent on the site draws from this single source of truth. The result is that you create knowledge once, and scale it to all the pages and content on your website.
In my keynote, I showed two demos of the Context Control Center in action. First, Drupal's AI agents turned a simple marketing brief into a complete, on-brand page using Drupal Canvas, consulting the Context Control Center along the way. They followed brand rules, asked clarifying questions, generated structured data for search, and added cross-links.
Second, I showed a proof of concept for dynamic contexts, where the Context Control Center pulls in real-time data from Google Analytics to help improve content performance after publication.
Saying no to AI slop
AI is lowering the barrier to contribute to Open Source projects like Drupal. On paper, that sounds great. More contributors, more patches, more momentum.
But it can also be a real challenge. The volume of contributions is going up while the average quality is going down. The review burden falls on a small group of maintainers, and wading through low-quality code wastes their time. This creates asymmetric pressure on Open Source: generating a patch is now cheap, but reviewing one is not.
If you're using AI to contribute, you are responsible for what you submit: don't submit code you don't understand. Our quality standards matter, and we will uphold them.
Our craft always evolves

In my keynote, I also told the stories of two community members who embraced AI in a meaningful way.
Aidan Foster, who has been running Foster Interactive for 17 years, chose to go all in on the Drupal AI Initiative instead of staying on the sidelines. Together with his team, he is rebuilding the foundations of his agency to leverage AI and prepare for what is next.
And Jürgen Haas, a longtime contributor and creator of the ECA module, used AI to move at the speed of a team and make Drupal's ECA module much easier to use. In both cases, AI amplifies expertise. It does not replace it.
The world is being flooded with AI-generated average. Average is cheap now, but expertise remains hard-earned and valuable. This community has spent 25 years building it, and that is not something AI can replicate.

AI is the storm, and AI is the way through the storm. I said that first in Vienna. Six months later, I believe it more than ever. Not as a slogan, but as something I have watched happen. We need more people like Aidan and Jürgen. If you want to get involved, join us on Drupal Slack or attend DrupalCon Rotterdam this fall.
I want to extend my gratitude to everyone who contributed to making my presentation and demos a success. A special thank you to Adam G-H, Aidan Foster, ASH Sullivan, Christoph Breidert, Cristina Chumillas, Emma Horrell, Gábor Hojtsy, Gurwinder Antal, James Abrahams, Jurgen Haas, Kristen Pol, Lauri Timmanee, Marcus Johansson, Martin Anderson-Clutz, Pamela Barone, Scott Falconer, Tim Lehnen. Many others contributed indirectly to make this possible. If I've inadvertently omitted anyone, please reach out.
Dries Buytaert: Introducing headers.dev
My HTTP Header Analyzer started as a small tool on my blog six years ago. It makes HTTP headers visible and explains what they do. You give it a URL, it fetches the response headers, and it breaks down what is present, what is missing, and what is possibly misconfigured.
It has been used more than 5 million times, despite being buried at https://dri.es/headers. So last week I finally registered headers.dev and gave it a proper home.
While I was at it, I also audited the analyzer against OWASP's recommendations for HTTP headers. I found a few gaps worth fixing. A site could have a Content Security Policy that included unsafe-inline and unsafe-eval, and the analyzer would describe each directive without mentioning that those two keywords effectively disable XSS protection. Or you could set HSTS with preload but forget includeSubDomains, which means your preload submission gets silently rejected. These are the kinds of issues a human reviewer might miss but an automated tool should catch. I fixed those and more, so if you've used the analyzer before, your scores might look different now.
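Two of those checks can be sketched in a few lines of Python. This is a deliberate simplification (real CSP and HSTS parsing is stricter), and the helper names are mine, not the analyzer's:

```python
# Minimal versions of two header checks. Real-world parsing of these
# headers is stricter than the string matching used here.

def csp_warnings(csp: str) -> list[str]:
    """Flag CSP keywords that largely disable its XSS protection."""
    warnings = []
    for keyword in ("'unsafe-inline'", "'unsafe-eval'"):
        if keyword in csp:
            warnings.append(f"{keyword} largely defeats CSP's XSS protection")
    return warnings

def hsts_warnings(hsts: str) -> list[str]:
    """Flag a preload directive that the preload list would reject."""
    directives = {d.strip().lower() for d in hsts.split(";")}
    if "preload" in directives and "includesubdomains" not in directives:
        return ["preload without includeSubDomains: preload list will reject this"]
    return []

print(csp_warnings("script-src 'self' 'unsafe-inline'"))  # flags 'unsafe-inline'
print(hsts_warnings("max-age=31536000; preload"))         # flags missing directive
```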
The analyzer also learned about dozens of new headers. Speculation-Rules, for example, tells browsers to prerender pages a user is likely to visit next. Cache-Status replaces the patchwork of vendor-specific X-Cache headers with a single structured format that can describe multiple cache layers in one value. And Reporting-Endpoints is the modern replacement for Report-To, using a simpler key-value syntax for telling browsers where to send security violation reports.
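To make the Reporting-Endpoints syntax concrete, here is a naive parse of a hypothetical header value. The URLs are invented, and a real implementation should use a proper structured-field parser rather than string splitting:

```python
# Reporting-Endpoints is a comma-separated list of name="url" pairs,
# where Report-To took a JSON object. Example value is hypothetical.

reporting_endpoints = 'default="https://example.com/reports", csp="https://example.com/csp-reports"'

# Naive parse of the key-value form; structured-field parsing is stricter.
endpoints = {}
for item in reporting_endpoints.split(","):
    name, _, url = item.strip().partition("=")
    endpoints[name] = url.strip('"')

print(endpoints["csp"])  # https://example.com/csp-reports
```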
Try it at headers.dev. It now explains over 150 headers and catches misconfigurations that it used to miss. The Open Web is better when more people check their HTTP headers.
Dries Buytaert: Drupal 12 switches to Argon2id
Drupal 12 will hash passwords with Argon2id by default. It moves every Drupal site to what is now best practice for password storage, recommended by OWASP and aligned with NIST guidance.
Drupal is often used for security-sensitive and large-scale sites, so these kinds of changes matter.
Early versions of Drupal stored passwords as simple MD5 hashes, which is extremely weak by today's standards. Drupal 7 introduced a modified version of the phpass library using SHA-512 with multiple iterations and a salt, and Drupal 10 switched to bcrypt. Each jump was a response to attackers getting faster hardware, and this change continues that pattern.
When I first looked at this change, I wanted to understand what Argon2id actually does differently from bcrypt.
Its key advantage is that it is "memory hard". Each Argon2id hash requires far more memory to compute than a bcrypt hash, and the amount is configurable.
Modern GPUs can run many bcrypt computations in parallel because each one uses very little RAM. A GPU has a lot of total memory, but that memory is shared across thousands of parallel computations, so Argon2id's large per-hash memory requirement caps how many hashes can run at once. That makes attacks harder and more expensive to scale.
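A back-of-envelope calculation shows the scale of the difference. The figures below are rough orders of magnitude, not benchmarks: bcrypt's working state is on the order of 4 KiB, and OWASP's baseline Argon2id configuration uses about 19 MiB per hash:

```python
# Illustrative arithmetic only: how many hashes fit in GPU memory at once.
gpu_memory_kib = 16 * 1024 * 1024   # a hypothetical 16 GiB GPU
bcrypt_kib = 4                      # rough bcrypt working state
argon2id_kib = 19 * 1024            # ~19 MiB, OWASP baseline configuration

print(gpu_memory_kib // bcrypt_kib)    # ~4.2 million bcrypt lanes
print(gpu_memory_kib // argon2id_kib)  # ~860 Argon2id lanes
```

Even with these rough numbers, memory hardness cuts the attacker's parallelism by several orders of magnitude.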
The best security upgrades are the ones nobody has to think about. Once a site upgrades to Drupal 12, existing passwords will automatically be rehashed to Argon2id the next time each user logs in. And in the unlikely event that Argon2id is not available in a particular PHP installation, Drupal will fall back to bcrypt for compatibility.
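The rehash-on-login flow can be sketched in Python. The algorithms here are standard-library stand-ins (SHA-256 for the legacy hash, memory-hard scrypt for the upgrade) rather than Drupal's actual bcrypt and Argon2id, and the record structure is invented for illustration:

```python
# Rehash-on-login pattern with stdlib stand-ins. Drupal's real
# implementation differs; this shows only the upgrade flow.
import hashlib
import hmac
import os

def scrypt_hash(password: bytes, salt: bytes) -> bytes:
    """Memory-hard hash standing in for Argon2id."""
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

def login(stored: dict, password: bytes) -> bool:
    """Verify against whatever hash is stored; upgrade it transparently."""
    if stored["alg"] == "legacy":
        ok = hmac.compare_digest(stored["hash"], hashlib.sha256(password).digest())
        if ok:  # rehash with the stronger algorithm on successful login
            salt = os.urandom(16)
            stored.update(alg="scrypt", salt=salt, hash=scrypt_hash(password, salt))
        return ok
    return hmac.compare_digest(stored["hash"], scrypt_hash(password, stored["salt"]))

record = {"alg": "legacy", "hash": hashlib.sha256(b"hunter2").digest()}
assert login(record, b"hunter2")   # first login after the upgrade
assert record["alg"] == "scrypt"   # hash was silently upgraded
assert login(record, b"hunter2")   # later logins verify the new hash
```

The user never sees any of this, which is the whole point: the stored hash gets stronger the next time they type the password they already know.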
Many site owners never think about password hashing, so Drupal's defaults become their security policy. The people who benefit most from this change may never know it happened. It's why being "secure by default" matters so much.
Thanks to everyone who helped make this happen.