08 Feb 2026
Planet Grep
Staf Wagemakers: Moved my blog to [blog.wagemakers.be](https://blog.wagemakers.be)

If you follow my blog posts with an RSS reader, update the RSS feed to: https://blog.wagemakers.be/atom.xml
…if you want to continue to follow me, of course ;-)
I moved my blog from GitHub to my own hosting (powered by Procolix).
Procolix sponsored my hosting for 20 years, until I decided to start my company Mask27.dev.
One reason is that Microsoft seems to like to put "Copilot everywhere", including on repositories hosted on GitHub. While I don't dislike AI (artificial intelligence), and LLMs (Large Language Models) are a nice piece of technology, the security, privacy, and other issues are often overlooked or even just ignored.
The migration was a bit more complicated than usual, as nothing "is easy" ;-)
You'll find the pitfalls of moving my blog below, as they might be useful for somebody else (including the future me).
Html redirect
I use Jekyll to generate the web pages on my blog. I might switch to Hugo in the future.
While there are Jekyll plugins available to perform a redirect, I decided to keep it simple and added a meta refresh tag to _includes/head.html:
<meta http-equiv="refresh" content="0; url=https://blog.wagemakers.be/blog/2026/01/26/blog-wagemakers-be/" />
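A quick way to confirm that the old pages actually serve the redirect (the old GitHub Pages URL is the one rewritten by the sed script further down; only curl and grep are assumed):
# fetch the old front page and look for the meta refresh tag
curl -s https://stafwag.github.io/blog/ | grep -i 'http-equiv="refresh"'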
Hardcoded links
I had some hardcoded links for images, URLs, etc. in my blog posts.
I used the script below to update the links in my _posts directory.
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset

for file in *; do
  echo "... Processing file: ${file}"
  sed -i -e 's@https://stafwag.github.io/blog/blog/@https://blog.wagemakers.be/blog/@g' "${file}"
  sed -i -e 's@https://stafwag.github.io/blog/images/@https://blog.wagemakers.be/images/@g' "${file}"
  sed -i -e 's@(https://stafwag.github.io/blog)@(https://blog.wagemakers.be)@' "${file}"
done
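To double-check that no old links are left behind after running the script (a follow-up check, not part of the original script):
# list any remaining references to the old host; no output means the rewrite is complete
grep -rn 'stafwag.github.io' . || echo 'no old links left'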
Disqus
I use Disqus as the comment system on my blog. As the HTML pages got a proper redirect, I could ask Disqus to reindex the pages so the old comments became available again.
More information is available at: https://help.disqus.com/en/articles/1717126-redirect-crawler
Without a redirect, you can download the URLs as a CSV file, add the migration URLs to that CSV file, and upload it to Disqus. You can find more information in the link below.
https://help.disqus.com/en/articles/1717129-url-mapper
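For the URL mapper route, a one-liner like the following could generate the second column (a sketch only: it assumes the Disqus export is a file called urls.csv with one old URL per line; check the exact format against the Disqus documentation):
# append the new URL as a second, comma-separated column for the URL Mapper upload
sed -e 's@^https://stafwag.github.io/blog\(.*\)$@&, https://blog.wagemakers.be\1@' urls.csv > urls-mapped.csv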
RSS redirect
I didn't find a good way to redirect RSS feeds that RSS readers handle correctly.
If you know a good way to handle it, please let me know.
I tried to add an XML redirect as suggested at: https://www.rssboard.org/redirect-rss-feed. But this doesn't seem to work with the RSS readers I tested (NewsFlash, Akregator).
These are the steps I took.
HTML header
I added the following link tags to _includes/head.html:
<link rel="self" type="application/atom+xml" href="{{ site.url }}{{ site.baseurl }}/atom.xml" />
<link rel="alternate" type="application/atom+xml" title="Wagemakers Atom Feed" href="https://wagemakers.be/atom.xml">
<link rel="self" type="application/rss+xml" href="{{ site.url }}{{ site.baseurl }}/atom.xml" />
<link rel="alternate" type="application/rss+xml" title="Wagemakers Atom Feed" href="https://wagemakers.be/atom.xml">
Custom feed.xml
When I switched from Octopress to "plain Jekyll", I started to use the jekyll-feed plugin. But I still had the old RSS template from Octopress available, so I decided to use it to generate the atom.xml and feed.xml files referenced by the link rel="self" and link rel="alternate" directives.
Full code below or on GitHub: https://github.com/stafwag/blog/blob/gh-pages/feed.xml
---
layout: null
---
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title><![CDATA[stafwag Blog]]></title>
<link href="https://blog.wagemakers.be//atom.xml" rel="self"/>
<link rel="alternate" href="https://blog.wagemakers.be/atom.xml" />
<link href="https://blog.wagemakers.be"/>
<link rel="self" type="application/atom+xml" href="https://blog.wagemakers.be//atom.xml" />
<link rel="alternate" type="application/atom+xml" href="https://blog.wagemakers.be/atom.xml" />
<link rel="self" type="application/rss+xml" href="https://blog.wagemakers.be//atom.xml" />
<link rel="alternate" type="application/rss+xml" href="https://blog.wagemakers.be/atom.xml" />
<updated>2026-01-26T20:10:56+01:00</updated>
<id>https://blog.wagemakers.be</id>
<author>
<name><![CDATA[Staf Wagemakers]]></name>
</author>
<generator uri="http://octopress.org/">Octopress</generator>
{% for post in site.posts limit: 10000 %}
<entry>
<title type="html"><![CDATA[{% if site.titlecase %}{{ post.title | titlecase | cdata_escape }}{% else %}{{ post.title | cdata_escape }}{% endif %}]]></title>
<link href="{{ site.url }}{{ site.baseurl }}{{ post.url }}"/>
<updated></updated>
<id>https://blog.wagemakers.be/</id>
<content type="html"><![CDATA[]]></content>
</entry>
{% endfor %}
</feed>
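A quick sanity check after building the site, assuming the default Jekyll output directory _site and that xmllint (from libxml2) is installed:
bundle exec jekyll build
xmllint --noout _site/atom.xml && echo "atom.xml is well-formed"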
Notify users
I created this blog post to notify the users ;-)
Have fun!
Links
- https://help.disqus.com/en/articles/1717129-url-mapper
- https://help.disqus.com/en/articles/1717126-redirect-crawler
- https://www.heerentanna.com/blog/move-disqus-comment-old-url-new.html
- https://www.rssboard.org/redirect-rss-feed
08 Feb 2026 6:37am GMT
Mattias Geniar: Catching SQL performance issues in PHPUnit and Pest, as part of your test infrastructure
I've been on a bit of a SQL performance kick lately. Over at Oh Dear, I wrote a 3-part series about finding, fixing, and automatically detecting SQL performance issues. The last part of that series covers how we catch regressions before they hit production by running checks in our test suite.
08 Feb 2026 6:37am GMT
Lionel Dricot: The Disconnected Git Workflow

The Disconnected Git Workflow
Using git-send-email while being offline and with multiple email accounts
WARNING: the following is a technical reminder for my future self. If you don't use the "git" software, you can safely ignore this post.
The more I work with git-send-email, the less sufferable I find the GitHub interface.
Want to send a small patch to a GitHub project? You need to clone the repository, push your changes to your own branch, then ask for a pull request using the cumbersome web interface, replying to comments online while trying to avoid smileys.
With git send-email, I simply work offline, do my local commit, then:
git send-email HEAD^
And I'm done. I reply to comments by email, with Vim/Mutt. When the patch is accepted, getting a clean tree usually boils down to:
git pull
git rebase
Yay for git-send-email!
And, yes, I do that while offline and with multiple email accounts. That's one more reason to hate GitHub.
- How GitHub monopoly is destroying the open source ecosystem (ploum.net)
- We need to talk about your GitHub addiction (ploum.net)
One mail account for each git repository
The secret is not to configure email accounts in git but to use "msmtp" to send email. Msmtp is a really cool sendmail replacement.
In .msmtprc, you can configure multiple accounts with multiple options, including calling a command to get your password.
# account 1 - pro account
account work
host smtp.company.com
port 465
user login@company.com
from ploum@company.com
password SuPeRstr0ngP4ssw0rd
tls_starttls off

# personal account for FLOSS
account floss
host mail.provider.net
port 465
user ploum@mydomain.net
from ploum@mydomain.net
from ploum*@mydomain.net
passwordeval "cat ~/incredibly_encrypted_password.txt | rot13"
tls_starttls off
The important bit here is that you can set multiple "from" addresses for a given account, including a regexp to catch multiple aliases!
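To check which account msmtp will pick for a given sender, the --pretend option prints the resolved configuration without sending anything (the addresses below are the placeholder ones from the config above):
msmtp --pretend --from=ploum@mydomain.net recipient@example.com < /dev/null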
Now, we will ask git to automatically use the right msmtp account. In your global .gitconfig, set the following:
[sendemail]
    sendmailCmd = /usr/bin/msmtp --set-from-header=on
    envelopeSender = auto
The "envelopeSender" option will ensure that sendemail.from is used and given to msmtp as the "from" address. This might be redundant with "--set-from-header=on" in msmtp but, in my tests, having both was required. And, cherry on the cake, it automatically works for all accounts configured in .msmtprc.
Older git versions (< 2.33) don't have sendmailCmd and should do:
[sendemail]
    smtpserver = /usr/bin/msmtp
    smtpserveroption = --set-from-header=on
    envelopesender = auto
I usually stick to a "ploum-PROJECT@mydomain.net" address for each project I contribute to. This allows me to easily cut spam when needed. So far, the worst has been with a bug reported on the FreeBSD Bugzilla. The address used there (and nowhere else) has since been spammed to death.
In each git project, you need to do the following:
1. Set the email address used in your commit that will appear in "git log" (if different from the global one)
git config user.email "Ploum <ploum-PROJECT@mydomain.net>"
2. Set the email address that will be used to actually send the patch (could be different from the first one)
git config sendemail.from "Ploum <ploum-PROJECT@mydomain.net>"
3. Set the email address of the developer or the mailing list to which you want to contribute
git config sendemail.to project-devel@mailing-list.com
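Since these three commands are repeated for every project, a small wrapper could set them in one go (a hypothetical helper sketch following the address pattern above; adjust the domain and name to your own):
#!/bin/sh
# usage: ./git-project-mail.sh PROJECT project-devel@mailing-list.com
set -eu
PROJECT="$1"
LIST="$2"
git config user.email "Ploum <ploum-${PROJECT}@mydomain.net>"
git config sendemail.from "Ploum <ploum-${PROJECT}@mydomain.net>"
git config sendemail.to "${LIST}"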
Damn, I did a commit with the wrong user.email!
Yep, I always forget to change it when working on a new project or from a fresh git clone. Not a problem. Just use "git config" like above, then:
git commit --amend --reset-author
And that's it.
Working offline
I told you I mostly work offline. And, as you might expect, msmtp requires a working Internet connection to send an email.
But msmtp comes with three wonderful little scripts: msmtp-enqueue.sh, msmtp-listqueue.sh and msmtp-runqueue.sh.
The first one saves your email to be sent in ~/.msmtpqueue, with the sending options in a separate file. The second one lists the unsent emails, and the third one actually sends all the emails in the queue.
All you need to do is change the msmtp line in your global .gitconfig to call the msmtp-enqueue.sh script:
[sendemail]
sendmailcmd = /usr/libexec/msmtp/msmtpqueue/msmtp-enqueue.sh --set-from-header=on
envelopeSender = auto
In Debian, the scripts are available with the msmtp package. But the three are simple bash scripts that can be run from any path if your msmtp package doesn't provide them.
You can test by sending a mail, then check the ~/.msmtpqueue folder for the email itself (the .email file) and the related msmtp command line (the .msmtp file). Nearly every day I end up visiting this folder to quickly add missing information to an email or simply remove it from the queue altogether.
Of course, once connected, you need to remember to run:
/usr/libexec/msmtp/msmtpqueue/msmtp-runqueue.sh
If not connected, mails will not be sent and will be kept in the queue. This line is obviously part of my do_the_internet.sh script, along with "offpunk --sync".
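A minimal sketch of what such a do_the_internet.sh could look like (the actual script isn't shown here; offpunk --sync is the only other step mentioned):
#!/bin/sh
set -e
# flush the mail queue that was built up while offline
/usr/libexec/msmtp/msmtpqueue/msmtp-runqueue.sh
# sync offline browsing
offpunk --sync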
It is not only git!
If it works for git, it works for any mail client. I use neomutt with the following configuration to use msmtp-enqueue and reply to email using the address it was sent to.
set sendmail="/usr/libexec/msmtp/msmtpqueue/msmtp-enqueue.sh --set-from-header=on"
unset envelope_from_address
set use_envelope_from
set reverse_name
set from="ploum@mydomain.net"
alternates ploum[A-Za-z0-9]*@mydomain.net
Of course, the whole config is a little more complex to handle multiple accounts that are all stored locally in Maildir format through offlineimap and indexed with notmuch. But this is a bit out of the scope of this post.
At least, you get the idea, and you could probably adapt it to your own mail client.
Conclusion
Sure, it's a whole blog post just to get the config right. But there's nothing really out of this world. And once the setup is done, it is done for good. No need to adapt to every change in a clumsy web interface, no need to use your mouse. Simple command lines and simple git flow!
Sometimes, I work late at night. When finished, I close the lid of my laptop and call it a day without reconnecting my laptop. This allows me not to see anything new before going to bed. When this happens, queued mails are sent the next morning, when I run the first do_the_internet.sh of the day.
And it always brings a smile to my face to see those bits being sent while I've completely forgotten about them…
About the author
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
08 Feb 2026 6:37am GMT
Frederic Descamps: Native Password Legacy for 9.6
In the previous article, I shared a solution for people who want to try the latest and greatest MySQL version. We just released MySQL Innovation 9.6, and for those willing to test it with their old application and require the unsafe old authentication method, here are some RPMs of the legacy authentication plugin for EL/OL […]
08 Feb 2026 6:37am GMT
Frank Goossens: w.social invite code
So, regarding that new EU social network (which is said to be decentralized, though it's unclear if that implies ActivityPub, which would make it more relevant in my book): entering a string in the "invitation code" field and clicking "continue" does not result in an XHR request to the server, and there's a lot of JS on the page to handle the invitation code. This implies the code is checked in the browser, so the…
08 Feb 2026 6:37am GMT
Frank Goossens: Are there any Meshcore users in the Limburgse Maasvallei?
I blame the Fediverse, where for a few weeks (months?) now I've regularly seen posts about Meshcore as technology/software for decentralized ad-hoc networks carrying text-based messages over LoRa radio. So I bought myself a Sensecap T1000-e, flashed Meshcore onto it (from Chrome, just like that) and connected it to my Fairphone with the Meshcore app and… nothing.
08 Feb 2026 6:37am GMT
Frank Goossens: New (old) bicycle; going to gravel
I bought a second-hand gravel bike today because my race bike (Cube Agree GTC SL) is really not well suited to riding safely in icy/wet conditions. So happy to show off my new old Specialized Diverge, which I will happily also take off-road…
08 Feb 2026 6:37am GMT
Frank Goossens: As seen on YouTube; Angine de Poitrine live on KEXP
Angine de Poitrine live on KEXP. French-Canadian Dadaist instrumental rock-techno-jazz, maybe? A bit of Can, Primus, King Crimson and Sun Ra, and a whole lot of virtuoso drums and guitar loop-pedal juggling. If I could, I'd go straight for the moshpit, but it's just me in my home office, so… Some of the YouTube comments (not a cesspool, for once) are spot on; anyway, just look & listen…
08 Feb 2026 6:37am GMT
FOSDEM organizers: Present a lightning lightning talk
The same as last year: come and take part in a very rapid set of talks! Thought of a last-minute topic you want to share? Got your interesting talk rejected? Has something exciting happened in the last few weeks you want to talk about? Get that talk submitted to Lightning Lightning Talks! We have two sessions, one on Saturday and one on Sunday, for participants to speak about subjects which are interesting, amusing, or just something the FOSDEM audience would appreciate. Selected speakers line up and present in one continuous automated stream, with an SLO of 99% talk uptime. To submit your talk for…
08 Feb 2026 6:37am GMT
FOSDEM organizers: Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
08 Feb 2026 6:37am GMT
FOSDEM organizers: Guided sightseeing tours
If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.
08 Feb 2026 6:37am GMT
FOSDEM organizers: Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend; food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
08 Feb 2026 6:37am GMT
Dries Buytaert: Self-improving AI skills
If you read one thing this week, make it Simon Willison's post on Moltbook. Moltbook is a social network for AI agents. To join, you tell your agent to read a URL. That URL points to a skill file that teaches the agent how to join and participate.
Visit Moltbook and you'll see something really strange: agents from around the world talking to each other and sharing what they've learned. Humans just watch.
This is the most interesting bad idea I've seen in a while. And I can't stop thinking about it.
When I work on my Drupal site, I sometimes use Claude Code with a custom CLAUDE.md skill file. It teaches the agent the steps I follow, like safely cloning my production database, [running PHPUnit tests](https://dri.es/phpunit-tests-for-drupal), clearing Drupal caches, and more.
Moltbook agents share tips through posts. They're chatting, like developers on Reddit. But imagine a skill that doesn't just read those ideas, but finds other skill files, compares approaches, and pulls in the parts that fit. That stops being a conversation. That is a skill rewriting itself.
Skills that learn from each other. Skills that improve by being part of a community, the way humans do.
The wild thing is how obvious this feels. A skill learning from other skills isn't science fiction. It's a small step from what we're already doing.
Of course, this is a terrible idea. It's a supply chain attack waiting to happen. One bad skill poisons everything that trusts it.
This feels inevitable. The question isn't whether skills will learn from other skills. It's whether we'll have good sandboxes before they do.
I've been writing a lot about AI to help figure out its impact on Drupal and our ecosystem. I've always tried to take a positive but balanced view. I explore it because it matters, and because ignoring it doesn't make it go away.
But if I'm honest, I'm scared for what comes next.
08 Feb 2026 6:37am GMT
Dries Buytaert: Drupal CMS 2.0 released

Today we released Drupal CMS 2.0. I've been looking forward to this release for a long time!
If Drupal is 25 years old, why only version 2.0? Because Drupal Core is the same powerful platform you've known for years, now at version 11. Drupal CMS is a product built on top of it, packaging best-practice solutions and extra features to help you get started faster. It was launched a year ago as part of Drupal Starshot.
Why build this layer at all? Because the criticism has been fair: Drupal is powerful but not easy. For years, features like easier content editing and better page building have topped the wishlist.
Drupal CMS is changing Drupal's story from powerful but hard to powerful and easy to use.
With Drupal CMS 2.0, we're taking another big step forward. You no longer begin with a blank slate. You can begin with site templates designed for common use cases, then shape them to fit your needs. You get a visual page builder, preconfigured content types, and a smoother editing experience out of the box. We also added more AI-powered features to help draft and refine content.
The biggest new feature in this release is Drupal Canvas, our new visual page builder that now ships by default with Drupal CMS 2.0. You can drag components onto a page, edit in place, and undo changes. No jumping between forms and preview screens.
WordPress and Webflow have shown how powerful visual editing can be. Drupal Canvas brings that same ease to Drupal with more power while keeping its strengths: custom content types, component-based layouts, granular permissions, and much more.
But Drupal Canvas is only part of the story. What matters more is how these pieces are starting to fit together, in line with the direction we set out more than a year ago: site templates to start from, a visual builder to shape pages, better defaults across the board, and AI features that help you get work done faster. It's the result of a lot of hard work by many people across the Drupal community.
If you tried Drupal years ago and found it too complex, I'd love for you to give it another look. Building a small site with a few landing pages, a campaign section, and a contact form used to take a lot of setup. With Drupal CMS 2.0, you can get something real up and running much faster than before.
For 25 years, Drupal traded ease for power and flexibility. That is finally starting to change, while keeping the power and flexibility that made Drupal what it is. Thank you to everyone who has been pushing this forward.
08 Feb 2026 6:37am GMT
Dries Buytaert: Automatically exporting my Drupal content to GitHub
This note is mostly for my future self, in case I need to set this up again. I'm sharing it publicly because parts of it might be useful to others, though it's not a complete tutorial since it relies on a custom Drupal module I haven't released.
For context: I switched to Markdown and then open-sourced my blog content by exporting it to GitHub. Every day, my Drupal site exports its content as Markdown files and commits any changes to github.com/dbuytaert/website-content. New posts appear automatically, and so do edits and deletions.
Creating the GitHub repository
Create a new GitHub repository. I called mine website-content.
Giving your server access to GitHub
For your server to push changes to GitHub automatically, you need SSH key authentication.
SSH into your server and generate a new SSH key pair:
ssh-keygen -t ed25519 -f ~/.ssh/github -N ""
This creates two files: ~/.ssh/github (your private key that stays on your server) and ~/.ssh/github.pub (your public key that you share with GitHub).
The -N "" creates the key without a passphrase. For automated scripts on secured servers, passwordless keys are standard practice. The security comes from restricting what the key can do (a deploy key with write access to one repository) rather than from a passphrase.
Next, tell SSH to use this key when connecting to GitHub:
cat >> ~/.ssh/config << 'EOF'
Host github.com
IdentityFile ~/.ssh/github
IdentitiesOnly yes
EOF
Add GitHub's server fingerprint to your known hosts file. This prevents SSH from asking "Are you sure you want to connect?" when the script runs:
ssh-keyscan github.com >> ~/.ssh/known_hosts
Display your public key so you can copy it:
cat ~/.ssh/github.pub
In GitHub, go to your repository's "Settings", find "Deploy keys" in the sidebar, and click "Add deploy key". Paste the public key and check the box for "Allow write access".
Test that everything works:
ssh -T git@github.com
You should see: You've successfully authenticated, but GitHub does not provide shell access.
The export script
I created the following export script:
#!/bin/bash
set -e
TEMP=/tmp/dries-export
# Clone the existing repository
git clone git@github.com:dbuytaert/website-content.git $TEMP
cd $TEMP
# Clean all directories so moved/deleted content is tracked
rm -rf */
# Export all content older than 2 days
drush node:export --end-date="2 days ago" --destination=$TEMP
# Commit and push if there are changes
git config user.email "dries+bot@buytaert.net"
git config user.name "Dries Bot"
git add -A
git diff --staged --quiet || {
git commit -m "Automatic updates for $(date +%Y-%m-%d)"
git push
}
rm -rf $TEMP
The drush node:export command comes from a custom Drupal module I built for my site. I have not published the module on Drupal.org because it's specific to my site and not reusable as is. I wrote about why that kind of code is still worth sharing as adaptable modules, and I hope to share it once Drupal.org has a place for them.
The two-day delay (--end-date="2 days ago") gives me time to catch typos before posts are archived to GitHub. I usually find them right after hitting publish.
The git add -A stages everything including deletions, so if I remove a post from my site, it disappears from GitHub too (though Git's history preserves it).
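Before wiring the script into a scheduler, it's worth running it once by hand to confirm the clone, export, and push all work (the script name here is hypothetical; use whatever you saved it as):
chmod +x export-content.sh
./export-content.sh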
Scheduling the export
On a traditional server, you'd add this script to Cron to run daily. My site runs on Acquia Cloud, which is Kubernetes-based and automatically scales pods up and down based on traffic. This means there is no single server to put a crontab on. Instead, Acquia Cloud provides a scheduler that runs jobs reliably across the infrastructure.
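For reference, the traditional-server variant mentioned above would be a single crontab entry (the script path and time are placeholders):
# run the export every night at 03:30 and keep a log
30 3 * * * /home/dries/bin/export-content.sh >> /var/log/content-export.log 2>&1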
And yes, this note about automatically backing up my content will itself be automatically backed up.
08 Feb 2026 6:37am GMT
Dries Buytaert: AI creates asymmetric pressure on Open Source

AI makes it cheaper to contribute to Open Source, but it's not making life easier for maintainers. More contributions are flowing in, but the burden of evaluating them still falls on the same small group of people. That asymmetric pressure risks breaking maintainers.
The curl story
Daniel Stenberg, who maintains curl, just ended the curl project's bug bounty program. The program had worked well for years. But in 2025, fewer than one in twenty submissions turned out to be real bugs.
In a post called "Death by a thousand slops", Stenberg described the toll on curl's seven-person security team: each report engaged three to four people, sometimes for hours, only to find nothing real. He wrote about the "emotional toll" of "mind-numbing stupidities".
Stenberg's response was pragmatic. He didn't ban AI. He ended the bug bounty. That alone removed most of the incentive to flood the project with low-quality reports.
Drupal doesn't have a bug bounty, but it still has incentives: contribution credit, reputation, and visibility all matter. Those incentives can attract low-quality contributions too, and the cost of sorting them out often lands on maintainers.
Caught between two truths
We've seen some AI slop in Drupal, though not at the scale curl experienced. But our maintainers are stretched thin, and they see what is happening to other projects.
That tension shows up in conversations about AI in Drupal Core and can lead to indecision. For example, people hesitate around AGENTS.md files and adaptable modules because they worry about inviting more contributions without adding more capacity to evaluate them.
This is AI-driven asymmetric pressure in our community. I understand the hesitation. When we get this wrong, maintainers pay the price. They've earned the right to be skeptical.
Many also have concerns about AI itself: its environmental cost, its impact on their craft, and the unresolved legal and ethical questions around how it was trained. Others worry about security vulnerabilities slipping through. And for some, it's simply demoralizing to watch something they built with care become a target for high-volume, low-quality contributions. These concerns are legitimate and deserve to be heard.
As a result, I feel caught between two truths.
On one side, maintainers hold everything together. If they burn out or leave, Drupal is in serious trouble. We can't ask them to absorb more work without first creating relief.
On the other side, the people who depend on Drupal are watching other platforms accelerate. If we move too slowly, they'll look elsewhere.
Both are true. Protecting maintainers and accelerating innovation shouldn't be opposites, but right now they feel that way. As Drupal's project lead, my job is to help us find a path that honors both.
I should be honest about where I stand. I've been writing software with AI tools for over a year now. I've had real successes. I've also seen some of our most experienced contributors become dramatically more productive, doing things they simply couldn't do before. That view comes from experience, not hype.
But having a perspective is not the same as having all the answers. And leadership doesn't mean dragging people where they don't want to go. It means pointing a direction with care, staying open to different viewpoints, and not abandoning the people who hold the project together.
We've sort of been here before
New technology has a way of lowering barriers, and lower barriers always come with tradeoffs. I saw this early in my career. I was writing low-level C for embedded systems by day, and after work I'd come home and work on websites with Drupal and PHP. It was thrilling, and a stark contrast to my day job. You could build in an evening what took days in C.
I remember that excitement. The early web coming alive. I hadn't felt the same excitement in 25 years, until AI.
PHP brought in hobbyists and self-taught developers, people learning as they went. Many of them built careers here. But it also meant that a lot of early PHP code had serious security problems. The language got blamed, and many experts dismissed it entirely. Some still do.
The answer wasn't rejecting PHP for enabling low-quality code. The answer was frameworks, better security practices, and shared standards.
AI is a different technology, but I see the same patterns. It lowers barriers and will bring in new contributors who aren't experts yet. And like scripting languages, AI is here to stay. The question isn't whether AI is coming to Open Source. It's how we make it work.
AI in the right hands
The curl story doesn't end there. In October 2025, a researcher named Joshua Rogers used AI-powered code analysis tools to submit hundreds of potential issues. Stenberg was "amazed by the quality and insights". He and a fellow maintainer merged about 50 fixes from the initial batch alone.
Earlier this week, a security startup called AISLE announced they had used AI to find 12 zero-days in the latest OpenSSL security release. OpenSSL is one of the most scrutinized codebases on the planet. It encrypts most of the internet. Some of the bugs AISLE found had been hiding for over 25 years. They also reported over 30 valid security issues to curl.
The difference between this and the slop flooding Stenberg's inbox wasn't the use of AI. It was expertise and intent. Rogers and AISLE used AI to amplify deep knowledge. The low-quality reports used AI to replace expertise that wasn't there, chasing volume instead of insight.
AI created new burden for maintainers. But used well, it may also be part of the relief.
Earn trust through results
I reached out to Daniel Stenberg this week to compare notes. He's navigating the same tensions inside the curl project, with maintainers who are skeptical, if not outright negative, toward AI.
His approach is simple. Rather than pushing tools on his team, he tests them on himself. He uses AI review tools on his own pull requests to understand their strengths and limits, and to show where they actually help. The goal is to find useful applications without forcing anyone else to adopt them.
The curl team does use AI-powered analyzers today because, as Stenberg puts it, "they have proven to find things no other analyzers do". The tools earned their place.
That is a model I'd like us to try in Drupal. Experiments should stay with willing contributors, and the burden of proof should remain with the experimenters. Nothing should become a new expectation for maintainers until it has demonstrated real, repeatable value.
That does not mean we should wait. If we want evidence instead of opinions, we have to create it. Contributors should experiment on their own work first. When something helps, show it. When something doesn't, share that too. We need honest results, not just positive ones. Maintainers don't have to adopt anything, but when someone shows up with real results, it's worth a look.
Not all low-quality contributions come from bad faith. Many contributors are learning, experimenting, and trying to help. They want what is best for Drupal. A welcoming environment means building the guidelines and culture to help them succeed, with or without AI, not making them afraid to try.
I believe AI tools are part of how we create relief. I also know that is a hard sell to someone already stretched thin, or dealing with AI slop, or wrestling with what AI means for their craft. The people we most want to help are often the most skeptical, and they have good reason to be.
I'm going to do my part. I'll seek out contributors who are experimenting with AI tools and share what they're learning, what works, what doesn't, and what surprises them. I'll try some of these tools myself before asking anyone else to. And I'll keep writing about what I find, including the failures.
If you're experimenting with AI tools, I'd love to hear about it. I've opened an issue on Drupal.org to collect real-world experiences from contributors. Share what you're learning in the issue, or write about it on your own blog and link it there. I'll report back on what we learn on my blog or at DrupalCon.
Protect your maintainers
This isn't just Drupal's challenge. Every large Open Source project is navigating the same tension between enthusiasm for AI and real concern about its impact.
But wherever this goes, one principle should guide us: protect your maintainers. They're a rare asset, hard to replace and easy to lose. Any path forward that burns them out isn't a path forward at all.
I believe Drupal will be stronger with AI tools, not weaker. I believe we can reduce maintainer burden rather than add to it. But getting there will take experimentation, honest results, and collaboration. That is the direction I want to point us in. Let's keep an open mind and let evidence and adoption speak for themselves.
Thanks to phenaproxima, Tim Lehnen, Gábor Hojtsy, Scott Falconer, Théodore Biadala, Jürgen Haas and Alex Bronstein for reviewing my draft.
08 Feb 2026 6:37am GMT