24 Dec 2025
Planet Grep
Lionel Dricot: Prepare for That Stupid World

Prepare for That Stupid World
You have probably heard about the Wall Street Journal story where they had a snack vending machine run by a chatbot created by Anthropic.
At first glance, it is funny and it looks like journalists doing their job criticising the AI industry. If you are curious, the video is there (requires JS).
But what appears to be journalism is, in fact, pure advertising. For both WSJ and Anthropic. Look at how the WSJ journalists are presented as "world class", how unsubtle the Anthropic guy is when telling them they are the best, and how the journalists blush at it. If you take the story at face value, you are falling for the trap, which is simple: "AI is not really good but funny, we must improve it."
The first thing that blew my mind was how stupid the whole idea is. Think for one second. One full second. Why would you ever want to add a chatbot to a snack vending machine? The video states it clearly: the vending machine must be stocked by humans. Customers must order and take their snacks by themselves. The AI has no value at all.
Automated snack vending has been a solved problem for nearly a century. Why would you want to make your vending machine more expensive, more error-prone, more fragile and less efficient for your customers?
What this video is really doing is normalising the fact that "even if it is completely stupid, AI will be everywhere, get used to it!"
The Anthropic guy himself doesn't seem to believe his own lies, to the point of making me uncomfortable. Toward the end, he even tries to warn us: "Claude AI could run your business but you don't want to come one day and see you have been locked out." At which point the journalist adds, "Or has ordered 100 PlayStations."
And then he gives up:
"Well, the best you can do is probably prepare for that world."
Still from the video where Anthropic's employee says "probably prepare for that world"
None of the world-class journalists seemed to care. They are probably too badly paid for that. I was astonished to see how proud they were, having spent literally hours chatting with a bot just to get a free coke, even queuing for the privilege of having a free coke. A coke that costs a few minutes of minimum-wage work.
So the whole thing is advertising a world where chatbots will be everywhere and where world-class workers will stand in long queues just to get a free soda.
And the best advice about it is that you should probably prepare for that world.
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
24 Dec 2025 8:01pm GMT
Lionel Dricot: How We Lost Communication to Entertainment

How We Lost Communication to Entertainment
All our communication channels have morphed into content distribution networks. We are more and more entertained but less and less connected.
A few days ago, I wrote a controversial blog post about Pixelfed hurting the Fediverse. I defended the theory that, in a communication network, you hurt the trust in the whole network if you create clients that arbitrarily drop messages, something that Pixelfed is doing deliberately. It gathered a lot of reactions.
When I originally wrote this post, nearly one year ago, I thought that either I was missing something or Dansup, Pixelfed's creator, was missing it. We could not both be right. But as the reactions piled in on the Fediverse, I realised that such irreconcilable opinions do not arise only from ignorance or oversight. It usually means that both parties have vastly different assumptions about the world. They don't live in the same world.
Two incompatible universes
I started to see a pattern in the two kinds of reactions to my blog post.
There were people like me, often above 40, who like sending emails and browsing old-fashioned websites. We think of ActivityPub as a "communication protocol" between humans. As such, anything that implies losing messages without feedback is the worst thing that could happen. Not losing messages is the top priority of a communication protocol.
And then there are people like Dansup, who believe that ActivityPub is a content consumption protocol. It's there for entertainment. You create as many accounts as the kinds of media you want to consume. Dansup himself is communicating through a Mastodon account, not a Pixelfed one. Many Pixelfed users also have a Mastodon account, and they never questioned that. They actually want multiple accounts for different use cases.
On the Fediverse threads, nearly all the people defending the Pixelfed philosophy posted from Mastodon accounts. They usually boasted about having both a Mastodon and a Pixelfed account.
A multiplicity of accounts
To me, the very goal of interoperability is that you should not be forced to create multiple accounts. Big Monopolies have managed to convince people that they need one account on each platform. This was done, on purpose, for purely unethical reasons in order to keep users captive.
That brainwashing/marketing is so deeply entrenched that most people cannot see an alternative anymore. It looks like a natural law: you need an account on a platform to communicate with someone on that platform. That also explains why most politicians want to "regulate" Facebook or X. They think it is impossible not to be on those platforms. They believe those platforms are "public spaces" while they truly are "private spaces trying to destroy all other public spaces in order to get a monopoly."
People flock to the Fediverse with this philosophy of "one platform, one account", which makes no sense if you truly want to create a federated communication protocol like email or XMPP.
But Manuel Moreale cracked it for me: the Fediverse is not a communication network. ActivityPub is not a communication protocol. The spec says it: ActivityPub is a protocol to build a "social platform" whose goal is "to deliver content."
The ActivityPub protocol is a decentralised social networking protocol based upon the ActivityStreams 2.0 data format. It provides a client to server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content. (official W3C definition of ActivityPub)
No more communication
But aren't social networks also communication networks? That's what I thought. That's how they historically were marketed. That's what we all believed during the "Arab Spring."
But that was a lie. Communication networks are not profitable. Social networks are entertainment platforms, media consumption protocols. Historically, they disguised themselves as communication platforms to attract users and keep them captive.
The point was never to avoid missing a message sent from a fellow human being. The point was always to fill your time with "content."
We dreamed of decentralised social networks as "email 2.0." They truly are "television 2.0."
They are entertainment platforms that delegate media creation to the users themselves, the same way Uber replaced taxis by having people drive others in their own cars.
But what was created as "ride-sharing" was in fact a way to 1) destroy competition and 2) make a shittier service while people producing the work were paid less and lost labour rights. It was never about the social!
The lost messages
My own interpretation is that social media users don't mind losing messages because they were raised on algorithmic platforms that did that all the time. They don't see the point in trusting a platform because they never experienced a trusted means of communication.
Now that I write it, it may also explain why instant messaging became the dominant communication medium: because if you don't receive an immediate answer, you don't even trust the recipient to have received your messages. In fact, even if the message was received, you don't even trust the recipient's attention span to remember the message.
Multiple studies have confirmed that we don't remember the vast majority of what we see while doomscrolling. While the "view" was registered to increase statistics, we don't have the slightest memory of most of that content, even after only a few seconds. It thus makes sense not to consider social media as a means of communication at all.
There's no need for a reliable communication protocol if we assume that human brains are not reliable enough to handle asynchronous messages.
It's not Dansup who is missing something. It is me who is ill-adapted to the current society. I understand now that Pixelfed was only following some design decisions and protocol abuses fathered by Mastodon. Pixelfed was my own "gotcha" moment because I never understood Instagram in the first place, and, in my eyes, Pixelfed was no better. But if you take that route, Mastodon is no better than Twitter.
Many reactions justly pointed out that other Fediverse tools such as PeerTube, WriteFreely, or Mobilizon simply do not display messages at all.
I didn't consider it a big problem because they never pretended to do it in the first place. Nobody uses those tools to follow others. There's no expectation. Those platforms are "publish only." But this is still a big flaw in the Fediverse! Someone could, using autocompletion, send a message pinging your PeerTube address and you would never see it. Try autocompleting "@ploum" from your Mastodon account and guess which suggestion is the only one that will send me a valid notification!
On a more positive note, I should give credit to Dansup for announcing that Pixelfed will soon allow people to optionally "not drop" text messages.
How we lost email
I cling to asynchronous reliable communications, but those are disappearing. I use email a lot because I see it as a true means of communication: reliable, asynchronous, decentralised, standardised, manageable offline with my own tools. But many people, even barely younger than me, tell me that email is "too formal" or "for old people" or "even worse than social network feeds."
And they are probably right. I like it because I've learned to use it. I apply a strong inbox 0 methodology. If I don't reply or act on your email, it is because I decided not to. I'm actively keeping my inbox clean by sharing only disposable email addresses that I disable once they start to be spammed.
But for most people, their email inbox is simply one more feed full of bad advertising. They have a four- or five-digit unread count. They scroll through their inbox like they do through their social media feeds.
Boringness of communications
The main problem with reliable communication protocols? It is a mostly solved problem. Build simple websites, read RSS feeds, write emails. Use IRC and XMPP if you truly want real-time communication. Those work, and they work great.
And because of that, they are boring.
Communication protocols are boring. They don't give you that well-studied random hit of dopamine. They don't make you addicted.
They don't make you addicted, which means they are not hugely profitable and thus are not advertised. They are not new. They are not as shiny as a new app or a new random chatbot.
The problem with communication protocols was never the protocol part. It's the communication part. A few sad humans never wanted to communicate in the first place and managed to become billionaires by convincing the rest of mankind that being entertained is better than communicating with other humans.
As long as I'm not alone
We believe that a communication network must reach a critical mass to be really useful. People stay on Facebook to "stay in touch with the majority." I don't believe that lie anymore. I'm falling back to good old mailing lists. I'm reading the Web and Gemini while offline through Offpunk. I also handle my emails asynchronously while offline.
- Offpunk, an offline-first command-line browser (offpunk.net)
- There Is No Content on Gemini (ploum.net)
I may be part of an endangered species.
It doesn't matter. I made peace with the fact that I will never get in touch with everyone. As long as there are people posting on their gemlogs or blogs with RSS feeds, as long as there are people willing to read my emails without automatically summarising them, there will be a place for those who want to simply communicate. A protected reserve.
You are welcome to join!
- Illustration of a message board piled with messages by David Revoy (CC By 4.0)
- Illustration of animal meeting at an intersection with messages by David Revoy (CC By 4.0)
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
24 Dec 2025 8:01pm GMT
Frederic Descamps: Deploying on OCI with the starter kit – part 8 (using MySQL REST Service)
The starter kit deploys a MySQL HeatWave DB System on OCI and enables the MySQL REST Service automatically: The REST Service enables us to provide access to data without requiring SQL. It also provides access to some Gen AI functionalities available in MySQL HeatWave. Adding data to MRS using Visual Studio Code To be able […]
24 Dec 2025 8:01pm GMT
Frederic Descamps: Deploying on OCI with the starter kit – part 7 (GenAI in HeatWave)
We saw in part 6 how to use OCI's GenAI Service. GenAI Service uses GPUs for the LLMs, but did you know it's also possible to use GenAI directly in MySQL HeatWave? And by default, those LLMs will run on CPU. The cost will then be reduced. This means that when you are connected to […]
24 Dec 2025 8:01pm GMT
Frederic Descamps: Deploying on OCI with the starter kit – part 6 (GenAI)
In the previous articles [1], [2], [3], [4], [5], we saw how to easily and quickly deploy an application server and a database to OCI. We also noticed that we have multiple programming languages to choose from. In this article, we will see how to use OCI GenAI Service (some are also available with the […]
24 Dec 2025 8:01pm GMT
Frederic Descamps: Deploying on OCI with the starter kit – part 5 (connecting to the database II)
In part 4 of our series on the OCI Hackathon Starter Kit, we saw how to connect to the deployed MySQL HeatWave instance from our clients (MySQL Shell, MySQL Shell for VS Code, and Cloud Shell). In this post, we will see how to connect from an application using a connector. We will cover connections […]
24 Dec 2025 8:01pm GMT
Frederic Descamps: Deploying on OCI with the starter kit – part 4 (connecting to the database)
Let's now see how we can connect to our MySQL HeatWave DB System, which was deployed with the OCI Hackathon Starter Kit in part 1. We have multiple possibilities to connect to the DB System, and we will use three of them: MySQL Shell in the command line MySQL Shell is already installed on the […]
24 Dec 2025 8:01pm GMT
Frank Goossens: Cycling with a smile on the winter solstice
On my bike ride today, as I rode through Borgharen, a walker I will never know gave me a generous smile. Barely a second of connection, and then the sun broke through shortly afterwards. The winter solstice on the bike was memorable!
24 Dec 2025 8:01pm GMT
Dries Buytaert: I open-sourced my blog content
Last week I wrote that a blog is a biography. But sometimes our most advanced technology is also our most fragile. With my blog turning twenty years old in fifteen days, I have been thinking a lot about digital preservation.
The question I keep coming back to is simple: how do you preserve a website for hundreds of years?
I don't have the answer yet, but it's something I plan to slowly work on over the next 10 years. What I'm describing here is a first step.
Humans have been trying to preserve their words since we learned to write. Medieval monks hand-copied manuscripts that survived centuries. Clay tablets from ancient Mesopotamia still tell us about daily life from 5,000 years ago. They worked because they asked very little of the future. A clay tablet basically just sits there.
In contrast, websites require continuous maintenance and recurring payments. Miss either, and they quietly disappear. That makes it hard for websites to survive for hundreds of years.
Traditional backups may help content survive, but they only work if someone knows they exist and what to do with them. Not a safe bet over hundreds of years.
So I am trying something different. I exported my blog as Markdown files and put them on GitHub. Nearly twenty years of posts are now in a public repository at github.com/dbuytaert/website-content.
I'm essentially making two bets. First, GitHub does not need me to keep paying bills or renewing domains. Second, a public Git repository can be cloned. Each clone becomes an independent copy that does not depend on me.
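That second bet is easy to act on: anyone with Git installed can take an independent copy. A one-line sketch using the standard clone command against the repository above:

# Make your own independent copy of the content archive.
git clone https://github.com/dbuytaert/website-content.git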
If you use a static site generator like Jekyll or Hugo, you are probably thinking: "Welcome to 2010!". Fair enough. You have been storing content as Markdown in Git since before my kids could walk. The difference is that most people keep their Git repositories private. I am making mine public.
To be clear, my site still runs on Drupal, and that is not changing. No need to panic. I just made my Drupal site export its content as Markdown.
For the past two weeks, my site has been auto-committing to GitHub daily. Admittedly, it feels a bit strange to share everything like this. New blog posts show up automatically, but so does everything else: tag maintenance, even deleted posts I decided were not worth keeping.
My blog has a publish button, an edit button, and a delete button. In my view, they are all equally legitimate. Now you can see me use all three. Git hides nothing.
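Dries doesn't spell out the mechanics, but a daily auto-commit job can be as small as a cron-driven shell script. A minimal sketch, assuming a hypothetical Drush export command and placeholder paths; this is not his actual implementation:

#!/usr/bin/env bash
# Hypothetical daily export-and-commit job; the export command and paths
# below are assumptions for illustration, not the real setup.
set -euo pipefail

SITE_DIR=/var/www/drupal                   # assumed Drupal root
EXPORT_DIR=/home/dries/website-content     # assumed local clone of the public repo

# Assumed custom Drush command provided by the site's export module.
cd "$SITE_DIR"
drush my-blog:export-markdown --destination="$EXPORT_DIR"

cd "$EXPORT_DIR"
git add --all
# Only commit and push when the export actually changed something.
if ! git diff --cached --quiet; then
  git commit -m "Automated content export: $(date -u +%F)"
  git push origin main
fi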
Exporting my content to GitHub is my first bet, not my last. My plan is to build toward something like a RAID for web content, spreading copies across multiple systems.
24 Dec 2025 8:01pm GMT
Dries Buytaert: Christmas lights, powered by Drupal
Drupal-blue LEDs, controllable through a REST API and a Drupal website. Photo by Phil Norton.
It's Christmas Eve, and Phil Norton is controlling his Christmas lights with Drupal. You can visit his site, pick a color, and across the room, a strip of LEDs changes to match. That feels extra magical on Christmas Eve.
I like how straightforward his implementation is. A Drupal form stores the color value using the State API, a REST endpoint exposes that data as JSON, and MicroPython running on a Pimoroni Plasma board polls the endpoint and updates the LEDs.
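The polling half of that design is the simplest part, and easy to try from any machine. A rough sketch with a made-up endpoint URL and JSON shape (the real work happens in the MicroPython code driving the LEDs):

#!/usr/bin/env bash
# Poll a hypothetical Drupal REST endpoint for the current colour value.
# The URL and the JSON field name are assumptions for illustration.
ENDPOINT="https://example.com/api/lights/colour"

while true; do
  # Expecting a small JSON payload such as {"colour": "#2ba9e0"}.
  colour=$(curl -fsS "$ENDPOINT" | jq -r '.colour')
  echo "Current colour: ${colour:-unknown}"
  sleep 5   # poll every few seconds, roughly like the board would
done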
I've gone down the electronics rabbit hole myself with my solar-powered website and basement temperature monitor, both using Drupal as the backend. I didn't do an electronics project in 2025, but this makes me want to do another one in 2026.
I also didn't realize you could buy light strips where each LED can be controlled individually. That alone makes me want to up my Christmas game next year.
But addressable LEDs are useful for more than holiday decorations. You could show how many people are on your site, light up a build as it moves through your CI/CD pipeline, flash on failed logins, or visualize the number of warnings in your Drupal logs. This quickly stops being a holiday decoration and starts looking like a tax-deductible business expense.
Beyond the fun factor, Phil's tutorial does real teaching. It uses Drupal features many of us barely think about anymore: the State API, REST resources, flood protection, even the built-in HTML color field. It's not just a clever demo, but also a solid tutorial.
The Drupal community gets stronger when people share work this clearly and generously. If you've been curious about IoT, this is a great entry point.
Merry Christmas to those celebrating. Go build something that blinks. May your deployments be smooth and your Drupal-powered Christmas lights shine bright.
24 Dec 2025 8:01pm GMT
Dries Buytaert: Adaptable Drupal modules: code meant to be adapted, not installed
Over the years, I've built dozens of small, site-specific Drupal modules. None of them live on Drupal.org.
It makes me wonder: how many modules like that exist across the Drupal ecosystem? I'm guessing a lot.
For example, I recently open-sourced the content of this blog by exporting my posts as Markdown files and publishing them on GitHub. To do that, I built two custom Drupal modules with Claude Code: one that converts HTML to Markdown, and another that exports content as YAML with Markdown.
Both modules embed architectural choices and algorithms I explicitly described to Claude Code. Both have unit tests and have been used in production. But both only work for my site.
They're built around my specific content model and field names. For example, my export module expects fields like field_summary and field_image to exist. I'd love to contribute them to Drupal.org, but turning site-specific code into something reusable can be a lot of work.
On Drupal.org, contributed modules are expected to work for everyone. That means abstracting away my content model, adding configuration options I'll never use, handling edge cases I'll never hit, and documenting setups I haven't tested.
There is a "generalization tax": the cost of making code flexible enough for every possible site. Drupal has always had a strong culture of contribution, but this tax has kept a lot of useful code private. My blog alone has ten custom modules that will probably never make it to Drupal.org under the current model.
Generalization work is extremely valuable, and the maintainers who do it deserve a lot of credit. But it can be a high bar, and a lot of useful code never clears it.
That made me wonder: what if we had a different category of contributed code on Drupal.org?
Let's call them "adaptable modules", though the name matters less than the idea.
The concept is simple: tested, working code that solves a real problem for a real site, shared explicitly as a starting point. You don't install these modules. You certainly don't expect them to work out of the box. Instead, an AI adapts the code for you by reading it and understanding the design decisions embedded in it. Or a human can do the same.
In practice, that might mean pointing Claude Code at my Markdown export module and prompting: "I need something like this, but my site uses Paragraphs instead of a regular Body field". Or: "I store images in a media field instead of an image field". The AI reads the code, understands the approach, and generates a version tailored to your setup.
This workflow made less sense when humans had to do all the adaptation. But AI changes the economics. AI is good at reading code, understanding what it does, and reshaping it for a new context. The mechanical work of adaptation is becoming both cheap and reliable.
What matters are the design decisions embedded in the code: the architecture, the algorithms, the trade-offs. Those came from me, a human. They are worth sharing, even if AI handles the mechanical adaptation.
This aligns with where engineering is heading. As developers, we'll spend less time on syntax and boilerplate, and more time on understanding problems, making architectural choices, and weighing trade-offs. Our craft is shifting from writing code to shaping code. And orchestrating the AI agents that write it. Adaptable modules fit that future.
Modules that work for everyone are still important. Drupal's success will always depend on them. But maybe they're not the only kind worth sharing. The traditional contribution model, generalizing everything for everyone, makes less sense for smaller utility modules when AI can generate context-specific code on demand.
Opinionated, site-specific modules have always lived in private repositories. What is new is that AI makes them worth sharing. Code that only works for my site becomes a useful starting point when AI can adapt it to yours.
I created an issue on Drupal.org to explore this further. I'd love to hear what you think.
(Thanks to phenaproxima, Tim Lehnen, Gábor Hojtsy and Wim Leers for reviewing my draft.)
24 Dec 2025 8:01pm GMT
Dries Buytaert: AI flattens interfaces and deepens foundations
Lee Robinson, who works at Cursor, spent $260 in AI coding agent fees to migrate Cursor's marketing site away from Sanity, their headless CMS, to Markdown files. That number should unsettle anyone who sells or implements content management systems for a living. His reasoning: "With AI and coding agents, the cost of an abstraction has never been higher". He argued that a CMS gets in the way of AI coding agents.
Knut Melvær, who works at Sanity, the very CMS Lee abandoned, wrote a rebuttal worth reading. He pointed out that Lee hadn't actually escaped the complexity of a CMS. Lee still ended up with content models, version control, and user permissions. He just moved them out of the CMS and distributed them across GitHub, Vercel, and custom scripts. That reframing is hard to unsee.
Meanwhile, the broader momentum is undeniable. Lovable, the AI-first website builder, went from zero to $200 million in annual recurring revenue in twelve months. Users prompt what they want and Lovable generates complete, functional applications.
Historically, the visible layer of a CMS, the page builders and content creation workflows, is where most people spend their time. But the invisible layer is what makes organizations trust the system: structured content models, permission systems, audit trails, web service APIs, caching layers, translation workflows, design systems, component libraries, regulatory compliance and more. A useful way to think about a CMS is that roughly 30 percent is visible layer and 70 percent is invisible layer.
For more than twenty years, the visible layer was where the work started. You started from a blank slate - a page builder or a content form - then wrote the headline, picked an image, and arranged the layout. The visible layer was like the production floor.
AI changes this dynamic fundamentally. You can now prompt a landing page into existence in under a minute, or generate ten variations and pick the best one. The heavy lifting of content creation is moving to AI.
But AI gets you most of the way, not all the way. The headline is close but not quite right, or there is a claim buried in paragraph three that is technically wrong. Someone still needs to review, adjust, and approve the result.
So the visible layer still matters, but it serves a different purpose. It's where humans set direction at the start and refine the result at the end. AI handles everything in between.
You can try to prompt all the way to the finish line, but for "the last mile", many people will still prefer using a UI. So the traditional page builder becomes a refinement tool rather than a production tool. And you still need the full UI because it is where you review, adjust, and approve what AI generates.
What happens to the invisible layer? Its job shifts from "content management" to "context management". It provides what AI needs to do the job right: brand rules, compliance constraints, content relationships, approval workflows. The system becomes more powerful and complex, while requiring less manual work.
So my base case for the future of CMS is simple: AI handles eighty percent of the work. Humans handle the remaining twenty by setting direction at the start, and refining, approving, and taking responsibility at the end.
This is why Drupal is not standing still. We recently launched Drupal Canvas 1.0 and one of its top priorities for 2026 is maturing AI-driven page generation. As this work progresses, Drupal Canvas could become an AI-first experience for those who want it. Watching that come together has been one of the most exciting things I've worked on in years. We're far from done, but the direction feels right.
Lee proved that a skilled developer with AI coding agents can rebuild a marketing site in a weekend for $260. That is genuinely remarkable. But it doesn't prove that every organization will abandon their CMS.
CMSes have to evolve. They have to become a reliable foundation that both humans and AI agents can build on together. The visible layer shifts from where you create to where you refine. The invisible layer does more work but doesn't disappear. Someone still has to direct the system and answer for it when things go wrong.
That is not a smaller role. It's a different one.
24 Dec 2025 8:01pm GMT
Dries Buytaert: A blog is a biography
My mom as a newborn in her mother's arms, surrounded by my grandparents and great-grandparents.
I never knew my great grandparents. They left no diary, no letters, only a handful of photographs. Sometimes I look at those photos and wonder what they cared about. What were their days like? What made them laugh? What problems were they working through?
Then I realize it could be different for my descendants. A long-running blog like mine is effectively an autobiography.
So far, it captures nearly twenty years of my life: my PhD work, the birth of my children, and the years of learning how to lead Drupal and build a community. It even captures the excitement of starting two companies, and the lessons I learned along the way.
And in recent years, it captures the late night posts where I try to make sense of what AI might change. They are a snapshot of a world in transition. One day, it may be hard to remember AI was ever new.
In a way, a blog is a digital time capsule. It is the kind of record I wish my great grandparents had left behind.
I did not start blogging with this in mind. I wrote to share ideas, to think out loud, to guide the Drupal community, and to connect with others. The personal archive was a side effect.
Now I see it differently. Somewhere in there is a version of me becoming a father. A version trying to figure out how to build something that lasts. A version wrestling, late at night, with technology changes happening in front of me.
If my grandchildren ever want to know who I was, they will not have to guess. They will be able to hear my voice.
If that idea feels compelling, this might be a good time to start a blog or a website. Not to build a large audience, but just to leave a trail. Future you may be grateful you began.
24 Dec 2025 8:01pm GMT
Dries Buytaert: A RAID for web content
If you've worked with storage systems, you know RAID: redundant arrays of independent disks. RAID doesn't try to make individual disks more reliable. It accepts that disks fail and designs around it.
I recently open-sourced my blog content by automatically exporting it as Markdown to GitHub. GitHub might outlive me, but it probably won't be around in 100 years either. No one really knows.
That raises a simple question: where should content live if you want it to last decades, maybe centuries?
I don't have the answer, but I know it matters well beyond my blog. We are the first generation to create digital content, and we are not very good at preserving what we create. Studies of link rot consistently show that large portions of the web disappear within just a few years.
Every time you publish something online, you're trusting a stack: the file format, the storage medium, the content management system, the organization running the service, the economic model keeping them running. When any layer fails, your content is gone.
So my plan is to slowly build a "digital preservation RAID" across several platforms: GitHub, the Internet Archive, IPFS, and blockchain-based storage like Filecoin or Arweave. If one disappears, the others might remain.
Each option has different trade-offs and failure modes. GitHub has corporate risk because Microsoft owns it, and one day their priorities might change. The Internet Archive depends on non-profit funding and has faced costly legal battles. IPFS requires someone to actively "pin" your content - if no one cares enough to host it, it disappears. Blockchain-based solutions let you pay once for permanent storage, but the economic models are unproven and I'm not a fan of their climate impact.
If I had to bet on a single option, it would be the Internet Archive. They've been doing some pretty heroic work the past 25 years. GitHub feels durable but archiving old blog posts will never be Microsoft's priority. IPFS, Filecoin, and Arweave are fascinating technical experiments, but I wouldn't rely on them alone.
But the point is not to pick a winner. It is to accept failure as inevitable and design around it, and to keep doing that as the world changes and better preservation tools emerge.
The cost of loss isn't just data. It is the ability to learn from what came before. That feels like a responsibility worth exploring.
24 Dec 2025 8:01pm GMT
Amedee Van Gasse: sort -u vs sort | uniq: a tiny Linux fork in the road
I recently fell into one of those algorithmic rabbit holes that only the internet can provide. The spark was a YouTube Short by @TechWithHazem: a rapid-fire terminal demo showing a neat little text-processing trick built entirely out of classic Linux tools. No frameworks, no dependencies, just pipes, filters, and decades of accumulated wisdom compressed into under two minutes.
That's the modern paradox of Unix & Linux culture: tools older than many of us are being rediscovered through vertical videos and autoplay feeds. A generation raised on Shorts and Reels is bumping into sort, uniq, and friends, often for the first time, and asking very reasonable questions like: wait, why are there two ways to do this?
So let's talk about one of those deceptively small choices.
The question
What's better?
sort -u
or
sort | uniq
At first glance, they seem equivalent. Both give you sorted, unique lines of text. Both appear in scripts, blog posts, and Stack Overflow answers. Both are "correct".
But Linux has opinions, and those opinions are usually encoded in flags.
The short answer
sort -u is almost always better.
The longer answer is where the interesting bits live.
What actually happens
sort -u tells sort to do two things at once:
- sort the input
- suppress duplicate lines
That's one program, one job, one set of buffers, and one round of temporary files. Fewer processes, less data sloshing around, and fewer opportunities for your CPU to sigh quietly.
By contrast, sort | uniq is a two-step relay race. sort does the sorting, then hands everything to uniq, which removes duplicates - but only if they're adjacent. That adjacency requirement is why the sort is mandatory in the first place.
This pipeline works because Linux tools compose beautifully. But composition has a cost: an extra process, an extra pipe, and extra I/O.
On small inputs, you'll never notice. On large ones, sort -u usually wins on performance and simplicity.
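If you want to see this for yourself, a throwaway file makes the equivalence easy to check; the file names here are just placeholders:

# Build a small test file with duplicates.
printf 'pear\napple\npear\nbanana\napple\n' > fruit.txt

sort -u fruit.txt          # -> apple, banana, pear
sort fruit.txt | uniq      # -> the same three lines

# Both forms produce identical output:
diff <(sort -u fruit.txt) <(sort fruit.txt | uniq) && echo "identical"

# For timing on real data, run both on a large file and compare:
#   time sort -u big.txt        > /dev/null
#   time sort big.txt | uniq    > /dev/null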
Clarity matters too
There's also a human factor.
When you see sort -u, the intent is explicit: "I want sorted, unique output."
When you see sort | uniq, you have to mentally remember a historical detail: uniq only removes adjacent duplicates.
That knowledge is common among Linux people, but it's not obvious. sort -u encodes the idea directly into the command.
When uniq still earns its keep
All that said, uniq is not obsolete. It just has a narrower, sharper purpose.
Use sort | uniq when you want things that sort -u cannot do, such as:
- counting duplicates (uniq -c)
- showing only duplicated lines (uniq -d)
- showing only lines that occur once (uniq -u)
In those cases, uniq isn't redundant - it's the point.
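A quick demonstration of those three flags on the same kind of throwaway file:

# Sample data: two pears, two apples, one banana.
printf 'pear\napple\npear\nbanana\napple\n' > fruit.txt

sort fruit.txt | uniq -c   # -> 2 apple, 1 banana, 2 pear
sort fruit.txt | uniq -d   # -> apple, pear   (lines appearing more than once)
sort fruit.txt | uniq -u   # -> banana        (lines appearing exactly once)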
A small philosophical note
This is one of those Linux moments that looks trivial but teaches a bigger lesson. Linux tools evolve. Sometimes functionality migrates inward, from pipelines into flags, because common patterns deserve first-class support.
sort -u is not "less Linuxy" than sort | uniq. It's Linux noticing a habit and formalizing it.
The shell still lets you build LEGO castles out of pipes. It just also hands you pre-molded bricks when the shape is obvious.
The takeaway
If you just want unique, sorted lines:
sort -u
If you want insight about duplication:
sort | uniq …
Same ecosystem, different intentions.
And yes, it's mildly delightful that a 1'30" YouTube Short can still provoke a discussion about tools designed in the 1970s. The terminal endures. The format changes. The ideas keep resurfacing - sorted, deduplicated, and ready for reuse.
24 Dec 2025 8:01pm GMT
Amedee Van Gasse: 25 Years of amedee.be – A Quarter Century Online 🎉
Today marks exactly 25 years since I registered amedee.be. On 12 December 2000, at 17:15 CET, my own domain officially entered the world. It feels like a different era: an internet of static pages, squealing dial-up modems, and websites you assembled yourself with HTML, stubbornness, and whatever tools you could scrape together. 

I had websites before that - my first one must have been around 1996, hosted on university servers or one of those free hosting platforms that have long since disappeared. There is no trace of those early experiments, and that's probably for the best. Frames, animated GIFs, questionable colour schemes… it was all part of the charm.

But amedee.be was the moment I claimed a place on the internet that was truly mine. And not just a website: from the very beginning, I also used the domain for email, which added a level of permanence and identity that those free services never could. 
Over the past 25 years, I have used more content management systems than I can easily list. I started with plain static HTML. Then came a parade of platforms that now feel almost archaeological: self-written Perl scripts, TikiWiki, XOOPS, Drupal… and eventually WordPress, where the site still lives today. I'm probably forgetting a few - experience tends to blur after a quarter century online.

Not all of that content survived. I've lost plenty along the way: server crashes, rushed or ill-planned CMS migrations, and the occasional period of heroic under-backing-up. I hope I've learned something from each of those episodes. Fortunately, parts of the site's history can still be explored through the Wayback Machine at the Internet Archive - a kind of external memory for the things I didn't manage to preserve myself.


The hosting story is just as varied. The site spent many years at Hetzner, had a period on AWS, and has been running on DigitalOcean for about a year now. I'm sure there were other stops in between - ones I may have forgotten for good reasons.

What has remained constant is this: amedee.be is my space to write, tinker, and occasionally publish something that turns out useful for someone else. A digital layer of 25 years is nothing to take lightly. It feels a bit like personal archaeology - still growing with each passing year.

Here's to the next 25 years. I'm curious which tools, platforms, ideas, and inevitable mishaps I'll encounter along the way. One thing is certain: as long as the internet exists, I'll be here somewhere. 
24 Dec 2025 8:01pm GMT
