18 Dec 2025
Planet Grep
Frederic Descamps: Deploying on OCI with the starter kit – part 5 (connecting to the database II)
In part 4 of our series on the OCI Hackathon Starter Kit, we saw how to connect to the deployed MySQL HeatWave instance from our clients (MySQL Shell, MySQL Shell for VS Code, and Cloud Shell). In this post, we will see how to connect from an application using a connector. We will cover connections […]
18 Dec 2025 2:56am GMT
Dries Buytaert: I open-sourced my blog content
Last week I wrote that a blog is a biography. But sometimes our most advanced technology is also our most fragile. With my blog turning twenty years old in fifteen days, I have been thinking a lot about digital preservation.
The question I keep coming back to is simple: how do you preserve a website for hundreds of years?
I don't have the answer yet, but it's something I plan to slowly work on over the next 10 years. What I'm describing here is a first step.
Humans have been trying to preserve their words since we learned to write. Medieval monks hand-copied manuscripts that survived centuries. Clay tablets from ancient Mesopotamia still tell us about daily life from 5,000 years ago. They worked because they asked very little of the future. A clay tablet basically just sits there.
In contrast, websites require continuous maintenance and recurring payments. Miss either, and they quietly disappear. That makes it hard for websites to survive for hundreds of years.
Traditional backups may help content survive, but they only work if someone knows they exist and what to do with them. Not a safe bet over hundreds of years.
So I am trying something different. I exported my blog as Markdown files and put them on GitHub. Nearly twenty years of posts are now in a public repository at github.com/dbuytaert/website-content.
I'm essentially making two bets. First, GitHub does not need me to keep paying bills or renewing domains. Second, a public Git repository can be cloned. Each clone becomes an independent copy that does not depend on me.
If you use a static site generator like Jekyll or Hugo, you are probably thinking: "Welcome to 2010!". Fair enough. You have been storing content as Markdown in Git since before my kids could walk. The difference is that most people keep their Git repositories private. I am making mine public.
To be clear, my site still runs on Drupal, and that is not changing. No need to panic. I just made my Drupal site export its content as Markdown.
For the past two weeks, my site has been auto-committing to GitHub daily. Admittedly, it feels a bit strange to share everything like this. New blog posts show up automatically, but so does everything else: tag maintenance, even deleted posts I decided were not worth keeping.
My blog has a publish button, an edit button, and a delete button. In my view, they are all equally legitimate. Now you can see me use all three. Git hides nothing.
Exporting my content to GitHub is my first bet, not my last. My plan is to build toward something like a RAID for web content, spreading copies across multiple systems.
18 Dec 2025 2:56am GMT
Dries Buytaert: A RAID for web content
If you've worked with storage systems, you know RAID: redundant arrays of independent disks. RAID doesn't try to make individual disks more reliable. It accepts that disks fail and designs around it.
I recently open-sourced my blog content by automatically exporting it as Markdown to GitHub. GitHub might outlive me, but it probably won't be around in 100 years either. No one really knows.
That raises a simple question: where should content live if you want it to last decades, maybe centuries?
I don't have the answer, but I know it matters well beyond my blog. We are the first generation to create digital content, and we are not very good at preserving what we create. Studies of link rot consistently show that large portions of the web disappear within just a few years.
Every time you publish something online, you're trusting a stack: the file format, the storage medium, the content management system, the organization running the service, the economic model keeping them running. When any layer fails, your content is gone.
So my plan is to slowly build a "digital preservation RAID" across several platforms: GitHub, the Internet Archive, IPFS, and blockchain-based storage like Filecoin or Arweave. If one disappears, the others might remain.
Each option has different trade-offs and failure modes. GitHub has corporate risk because Microsoft owns it, and one day their priorities might change. The Internet Archive depends on non-profit funding and has faced costly legal battles. IPFS requires someone to actively "pin" your content - if no one cares enough to host it, it disappears. Blockchain-based solutions let you pay once for permanent storage, but the economic models are unproven and I'm not a fan of their climate impact.
If I had to bet on a single option, it would be the Internet Archive. They've been doing some pretty heroic work the past 25 years. GitHub feels durable but archiving old blog posts will never be Microsoft's priority. IPFS, Filecoin, and Arweave are fascinating technical experiments, but I wouldn't rely on them alone.
But the point is not to pick a winner. It is to accept failure as inevitable and design around it, and to keep doing that as the world changes and better preservation tools emerge.
The cost of loss isn't just data. It is the ability to learn from what came before. That feels like a responsibility worth exploring.
18 Dec 2025 2:56am GMT
17 Dec 2025
Planet Debian
Jonathan McDowell: 21 years of blogging

21 years ago today I wrote my first blog post. Did I think I'd still be writing all this time later? I've no idea, to be honest. I've always had the impression my readership is small, mostly people who know me in some manner, and I post to let them know what I'm up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I've documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.
From a software PoV I started out with Blosxom, migrated to MovableType in 2008, then ditched that for Jekyll in 2015 when the Open Source variant disappeared (which is also when I started putting it all in git). I've stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don't get a lot, I can't be bothered with the effort of trying to protect against spammers, and folk who don't want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I'll probably take a look at Hugo, but thankfully at present there's no push factor to switch.
It's interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I've no idea how on earth that happened), while 2013 had only 2. Generally I write less when I'm busy, or stressed, or unhappy, so it's kinda interesting to see how that lines up with various life events.

During that period I've lived in 10 different places (well, 10 different houses/flats, I think it's only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I've travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn't made it to these pages.
At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.
17 Dec 2025 5:06pm GMT
Sven Hoexter: exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie
exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing, do not try defrag.exfat! At least not without a vetted and current backup.
Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out to be incompatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.
If you hit that issue on trixie with exfatprogs 1.2.9-1, you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.
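Whether a drive is one of those 512-byte-logical/4096-byte-physical devices can be checked via sysfs; a quick sketch (sdX is a placeholder for your device):

```sh
# Sector sizes as reported by the kernel:
cat /sys/block/sdX/queue/logical_block_size    # 512 on the affected drives
cat /sys/block/sdX/queue/physical_block_size   # 4096 on the affected drives
```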
17 Dec 2025 2:38pm GMT
Dirk Eddelbuettel: RcppArmadillo 15.2.3-1 on CRAN: Upstream Update


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.
This version updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 'legacy' Armadillo yet offering the current version as the default. If and when CRAN has nudged (nearly) all maintainers away from C++11 (and now also C++14!), we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the C++11 transition.
There were no R-side changes in this release. The detailed changes since the last release follow.
Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)
- Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)
- Faster .resize() for vectors
- Faster repcube()
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
17 Dec 2025 2:11pm GMT
11 Dec 2025
Planet Lisp
Scott L. Burson: FSet v2.1.0 released: Seq improvements
I have just released FSet v2.1.0 (also on GitHub).
This release is mostly to add some performance and functionality improvements for seqs. Briefly:
- Access to and updating of elements at the beginning or end of a long seq is now faster.
- I have finally gotten around to implementing search and mismatch on seqs. NOTE: this may require changes to your package definitions; see below.
- Seqs containing only characters are now treated specially, making them a viable replacement for CL strings in many cases.
- In an FSet 2 context, the seq constructor macros now permit specification of a default.
- There are changes to some convert methods.
- There are a couple more FSet 2 API changes, involving image.
See the above links for the full release notes.
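As a rough illustration of the kind of package-definition change the search/mismatch note above refers to, here is a hypothetical sketch (the package name and the rest of the shadowing list are illustrative, and it assumes the classic :fset package):

```lisp
;; Hypothetical package using FSet: since FSet's generic SEARCH and
;; MISMATCH conflict with the CL symbols of the same names, they need
;; to be shadowed explicitly, like other conflicting FSet operations.
(defpackage :my-app
  (:use :common-lisp :fset)
  (:shadowing-import-from :fset
   #:search #:mismatch
   #:union #:intersection #:set-difference))
```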
UPDATE: there's already a v2.1.1; I had forgotten to export the new function char-seq?.
11 Dec 2025 4:01am GMT
09 Dec 2025
FOSDEM 2026
/dev/random and lightning talks
The room formerly known as "Lightning Talks" is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things!
- /dev/random: 15-minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks
- New Lightning Talks: a highly condensed batch of 5-minute quick talks in the main auditorium on various FOSS-related subjects!
Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…
09 Dec 2025 11:00pm GMT
04 Dec 2025
Planet Lisp
Tim Bradshaw: Literals and constants in Common Lisp
Or, constantp is not enough.
Because I do a lot of things with Štar, and for other reasons, I spend a fair amount of time writing various compile-time optimizers for things which have the semantics of function calls. You can think of iterator optimizers in Štar as being a bit like compiler macros: the aim is to take a function call form and to turn it, in good cases, into something quicker [1]. One important way of doing this is to be able to detect things which are known at compile-time: constants and literals, for instance.
One of the things this has made clear to me is that, like John Peel, constantp is not enough. Here's an example.
(in-row-major-array a :simple t :element-type 'fixnum) is a function call whose values Štar can use to tell it how to iterate (via row-major-aref) over an array. When used in a for form, its optimizer would like to be able to expand into something involving (declare (type (simple-array fixnum *) ...)), so that the details of the array are known to the compiler, which can then generate fast code for row-major-aref. This makes a great deal of difference to performance: array access to simple arrays of known element types is usually much faster than to general arrays.
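As a hand-written illustration (not Štar's actual output, and the function name is made up), this is the kind of fast path such a declaration enables:

```lisp
(defun sum-fixnum-array (a)
  ;; With the element type declared, ROW-MAJOR-AREF can be open-coded
  ;; by the compiler instead of dispatching on the array at runtime.
  (declare (type (simple-array fixnum *) a))
  (let ((sum 0))
    (dotimes (i (array-total-size a) sum)
      (incf sum (row-major-aref a i)))))
```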
In order to do this it needs to know two things:
- that the values of the simple and element-type keyword arguments are compile-time constants;
- what their values are.
You might say, well, that's what constantp is for [2]. It's not: constantp tells you only the first of these, and you need both.
Consider this code, in a file to be compiled:
(defconstant et 'fixnum)
(defun ... ...
  (for ((e (in-array a :element-type et)))
    ...)
  ...)
Now, constantp will tell you that et is indeed a compile-time constant. But it won't tell you its value, and in particular nothing says it needs to be bound at compile-time at all: (symbol-value 'et) may well be an error at compile-time.
constantp is not enough [3]! Instead you need a function that tells you 'yes, this thing is a compile-time constant, and its value is …'. This is what literal does [4]: it conservatively answers the question, and tells you the value if so. In particular, an expression like (literal '(quote fixnum)) will return fixnum, the value, and t to say yes, it is a compile-time constant. It can't do this for things defined with defconstant, and it may miss other cases, but when it says something is a compile-time constant, it is. In particular it works for actual literals (hence its name), and for forms whose macroexpansion is a literal.
That is enough in practice.
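Concretely, an optimizer might use it along these lines; a hedged sketch, where element-type-form and a are illustrative names and the two-value return convention is the one described above (the value, then a flag which is true only when the form really is a compile-time constant):

```lisp
;; Hedged sketch of using LITERAL inside an optimizer.
(multiple-value-bind (element-type known?) (literal element-type-form)
  (if known?
      ;; Safe to bake the element type into a declaration.
      `(declare (type (simple-array ,element-type *) a))
      ;; Otherwise fall back to the general (slower) iteration path.
      nil))
```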
[1] Štar's iterator optimizers are not compiler macros, because the code they write is inserted in various places in the iteration construct, but they're doing a similar job: turning a construct involving many function calls into one requiring fewer or no function calls. ↩

[2] And you may ask yourself, "How do I work this?" / And you may ask yourself, "Where is that large automobile?" / And you may tell yourself, "This is not my beautiful house" / And you may tell yourself, "This is not my beautiful wife" ↩

[3] Here's something that started as a mail message which tries to explain this in some more detail. In the case of variables, defconstant is required to tell constantp that a variable is a constant at compile-time, but is not required (and should not be required) to evaluate the initform, let alone actually establish a binding at that time. In SBCL it does both (SBCL doesn't really have a compilation environment). In LW, say, it at least does not establish a binding, because LW does have a compilation environment. That means that in LW compiling a file has fewer compile-time side-effects than it does in SBCL. Outside of variables, it's easily possible that a compiler might be smart enough to know that, given (defun c (n) (+ n 15)), then (constantp '(c 1) <compilation environment>) is true. But you can't evaluate (c 1) at compile-time at all. constantp tells you that you don't need to bind variables to prevent multiple evaluation; it doesn't, and can't, tell you what their values will be. ↩

[4] Part of the org.tfeb.star/utilities package. ↩
04 Dec 2025 4:23pm GMT
01 Dec 2025
Planet Lisp
Joe Marshall: Advent of Code 2025
The Advent of Code will begin in a couple of hours. I've prepared a Common Lisp project to hold the code. You can clone it from https://github.com/jrm-code-project/Advent2025.git. It contains an .asd file for the system, a package.lisp file to define the package structure, 12 subdirectories (one for each day's challenge; there are only 12 problems in this year's calendar), and a file each for common macros and common functions.
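For reference, a minimal sketch of what a system definition for such a layout might look like (the component names are guesses, not the actual contents of the repository):

```lisp
;; Hypothetical sketch; the real file and system names may differ.
(asdf:defsystem "advent2025"
  :description "Advent of Code 2025 solutions."
  :components ((:file "package")
               (:file "macros" :depends-on ("package"))
               (:file "functions" :depends-on ("package" "macros"))))
```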
As per the Advent of Code rules, I won't use AI tools to solve the puzzles or write the code. However, since AI is part of my normal workflow these days, I may use it for enhanced web search or for autocompletion.
As per the Advent of Code rules, I won't include the puzzle text or the puzzle input data. You will need to get those from the Advent of Code website (https://adventofcode.com/2025).
01 Dec 2025 12:42am GMT
15 Nov 2025
FOSDEM 2026
FOSDEM 2026 Accepted Stands
With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026!
- ASF Community
- BSD + FreeBSD Project
- Checkmk
- CiviCRM
- Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem
- Codeberg and Forgejo
- Computer networks with BIRD, KNOT and Turris
- Debian
- Delta Chat (Sunday)
- Digital Public Goods
- Dolibarr ERP CRM + Odoo Community Association (OCA)
- Dronecode Foundation + The Zephyr Project
- Eclipse Foundation
- F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite
- Fedora Project
- Firefly Zero
- Foreman
- FOSS United + fundingjson (and FLOSS/fund)
- FOSSASIA
- Framework…
15 Nov 2025 11:00pm GMT
13 Nov 2025
FOSDEM 2026
FOSDEM 2026 Main Track Deadline Reminder
Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than it usually is (16th November, that's in a couple of days!), so don't be caught out. For full details on submission information, look at the original call for participation.
13 Nov 2025 11:00pm GMT