17 Dec 2025

Planet Grep

Frederic Descamps: Deploying on OCI with the starter kit – part 5 (connecting to the database II)

In part 4 of our series on the OCI Hackathon Starter Kit, we saw how to connect to the deployed MySQL HeatWave instance from our clients (MySQL Shell, MySQL Shell for VS Code, and Cloud Shell). In this post, we will see how to connect from an application using a connector. We will cover connections […]

17 Dec 2025 2:36am GMT

Dries Buytaert: I open-sourced my blog content

Last week I wrote that a blog is a biography. But sometimes our most advanced technology is also our most fragile. With my blog turning twenty years old in fifteen days, I have been thinking a lot about digital preservation.

The question I keep coming back to is simple: how do you preserve a website for hundreds of years?

I don't have the answer yet, but it's something I plan to slowly work on over the next 10 years. What I'm describing here is a first step.

Humans have been trying to preserve their words since we learned to write. Medieval monks hand-copied manuscripts that survived centuries. Clay tablets from ancient Mesopotamia still tell us about daily life from 5,000 years ago. They worked because they asked very little of the future. A clay tablet basically just sits there.

In contrast, websites require continuous maintenance and recurring payments. Miss either, and they quietly disappear. That makes it hard for websites to survive for hundreds of years.

Traditional backups may help content survive, but they only work if someone knows they exist and what to do with them. Not a safe bet over hundreds of years.

So I am trying something different. I exported my blog as Markdown files and put them on GitHub. Nearly twenty years of posts are now in a public repository at github.com/dbuytaert/website-content.

I'm essentially making two bets. First, GitHub does not need me to keep paying bills or renewing domains. Second, a public Git repository can be cloned. Each clone becomes an independent copy that does not depend on me.

If you use a static site generator like Jekyll or Hugo, you are probably thinking: "Welcome to 2010!". Fair enough. You have been storing content as Markdown in Git since before my kids could walk. The difference is that most people keep their Git repositories private. I am making mine public.

To be clear, my site still runs on Drupal, and that is not changing. No need to panic. I just made my Drupal site export its content as Markdown.

For the past two weeks, my site has been auto-committing to GitHub daily. Admittedly, it feels a bit strange to share everything like this. New blog posts show up automatically, but so does everything else: tag maintenance, even deleted posts I decided were not worth keeping.

My blog has a publish button, an edit button, and a delete button. In my view, they are all equally legitimate. Now you can see me use all three. Git hides nothing.

Exporting my content to GitHub is my first bet, not my last. My plan is to build toward something like a RAID for web content, spreading copies across multiple systems. I will explain what I mean tomorrow, and share how I set this up technically.

17 Dec 2025 2:36am GMT

Dries Buytaert: A RAID for web content

If you've worked with storage systems, you know RAID: redundant arrays of independent disks. RAID doesn't try to make individual disks more reliable. It accepts that disks fail and designs around it.

I recently open-sourced my blog content by automatically exporting it as Markdown to GitHub. GitHub might outlive me, but it probably won't be around in 100 years either. No one really knows.

That raises a simple question: where should content live if you want it to last decades, maybe centuries?

I don't have the answer, but I know it matters well beyond my blog. We are the first generation to create digital content, and we are not very good at preserving what we create. Studies of link rot consistently show that large portions of the web disappear within just a few years.

Every time you publish something online, you're trusting a stack: the file format, the storage medium, the content management system, the organization running the service, the economic model keeping them running. When any layer fails, your content is gone.

So my plan is to slowly build a "digital preservation RAID" across several platforms: GitHub, the Internet Archive, IPFS, and blockchain-based storage like Filecoin or Arweave. If one disappears, the others might remain.

Each option has different trade-offs and failure modes. GitHub has corporate risk because Microsoft owns it, and one day their priorities might change. The Internet Archive depends on non-profit funding and has faced costly legal battles. IPFS requires someone to actively "pin" your content - if no one cares enough to host it, it disappears. Blockchain-based solutions let you pay once for permanent storage, but the economic models are unproven and I'm not a fan of their climate impact.

If I had to bet on a single option, it would be the Internet Archive. They've been doing some pretty heroic work the past 25 years. GitHub feels durable but archiving old blog posts will never be Microsoft's priority. IPFS, Filecoin, and Arweave are fascinating technical experiments, but I wouldn't rely on them alone.

But the point is not to pick a winner. It is to accept failure as inevitable and design around it, and to keep doing that as the world changes and better preservation tools emerge.

What we lose isn't just data; it is the ability to learn from what came before. That feels like a responsibility worth exploring.

17 Dec 2025 2:36am GMT

16 Dec 2025

Planet Debian

Christian Kastner: Simple-PPA, a minimalistic PPA implementation

Today, the Debusine developers launched Debusine repositories, a beta implementation of PPAs. In the announcement, Colin remarks that "[d]iscussions about this have been happening for long enough that people started referring to PPAs for Debian as 'bikesheds'"; a characterization that I'm sure most will agree with.

So it is with great amusement that on this same day, I launch a second PPA implementation for Debian: Simple-PPA.

Simple-PPA was never meant to compete with Debusine, though. In fact, it's entirely the opposite: from discussions at DebConf, I knew that it was only a matter of time until Debusine gained a PPA-like feature, but I needed a stop-gap solution earlier. With some polish, the Python script that was already doing APT processing for apt.ai.debian.net recently became Simple-PPA.

Consequently, Simple-PPA lacks (and will always lack) all of the features that Debusine offers: there is no auto-building, no CI, nor any other type of QA. It's the simplest possible type of APT repository: you just upload packages, they get imported into an archive, and the archive is exposed via a web server. Under the hood, reprepro does all the heavy lifting.

However, this also means it's trivial to set up. The following is the entire configuration that simple-ppa.debian.net started with:

# simple-ppa.conf

[CORE]
SignWith = 2906D748B7551BC8
ExportDir = /srv/www/simple-ppa
MailFrom: Simple-PPA <admin@simple-ppa.debian.net>
Codenames = sid forky trixie trixie-backports bookworm bookworm-backports
AlsoAllow = forky: unstable
            trixie: unstable
            bookworm: unstable

[simple-ppa-dev]
Label = Simple-PPA's self-hosted development repository
# ckk's key
Uploaders = allow * by key E76004C5CEF0C94C+

[ckk]
Label = Christian Kastner at Simple-PPA
Uploaders = allow * by key E76004C5CEF0C94C+

The CORE section just sets some defaults and sensible rules. Two PPAs are defined, simple-ppa-dev and ckk, which accept packages signed by the key with the ID E76004C5CEF0C94C. These PPAs use the global defaults, but individual PPAs can override Architectures, Suites, and Components, and of course allow an arbitrary number of users.

Users upload to this archive using SFTP (e.g. with dput-ng). Every 15 minutes, uploads get processed, with ACCEPTED or REJECTED mails sent to the Maintainer address. The APT archive of all PPAs is signed with a single global key.
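
For illustration only, and assuming dput-ng's support for the classic dput.cf format and its sftp method, a client-side stanza for such a host might look roughly like the sketch below. The login and incoming directory here are placeholders, not Simple-PPA's actual values; follow the instructions on the site for the real settings.

# ~/.dput.cf (sketch; placeholder values)

[simple-ppa]
fqdn = simple-ppa.debian.net
method = sftp
login = your-username
incoming = /path/to/incoming
allow_unsigned_uploads = 0

With something like that in place, an upload would just be: dput simple-ppa mypackage_1.0-1_source.changes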

I myself intend to use Debusine repositories soon, as the autobuilding and the QA tasks Debusine offers are something I need. However, I do still see a niche use case for Simple-PPA: when you need an APT archive, but don't want to do a deep dive into reprepro (which is extremely powerful).

If you'd like to give Simple-PPA a try, head over to simple-ppa.debian.net and follow the instructions for users.

16 Dec 2025 9:15pm GMT

Steinar H. Gunderson: Lichess

I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight) - well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try any AI grift.

Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

16 Dec 2025 6:45pm GMT

Freexian Collaborators: Monthly report about Debian Long Term Support, November 2025 (by Santiago Ruano Rincón)

The Debian LTS Team, funded by Freexian's Debian LTS offering (https://www.freexian.com/lts/debian/), is pleased to report its activities for November.

Activity summary

During the month of November, 18 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 33 DLAs fixing 219 CVEs.

The LTS Team kept up its usual cadence of preparing security updates for Debian 11 "bullseye", but also contributed updates for Debian 12 "bookworm", Debian 13 "trixie" and even Debian unstable. As in previous months, we are pleased to say that there have been multiple contributions of LTS uploads by Debian Fellows outside the regular LTS Team.

Notable security updates:

Contributions from fellows outside the LTS Team:

Other than the regular LTS updates for bullseye, the LTS Team has also contributed updates to the latest Debian releases:

Beyond security updates, there has been a significant effort in revamping our documentation, aiming to make the processes more clear and consistent for all the members of the team. This work was mainly carried out by Sylvain, Jochen and Roberto.

We would like to express our gratitude to the sponsors for making the Debian LTS project possible. Also, special thanks to the fellows outside the LTS team for their valuable help.

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

16 Dec 2025 12:00am GMT

11 Dec 2025

Planet Lisp

Scott L. Burson: FSet v2.1.0 released: Seq improvements

I have just released FSet v2.1.0 (also on GitHub).

This release is mostly to add some performance and functionality improvements for seqs. Briefly:

See the above links for the full release notes.

UPDATE: there's already a v2.1.1; I had forgotten to export the new function char-seq?.

11 Dec 2025 4:01am GMT

09 Dec 2025

FOSDEM 2026

/dev/random and lightning talks

The room formerly known as "Lightning Talks" is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things! /dev/random: 15-minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks. New Lightning Talks: a highly condensed batch of 5-minute quick talks in the main auditorium on various FOSS-related subjects! Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…

09 Dec 2025 11:00pm GMT

04 Dec 2025

Planet Lisp

Tim Bradshaw: Literals and constants in Common Lisp

Or, constantp is not enough.

Because I do a lot of things with Štar, and for other reasons, I spend a fair amount of time writing various compile-time optimizers for things which have the semantics of function calls. You can think of iterator optimizers in Štar as being a bit like compiler macros: the aim is to take a function call form and to turn it, in good cases, into something quicker [1]. One important way of doing this is to be able to detect things which are known at compile-time: constants and literals, for instance.

One of the things this has made clear to me is that, like John Peel, constantp is not enough. Here's an example.

(in-row-major-array a :simple t :element-type 'fixnum) is a function call whose values Štar can use to tell it how to iterate (via row-major-aref) over an array. When used in a for form, its optimizer would like to be able to expand into something involving (declare (type (simple-array fixnum *) ...)), so that the details of the array are known to the compiler, which can then generate fast code for row-major-aref. This makes a great deal of difference to performance: array access to simple arrays of known element types is usually much faster than to general arrays.

In order to do this it needs to know two things: that arguments such as the element type are compile-time constants, and what their values actually are.

You might say, well, that's what constantp is for [2]. It's not: constantp tells you only the first of these, and you need both.

Consider this code, in a file to be compiled:

(defconstant et 'fixnum)

(defun ... ...
  (for ((e (in-array a :element-type et)))
    ...)
  ...)

Now, constantp will tell you that et is indeed a compile-time constant. But it won't tell you its value, and in particular nothing says it needs to be bound at compile-time at all: (symbol-value 'et) may well be an error at compile-time.

constantp is not enough [3]! Instead you need a function that tells you 'yes, this thing is a compile-time constant, and its value is …'. This is what literal does [4]: it conservatively answers the question, and tells you the value if so. For example, an expression like (literal '(quote fixnum)) will return fixnum, the value, and t to say yes, it is a compile-time constant. It can't do this for things defined with defconstant, and it may miss other cases, but when it says something is a compile-time constant, it is. In particular it works for actual literals (hence its name), and for forms whose macroexpansion is a literal.

That is enough in practice.
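
To make that two-values contract concrete, here is a minimal sketch of the idea. It is not the actual literal from org.tfeb.star/utilities (which, among other things, also looks at macroexpansions); it is just a conservative check that recognises self-evaluating objects and quote forms, and gives up on everything else, including names defined with defconstant:

;; A conservative sketch (not the real LITERAL): return the value of FORM
;; and T if FORM is certainly a compile-time constant whose value we know;
;; otherwise return NIL NIL.
(defun literal-value (form)
  (typecase form
    ;; self-evaluating objects
    ((or number character string keyword (member nil t))
     (values form t))
    ;; (quote x) evaluates to x
    (cons
     (if (and (eq (first form) 'quote)
              (consp (rest form))
              (null (cddr form)))
         (values (second form) t)
         (values nil nil)))
    ;; plain symbols (even defconstant names) and anything else: give up
    (t (values nil nil))))

(literal-value ''fixnum)   ; => FIXNUM, T
(literal-value 'et)        ; => NIL, NIL (even though ET names a constant)

The real thing is smarter than this, but the shape of the answer, a value plus a certainty flag, is the point.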


  1. Štar's iterator optimizers are not compiler macros, because the code they write is inserted in various places in the iteration construct, but they're doing a similar job: turning a construct involving many function calls into one requiring fewer or no function calls.

  2. And you may ask yourself, "How do I work this?" / And you may ask yourself, "Where is that large automobile?" / And you may tell yourself, "This is not my beautiful house" / And you may tell yourself, "This is not my beautiful wife"

  3. Here's something that started as a mail message which tries to explain this in some more detail. In the case of variables, defconstant is required to tell constantp that a variable is a constant at compile-time but is not required (and should not be required) to evaluate the initform, let alone actually establish a binding at that time. In SBCL it does both (SBCL doesn't really have a compilation environment). In LW, say, it at least does not establish a binding, because LW does have a compilation environment. That means that in LW compiling a file has fewer compile-time side-effects than it does in SBCL. Outside of variables, it's easily possible that a compiler might be smart enough to know that, given (defun c (n) (+ n 15)), then (constantp '(c 1) <compilation environment>) is true. But you can't evaluate (c 1) at compile-time at all. constantp tells you that you don't need to bind variables to prevent multiple evaluation; it doesn't, and can't, tell you what their values will be.

  4. Part of the org.tfeb.star/utilities package.

04 Dec 2025 4:23pm GMT

01 Dec 2025

Planet Lisp

Joe Marshall: Advent of Code 2025

The Advent of Code will begin in a couple of hours. I've prepared a Common Lisp project to hold the code. You can clone it from https://github.com/jrm-code-project/Advent2025.git. It contains an .asd file for the system, a package.lisp file to define the package structure, 12 subdirectories, one for each day's challenge (only 12 problems in this year's calendar), and a file each for common macros and common functions.
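
For readers who have not used ASDF, a system definition along those lines might look roughly like the sketch below; the file and module names here are illustrative only, not the actual contents of the Advent2025 repository:

;;;; advent2025.asd - an illustrative sketch, not the repository's real file
(asdf:defsystem "advent2025"
  :description "Advent of Code 2025 solutions"
  :serial t
  :components ((:file "package")    ; defpackage forms
               (:file "macros")     ; common macros
               (:file "functions")  ; common functions
               (:module "day01"     ; one module per day's puzzle
                :components ((:file "solution")))))

Loading it is then just (asdf:load-system "advent2025") from a REPL that can find the project in its source registry.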

As per the Advent of Code rules, I won't use AI tools to solve the puzzles or write the code. However, since AI is now part of my normal workflow, I may use it for enhanced web search or for autocompletion.

As per the Advent of Code rules, I won't include the puzzle text or the puzzle input data. You will need to get those from the Advent of Code website (https://adventofcode.com/2025).

01 Dec 2025 12:42am GMT

15 Nov 2025

FOSDEM 2026

FOSDEM 2026 Accepted Stands

With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibar ERP CRM + Odoo Community Association (OCA) Dronecode Foundation + The Zephyr Project Eclipse Foundation F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework…

15 Nov 2025 11:00pm GMT

13 Nov 2025

FOSDEM 2026

FOSDEM 2026 Main Track Deadline Reminder

Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than it usually is (16th November, that's in a couple of days!), so don't be caught out. For full details on submission information, look at the original call for participation.

13 Nov 2025 11:00pm GMT