11 May 2026

feedPlanet Grep

Frederic Descamps: Dealing with caching_sha2_password as authentication method in MariaDB Server

MariaDB Server supports different authentication methods just like MySQL. Depending on your installation, the number of available and active authentication plugins can vary. But you should always have those 3 by default: If we compare with MySQL 9.7, we can see only 2 default authentication plugins: About caching_sha2_password module In some packaging, the caching_sha2_password authentication […]

11 May 2026 3:58pm GMT

Frederic Descamps: Adding a New Data Type to MariaDB with Type_handler – Part 5

We are concluding our series related to new data types using the Type_handler framework, with some limitations that are not yet covered by the framework: It would have been handy for our MONEY datatype to have the possibility to define, for example, the currency to show. Or the format to have something like this: Unfortunately, […]

11 May 2026 3:58pm GMT

Frederic Descamps: Adding a New Data Type to MariaDB with Type_handler – Part 4

This is part 4 of a series related to extending MariaDB with a custom data type using the Type_handler framework. You can find the previous articles below: Overriding Existing Types In the previous examples, our MONEY data type inherits from DOUBLE and then we override some methods. But all the methods of every type cannot […]

11 May 2026 3:58pm GMT

feedPlanet Debian

Colin Watson: Free software activity in April 2026

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

dput-ng

Ian Jackson reported that dput-ng could lose data when using the local install method (relevant in tests of other packages, for instance) and filed an initial merge request to fix it. I improved this to isolate its tests properly, and uploaded it.

groff

I upgraded from 1.23.0 to 1.24.1. 1.24.0 and 1.24.1 were the first upstream releases since 2023, and had extensive changes; I'd had the corresponding packaging changes in the works since January, but it took me a while to get round to finishing them off. It was good to get this off my list.

OpenSSH

I released bookworm and trixie fixes for CVE-2026-3497, and issued the corresponding BSA-130 for trixie-backports.

I upgraded from 10.2p1 to 10.3p1.

parted

I upgraded from 3.6 to 3.7. 3.7 was the first upstream release since 2023, but the changes were nowhere near as extensive as groff's, so this was a fairly quick job. I also fixed the parted-doc package to ship proper API documentation.

Python packaging

New upstream versions:

I started an upstream discussion about how best to handle the pydantic and pydantic-core packages now that they share an upstream git repository.

Other bug fixes:

Rust packaging

New upstream versions:

YubiHSM packaging

I upgraded from 2.7.2 to 2.7.3.

Code reviews

11 May 2026 12:25pm GMT

Freexian Collaborators: Debusine workflow performance issues (by Colin Watson)

During March and April, we had a number of performance issues that made Debusine's core functions of running work requests and reflecting their results in workflows quite unreliable. Investigating and fixing this took up a lot of time from both the Debusine development team and Freexian's sysadmins.

The central problems involved a series of database concurrency and worker communication issues that interacted in complex ways. On bad days, this caused between 10% and 25% of processed work requests to fail unnecessarily. We communicated some of the problems to users on IRC, but not consistently since we didn't entirely understand the scope of the problems at the time.

Most of the problems are fixed now, but we had a retrospective meeting to make sure we understood what happened and that we learn from it. Here's a summary.

Data model

Debusine's workflows consist of many individual work requests. Each work request has a database row representing its state, which means that the overall state of a workflow is distributed across many rows. Changes to one work request (for example, when it is completed) can cause changes to other work requests (perhaps unblocking it so that it can be scheduled to an idle worker). Those changes may happen concurrently, and in practice often do.

Workers typically need to create artifacts containing the output of tasks: these include things like packages, build logs, and test output.

Debusine records task history so that it can make better decisions about how to schedule work requests. Since this might otherwise grow without bound, the server expires older parts of that history after a while. The same is true for many other kinds of data.

Causes

Fixes

Workflow orchestration

Figuring out what individual work requests need to be run as part of a workflow - the process we call "orchestration" - can be challenging. Unlike typical CI pipelines, these workflows often span substantial chunks of a distribution: a glibc update can involve retesting nearly everything! Nevertheless, it's not particularly helpful for it to take hours just to build the workflow graph.

Fixing this involved many classic database optimizations such as adding indexes and CTEs, but probably the most effective fix was adding a cache for lookups within each orchestrator run or work request. Profiling showed that resolving lookups was a hot spot, and the way that task data is often passed down through a workflow meant that the same lookup could be resolved hundreds or thousands of times in a large workflow.
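A per-run memoisation cache of this kind can be sketched in a few lines. The names below (`Orchestrator`, `resolve`) are illustrative rather than Debusine's real API; the point is only that each distinct lookup is resolved at most once per orchestrator run.

```python
# Hedged sketch of a per-orchestration lookup cache (hypothetical names).
class Orchestrator:
    def __init__(self, resolver):
        self._resolver = resolver   # the expensive lookup function
        self._cache = {}            # lives only for this orchestrator run
        self.calls = 0              # number of real resolutions, for illustration

    def resolve(self, lookup):
        # Resolve each distinct lookup at most once per run.
        if lookup not in self._cache:
            self.calls += 1
            self._cache[lookup] = self._resolver(lookup)
        return self._cache[lookup]

orch = Orchestrator(lambda l: f"artifact-for-{l}")
for _ in range(1000):               # the same lookup repeated across a workflow
    orch.resolve("source-package:glibc")
print(orch.calls)                   # → 1
```

Because the cache is scoped to a single run, there is no invalidation problem: the next orchestration starts fresh and sees any new state.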

Expiry

We had known for quite some time that our expiry job took very aggressive locks, effectively blocking most of the rest of the system. This stemmed from an early decision to keep the expiry logic simple by letting it follow graphs without worrying about concurrent activity, but it clearly couldn't stay that way forever.

The article "Row locks in PostgreSQL" was very helpful in figuring out the correct approach here. Since we're mainly concerned about the possibility of new foreign key references being created to artifacts we're considering for expiry, and since creating such a reference involves taking a FOR KEY SHARE lock on the referenced row, we can explicitly take FOR UPDATE locks (which conflict with FOR KEY SHARE), and then recompute the set of artifacts to expire with any locked artifacts marked to keep. This was delicate work, but it saved minutes of downtime every day.
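The shape of that recomputation can be modelled in miniature. This is a pure-Python sketch, not Debusine's actual code: `try_lock` stands in for the row-level FOR UPDATE attempt, and all the names here are hypothetical.

```python
# Illustrative two-phase expiry model (hypothetical names throughout).
def plan_expiry(candidates, referenced, try_lock):
    """Return the artifacts that are safe to delete.

    candidates: ids old enough to be considered for expiry
    referenced: ids that still have foreign-key references
    try_lock:   returns False when a concurrent transaction holds the row,
                e.g. because it is adding a new reference (FOR KEY SHARE
                conflicts with the FOR UPDATE lock we try to take)
    """
    locked = {a for a in candidates if try_lock(a)}
    # Anything we could not lock is marked to keep, alongside anything
    # still referenced; then the expiry set is recomputed from that.
    keep = (candidates - locked) | referenced
    return candidates - keep

# Artifact 2 is still referenced; artifact 3 is held by a concurrent txn.
safe = plan_expiry({1, 2, 3, 4}, referenced={2}, try_lock=lambda a: a != 3)
print(sorted(safe))   # → [1, 4]
```

The key property is that a row we failed to lock is never expired in this pass; it simply becomes a candidate again on the next run.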

Whole-workflow locking

I mentioned earlier that we avoided some deadlock issues by taking locks on entire workflows. To ensure that these locks are effective even against code that isn't specifically aware of them, this is implemented by using SELECT FOR UPDATE on all the work request rows in the workflow. In some cases the search for which rows to lock itself tripped up the PostgreSQL planner.

Scheduling

We run multiple Celery workers for various purposes. Some of them can do many things in parallel, but in some specific cases (notably the task scheduler) we only ever want a single instance to run at once. Unfortunately a bug in the systemd service meant that the scheduler often ran concurrently anyway! Once we fixed that, the scheduler logs became a lot less confusing.
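One common defence-in-depth pattern for "only one instance, ever" (not necessarily how Debusine fixed its systemd unit, which was a service-configuration bug) is a non-blocking advisory lock on a well-known file, so that a second instance notices the first and exits immediately. The lock path here is hypothetical.

```python
# Sketch: single-instance guard via an advisory file lock (Unix only).
import fcntl
import os
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "scheduler.lock")  # hypothetical

def try_acquire(path=LOCK_PATH):
    """Return a locked fd, or None if another instance holds the lock."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd                    # hold this fd for the process's lifetime
    except BlockingIOError:
        os.close(fd)
        return None

first = try_acquire()
# A second open() is an independent lock holder, so this conflicts even
# within the same process - handy for demonstrating the guard:
second = try_acquire()
print(first is not None, second)
```

The fd must stay open for as long as the process runs; the kernel releases the lock automatically if the process dies, so a crashed scheduler never wedges its successor.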

When Debusine was small, it was reasonable for it to perform scheduling very aggressively, typically as soon as any change occurred to a work request or a worker that might possibly influence it. This doesn't scale very well, though, and even though we tried to batch multiple scheduling triggers that occurred within a single transaction, it could still make debugging very confusing. We reduced the number of changes that would result in immediate scheduling, and deferred everything else to a regular "tick".
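The trigger-batching idea can be sketched as follows. The event names are made up for illustration; the structure is simply that a short list of event kinds schedules immediately, while everything else coalesces into a flag that the next periodic tick consumes.

```python
# Hedged sketch of deferred scheduling (hypothetical event names).
IMMEDIATE = {"worker-connected", "work-request-unblocked"}

class Scheduler:
    def __init__(self):
        self.pending = False
        self.runs = 0

    def notify(self, event):
        if event in IMMEDIATE:
            self.run()              # a few events still schedule right away
        else:
            self.pending = True     # everything else waits for the tick

    def tick(self):                 # called from a periodic timer
        if self.pending:
            self.pending = False
            self.run()

    def run(self):
        self.runs += 1              # stand-in for a real scheduling pass

s = Scheduler()
for _ in range(100):
    s.notify("work-request-completed")   # 100 deferred events coalesce...
s.tick()                                 # ...into a single scheduling pass
s.notify("worker-connected")             # immediate events still run at once
print(s.runs)                            # → 2
```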

The scheduler may not be able to assign a work request to an idle worker because its workflow is locked. That isn't a major problem in itself; it can just try again later. However, in very large workflows, we found that it often worked its way down all the pending work requests one by one, finding that each of them was locked, which was slow and also produced a huge amount of log noise. It now assumes that if one work request in a workflow is locked, it might as well skip the other work requests in the same workflow until the next scheduler run.
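The skip-the-sibling-requests behaviour amounts to remembering which workflows have already failed a lock check within one scheduling pass. A minimal sketch, with hypothetical names:

```python
# Sketch: skip the rest of a workflow once one of its requests is locked.
def assign(pending, is_locked, idle_workers):
    """pending is a list of (workflow_id, request_id) pairs."""
    skipped_workflows = set()
    assigned = []
    for workflow, request in pending:
        if workflow in skipped_workflows:
            continue                         # no point retrying siblings now
        if not idle_workers:
            break                            # nothing left to assign to
        if is_locked(workflow):
            skipped_workflows.add(workflow)  # one check, not one per request
            continue
        assigned.append((request, idle_workers.pop()))
    return assigned

pending = [("wf1", i) for i in range(5)] + [("wf2", 9)]
out = assign(pending, lambda wf: wf == "wf1", ["w1", "w2"])
print(out)   # wf1 is checked once and skipped; wf2's request gets a worker
```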

Between them, these changes reduced the number of locks typically being held on debusine.debian.net by about 80%:

Lock graph

Worker refactoring

The Debusine worker has always been partially asynchronous, but while it was actually executing a task - in other words, most of the time, at least in busy periods - it didn't respond to inbound websocket messages, causing spurious disconnections. We restructured the whole worker to be fully event-based.
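The essence of that restructuring - staying responsive to inbound messages while a task executes - is what an event loop gives you for free. This toy asyncio sketch (not Debusine's worker code; the message handling is invented) shows a "task" running concurrently with a message handler that keeps acknowledging pings:

```python
# Sketch: a fully event-based worker stays responsive mid-task.
import asyncio

async def run_task():
    await asyncio.sleep(0.05)          # stand-in for executing a long task
    return "done"

async def handle_messages(queue, replies):
    while True:
        msg = await queue.get()
        if msg is None:                # sentinel: shut the handler down
            return
        replies.append(f"ack:{msg}")   # answered even while the task runs

async def main():
    queue = asyncio.Queue()
    replies = []
    handler = asyncio.create_task(handle_messages(queue, replies))
    task = asyncio.create_task(run_task())
    await queue.put("ping")            # arrives mid-task, handled immediately
    result = await task
    await queue.put(None)
    await handler
    return result, replies

result, replies = asyncio.run(main())
print(result, replies)                 # → done ['ack:ping']
```

In a worker that blocks while executing, the "ping" would sit unanswered until the task finished, which is exactly what caused the spurious websocket disconnections.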

We also had to put quite a bit of effort into improving the path by which workers report work request completion, because if that hits a timeout then it can mean throwing away hours of work. We have some further improvements in mind, but for now we defer most of this work to a Celery task so that whole-workflow locks aren't on the critical path.

Database write volume

One of our sysadmins observed that our database write volume was consistently very high. This was a puzzle, but for a long time we left it unexplored. Eventually we thought to ask PostgreSQL's own statistics, and we found a surprise:

debusine=> SELECT relname AS table_name,
debusine->        n_tup_ins AS inserts,
debusine->        n_tup_upd AS updates,
debusine->        n_tup_del AS deletes,
debusine->        (n_tup_ins + n_tup_upd + n_tup_del) AS total_dml
debusine-> FROM pg_stat_user_tables
debusine-> WHERE (n_tup_ins + n_tup_upd + n_tup_del) > 0
debusine-> ORDER BY total_dml DESC
debusine-> LIMIT 20;
              table_name              | inserts |  updates   | deletes | total_dml
--------------------------------------+---------+------------+---------+------------
 db_collectionitem                    | 1418251 | 3578202388 | 3630143 | 3583250782
 db_token                             |   15143 |   11212106 |   11389 |   11238638
 db_workrequest                       |  386196 |    6399071 | 1820500 |    8605767
 db_fileinartifact                    | 2783021 |    1837929 | 1663887 |    6284837
 django_celery_results_taskresult     | 1819301 |    1501623 | 1791656 |    5112580
 db_artifact                          |  960077 |    3340859 |  663890 |    4964826
 db_collectionitemmatchconstraint     | 1550457 |          0 | 2207486 |    3757943
 db_artifactrelation                  | 2229382 |          0 | 1363825 |    3593207
 db_fileupload                        | 1023400 |    1057036 | 1023346 |    3103782
 db_file                              | 1673194 |          0 |  970252 |    2643446
 db_fileinstore                       | 1411995 |          0 |  970259 |    2382254
 db_filestore                         |       0 |    2381578 |       0 |    2381578
 django_session                       |  645423 |    1519880 |     531 |    2165834
 db_workrequest_dependencies          |  365877 |          0 |  936537 |    1302414
 db_worker                            |   18317 |     949280 |    9487 |     977084
 db_collection                        |   10061 |         85 |  177741 |     187887
 db_workerpooltaskexecutionstatistics |   28721 |          0 |       0 |      28721
 db_workerpoolstatistics              |    1640 |          0 |       0 |       1640
 db_workflowtemplate                  |     130 |        158 |     649 |        937
 db_identity                          |      76 |        661 |       0 |        737
(20 rows)

Oh my - that's a lot of db_collectionitem updates, surely out of proportion with what we really need. Can we narrow that down by asking about the most recently-updated tuples?

debusine=> SELECT DISTINCT category
debusine-> FROM db_collectionitem
debusine-> WHERE id IN (
debusine->     SELECT id FROM db_collectionitem
debusine->     ORDER BY xmin::text::integer DESC LIMIT 10000
debusine-> );
           category
------------------------------
 debusine:historical-task-run
(1 row)

That might not be absolutely reliable, but it was certainly a hint. As per PostgreSQL's documentation, by default UPDATE writes a new row version for every matching row regardless of whether the data has actually changed, and our code to expire old task history entries was issuing exactly such no-op updates. Once we knew where to look, it was easy to add some extra constraints.
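The fix boils down to excluding rows that already hold the target value - in raw SQL something like `WHERE data IS DISTINCT FROM %s`, or in a Django queryset an extra `.exclude(data=new_value)` before the update. A pure-Python model of the filtering step, with illustrative data:

```python
# Sketch: avoid no-op UPDATEs by only touching rows whose value would
# actually change; PostgreSQL would otherwise write a new tuple version
# for every matched row, changed or not.
def rows_to_update(rows, new_value):
    return [r for r in rows if r["data"] != new_value]

rows = [
    {"id": 1, "data": "old"},
    {"id": 2, "data": "new"},   # already correct: skip it
    {"id": 3, "data": "old"},
]
print([r["id"] for r in rows_to_update(rows, "new")])   # → [1, 3]
```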

This reduced our mean write volume on debusine.debian.net from about 23 MB/s to about 3 MB/s, which had an immediate knock-on effect on our request failure rate:

Disk write graph

HTTP errors

Current state

Our metrics indicate that things are a lot better now. We still have a few things to deal with, such as:

I hope this has been an interesting tour through the sorts of things that can go wrong in this kind of distributed system!

11 May 2026 12:00am GMT

10 May 2026

feedPlanet Debian

Steinar H. Gunderson: MySQL hypergraph optimizer

MySQL released (well, flipped the default compilation flag for) the hypergraph join optimizer in the community builds; this was the main project I started and worked on while I was there, so it's nice to see even though it's been default in e.g. their cloud column store for a long time. You can read their blog post (though beware, likely-LLM text ahead).

(The cost model improvements and TPC-DS benchmarking are from after my time.)

10 May 2026 8:30am GMT

05 May 2026

feedPlanet Lisp

ECL News: ECL 26.5.5 release

We are announcing a bugfix ECL release that addresses a few issues that had slipped through testing of the recent release.

Addressed issues:

This release is available for download in the form of a source code archive (we do not ship prebuilt binaries):

Happy Hacking,
The ECL Developers

05 May 2026 12:00pm GMT

Gábor Melis: DRef Leaves Home

Version 0.5 of DRef, the definition reifier, is now available. It has moved to its own repository, completing its separation from PAX, where it was originally developed.

This was a long time coming. Twelve years ago today, PAX was born. From the start, PAX used the concept of locatives to refer to definitions without first-class objects. For example, to generate documentation for the *MY-VAR* variable, one could use the VARIABLE locative as in (*MY-VAR* VARIABLE). PAX needed to be able to tell whether such a definition exists, as well as access its docstring and source location.

Over time, this mechanism evolved into a portable, extensible introspection library independent of PAX. I began separating the two projects two years ago and named the new library, though they continued to share a repository. I have now removed the remaining dependencies so that DRef can live on its own.

05 May 2026 12:00am GMT

01 May 2026

feedPlanet Lisp

Joe Marshall: Echoes of the Lisp Listener

The Lisp Machine Listener had an electric close parenthesis. When the user typed a close parenthesis that completed the form at top level, the form would be sent to the REPL right away, with no need to press Enter. Here's how to get this behavior with SLY:

(defun my-sly-mrepl-electric-close-paren ()
  "Insert ')' and auto-send ONLY if we are closing a top-level Lisp form."
  (interactive)
  (let ((state (syntax-ppss)))
    (insert ")")
    ;; Safety checks:
    ;; 1. We were at depth 1 (so we are now at depth 0)
    ;; 2. We aren't in a string or comment
    ;; 3. The input actually starts with a paren (it's a form, not a sentence)
    (when (and (= (car state) 1)
               (not (nth 3 state))
               (not (nth 4 state))
               (string-match-p "^\\s-*(" 
                               (buffer-substring-no-properties (sly-mrepl--mark) (point))))
      (sly-mrepl-return))))

Another cool hack is to get the REPL to do double duty as a command line for an LLM chatbot. When you type RET in the REPL, it will check whether the input looks like a Lisp form. If so, it sends the form to the REPL as normal. If not, it sends the input to the chatbot. Here's how to do this:

(defun my-sly-mrepl-electric-return ()
  "Send to Lisp if it's a form/symbol, or wrap in (chat ...) if it's a sentence."
  (interactive)
  (let* ((beg (marker-position (sly-mrepl--mark)))
         (end (point-max))
         (input (buffer-substring-no-properties beg end))
         (trimmed (string-trim input)))
    (cond
     ;; If it's empty, just do a normal return
     ((string-blank-p trimmed)
      (sly-mrepl-return))
     
     ;; If it starts with a paren, quote, or hash, it's definitely a Lisp form
     ((string-match-p "^\\s-*[(#'\"]" trimmed)
      (sly-mrepl-return))
     
     ;; If it's a single word (no spaces), treat it as a symbol/form (e.g., *package*)
     ((not (string-match-p "\\s-" trimmed))
      (sly-mrepl-return))
     
     ;; Otherwise, it's a sentence. Wrap it and fire.
     (t
      (delete-region beg end)
      (insert (format "(chat %S)" trimmed))
      (sly-mrepl-return)))))

Install as follows:

;; Apply to SLY MREPL with a safety check for the mode map
(with-eval-after-load 'sly-mrepl
  (define-key sly-mrepl-mode-map (kbd "RET") 'my-sly-mrepl-electric-return)
  (define-key sly-mrepl-mode-map (kbd ")") 'my-sly-mrepl-electric-close-paren))

01 May 2026 5:29pm GMT

25 Apr 2026

feedFOSDEM 2026

All FOSDEM 2026 videos are online

All video recordings from FOSDEM 2026 that are worth publishing have been processed and released. Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026. While all released videos have been reviewed by a human, it remains possible that one or more issues fell through the cracks. If you notice any problem with a video you care about, please let us know as soon as possible so we can look into it before the video-processing infrastructure is shut down for this edition. To report any […]

25 Apr 2026 10:00pm GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so […]

29 Jan 2026 11:00pm GMT

26 Jan 2026

feedFOSDEM 2026

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick? […]

26 Jan 2026 11:00pm GMT