10 Apr 2026

Planet GNOME

Thibault Martin: TIL that Kubernetes can give you a shell into a crashing container

A container can crash for many reasons. Sometimes the logs won't tell you much about why, and you can't get a shell into the container to investigate because... it has already crashed. It turns out that kubectl debug lets you do exactly that.

I was trying to ship Helfertool on our Kubernetes cluster. The first step was to get it to work locally in my Minikube. The container I was deploying kept crashing, with an error message that put me on the right track: Cannot write to log directory. Exiting.

The container expected me to mount a volume on /log so it could write logs, which I did. I wanted to run a quick test from within the container to see if I could create a file in that directory. But when your container has already crashed you can't get a shell into it.

My better-informed colleague Quentin told me about kubectl debug, a command that lets me create a copy of the crashing container, but with a different command.

So instead of running its normal program, I can ask the container to run sh with the following command:

$ kubectl debug mypod -it \
    --copy-to=mypod-debug \
    --container=my-pods-image \
    -- sh

And just like that I have a shell inside a similar container. Using this trick I could confirm that I can't touch a file in that /log directory, because it belongs to root while my container runs unprivileged.

That's a great trick to troubleshoot from within a crashing container!

10 Apr 2026 8:00am GMT

This Week in GNOME: #244 Recognizing Hieroglyphs

Update on what happened across the GNOME project in the week from April 03 to April 10.

GNOME Core Apps and Libraries

Blueprint

A markup language for app developers to create GTK user interfaces.

James Westman reports

blueprint-compiler is now available on PyPI. You can install it with pip install blueprint-compiler.

GNOME Circle Apps and Libraries

Hieroglyphic

Find LaTeX symbols

FineFindus reports

Hieroglyphic 2.3 is out now. Thanks to the exciting work done by Bnyro, Hieroglyphic can now also recognize Typst symbols (a modern alternative to LaTeX). Hardware acceleration is now preferred when available, reducing power consumption.

Download the latest version from Flathub.

Amberol

Plays music, and nothing else.

Emmanuele Bassi says

Amberol 2026.1 is out, using the GNOME 50 runtime! This new release fixes a few issues when it comes to loading music, and has some small quality-of-life improvements in the UI, like: a more consistent visibility of the playlist panel when adding songs or searching; using the shortcuts dialog from libadwaita; and being able to open the file manager in the folder containing the current song. You can get Amberol on Flathub.

Third Party Projects

Alexander Vanhee says

A new version of Bazaar is out now. It features the ability to filter search results via a new popover and reworks the add-ons dialog to include a page that shows more information about a specific entry. If you try to open an add-on via the AppStream scheme, it will now display this page, which is useful when you want to redirect users to install an add-on from within your app.

Also, please take a look at the statistics dialog - it now features a cool gradient.

Check it out on Flathub

dabrain34 reports

GstPipelineStudio 0.5.1 is out now. It's a great pleasure to announce this new version, which can deal with DOT files directly. Check the project web page for more information, or the following blog post for more details about the release.

Anton Isaiev announces

RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)

Versions 0.10.9-0.10.14 landed with a solid round of usability, security, and performance work.

Staying connected got easier. If an SSH session drops unexpectedly, RustConn now polls the host and reconnects on its own as soon as it's back. Wake-on-LAN works the same way: send the magic packet and RustConn connects automatically once the machine boots. You can also right-click any connection to check if the host is online, and a new "Connect All" option opens every connection in a folder at once. For RDP there's a Mouse Jiggler that keeps idle sessions alive.

Terminal Activity Monitor is a new per-session feature that watches for output activity or silence, which is handy for long-running jobs. You get notifications as tab icons, toasts, and desktop alerts when the window is in the background.

Security got a lot of attention. RDP now defaults to trust-on-first-use certificate validation instead of blindly accepting everything. Credentials for Bitwarden and 1Password are no longer visible in the process list. VNC passwords are zeroized on drop. Export files are written with owner-only permissions. Dangerous custom arguments are blocked for both VNC and FreeRDP viewers.

Hoop.dev joins as the 11th Zero Trust provider. There's also a new custom SSH agent socket setting that lets Flatpak users connect through KeePassXC, Bitwarden, or GPG-based SSH agents, something the Flatpak sandbox previously made difficult.

Smoother on HiDPI and 4K. RDP frame rendering skips a 33 MB per-frame copy when the data is already in the right format. Highlight rules, search, and log sanitization patterns are compiled once instead of on every keystroke or terminal line.

GNOME HIG polish. Success notifications now use non-blocking toasts instead of modal dialogs. Sidebar context menus are native PopoverMenus with keyboard navigation and screen reader support. Translations completed for all 15 languages.

Project: https://github.com/totoshko88/RustConn
Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn

Phosh

A pure Wayland shell for mobile devices.

Guido announces

Phosh 0.54 is out:

There's now a notification when an app fails to start, the status bar can be extended via plugins, and the location quick toggle has a status page to set the maximum allowed accuracy.

On the compositor side we improved X11 support, making docked mode (aka convergence) with applications like emacs or ardour more fun to use.

The on-screen keyboard Stevia now supports Japanese and Chinese input via UIM, has a new us+workman layout, and automatic space handling can be disabled.

There's more - see the full details here.

Documentation

Emmanuele Bassi announces

The GNOME User documentation project has been ported to use Meson for its configuration, build, and installation. The User documentation contains the desktop help and the system administration guide, and gets published on the user help website, as well as being available locally through the Help browser. The switch to Meson improved build times, and moved the tests and validation into the build system. There's a whole new contribution guideline as well. If you want to help write the GNOME documentation, join us in the Docs room on Matrix!

Shell Extensions

Weather O'Clock

Display the current weather inside the pill next to the clock.

Cleo Menezes Jr. reports

Weather O'Clock 50 released with fluffier animations: smooth fades between loading, weather and offline states; instant temperature updates; first-fetch spinner; offline indicator; GNOME Shell 45-50 support; and various bug fixes.

Get it on GNOME Extensions

Follow development

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

10 Apr 2026 12:00am GMT

Jakub Steiner: Moving to Zola

Zola

I've finally gotten around to porting this blog over to Zola. I've been running on Jekyll for years now, after originally conceiving this blog in Middleman (and PHP initially). But time catches up with everything, and the friction of maintaining Ruby dependencies eventually got to me.

The Speed

I can't stress this enough - Zola is fast. Not "for a static site generator" fast. Just fast. My old Jekyll setup needed a good few seconds to rebuild after a change. Zola builds in milliseconds. The entire site rebuilds almost before I can release the key. It's not critical for a site that gets updated 5 times a year, but it's still impressive.

No Dependencies

This is the big one. Every time you leave a project alone for a few months and come back, you know it's not just going to magically work. The gem versions drift, Bundler gets confused, and suddenly you're down a rabbit hole of version conflicts. The only reason all our Jekyll projects were reasonably easy to work with was locking onto Ruby 3.1.2 using rvm. But at some point the layers of backwardism catch up with you.

Zola is a single binary. That's it. No bundle install, no Gemfile, no "works on my machine" prayers. Download, run, done. It even embeds everything - syntax highlighting, image processing, Sass compilation (if you haven't embraced the modern CSS light yet) - all built-in. The site builds the same on any machine with zero setup.

The Heritage

Zola started life as Gutenberg in 2015/2016, a learning project for Rust by Vincent Prouillet. He was using Hugo before, but hated the Go template engine. That spawned Tera, the Jinja2-inspired template engine that Zola uses.

The project got renamed to Zola in 2018 when the name conflicts with Project Gutenberg got too annoying. It's pure Rust, which means it's fast, memory-safe, and ships as a tiny static binary.

Asset Colocation

One thing I've always focused on for this blog, architecture-wise, is the structure - images and media live right alongside the post, not stuffed into some shared /images/ folder somewhere like most Jekyll sites seem to do. Zola calls this "asset colocation," and it's a first-class feature. No plugins needed. Just put your images in the same folder as your index.md, reference them directly, and Zola handles the rest.

This is how I'd already been running things with Jekyll, so the port was refreshingly painless on that front.

The Templating

The main work was porting the templates. It was the main showstopper when Bilal suggested Zola a couple of years ago. I was hoping for something Liquid-based to pop up, but it seems like people running their own blogs is not a TikTok trend. Zola uses Tera instead of Liquid. The syntax is similar enough to get by, but there are enough branches in your path to stumble on. The error messages actually make sense, though, and point you at the problem, which is a refreshing change from debugging broken Liquid includes.

The Improvements

Beyond speed, I've been cleaning up things the old theme dragged along:

The site's cleaner now, light by default, faster to build, and I don't need to invoke Ruby just to write a blog post. The experience was so damn good, it motivated me to jump at a much larger project I'm hopefully going to post about next.

Previously.

10 Apr 2026 12:00am GMT

09 Apr 2026


Michael Meeks: 2026-04-09 Thursday

09 Apr 2026 9:00pm GMT

Andy Wingo: wastrel milestone: full hoot support, with generational gc as a treat

Hear ye, hear ye: Wastrel and Hoot means REPL!

Which is to say, Wastrel can now make native binaries out of WebAssembly files as produced by the Hoot Scheme toolchain, up to and including a full read-eval-print loop. Like the REPL on the Hoot web page, but instead of requiring a browser, you can just run it on your console. Amazing stuff!

try it at home

First, we need the latest Hoot. Build it from source, then compile a simple REPL:

echo '(import (hoot repl)) (spawn-repl)' > repl.scm
./pre-inst-env hoot compile -fruntime-modules -o repl.wasm repl.scm

This takes about a minute. The resulting wasm file has a pretty full standard library including a full macro expander and evaluator.

Normally Hoot would do some aggressive tree-shaking to discard any definitions not used by the program, but with a REPL we don't know what we might need. So, we pass -fruntime-modules to instruct Hoot to record all modules and their bindings in a central registry, so they can be looked up at run-time. This results in a 6.6 MB Wasm file; with tree-shaking we would have been at 1.2 MB.

Next, build Wastrel from source, and compile our new repl.wasm:

wastrel compile -o repl repl.wasm

This takes about 5 minutes on my machine: about 3 minutes to generate all the C, about 6.6 MLOC in all, split into a couple hundred files of about 30 KLOC each, and then 2 minutes to compile with GCC and link-time optimization (parallelised over 32 cores in my case). I have some ideas to golf the first part down a bit, but the GCC side will resist improvements.

Finally, the moment of truth:

$ ./repl
Hoot 0.8.0

Enter `,help' for help.
(hoot user)> "hello, world!"
=> "hello, world!"
(hoot user)>

statics

When I first got the REPL working last week, I gasped out loud: it's alive, it's alive!!! Now that some days have passed, I am finally able to look a bit more dispassionately at where we're at.

Firstly, let's look at the compiled binary itself. By default, Wastrel passes the -g flag to GCC, which results in binaries with embedded debug information. Which is to say, my ./repl is chonky: 180 MB!! Stripped, it's "just" 33 MB. 92% of that is in the .text (code) section. I would like a smaller binary, but it's what we got for now: each byte in the Wasm file corresponds to around 5 bytes in the x86-64 instruction stream.

As for dependencies, this is a pretty minimal binary, though dynamically linked to libc:

linux-vdso.so.1 (0x00007f6c19fb0000)
libm.so.6 => /gnu/store/…-glibc-2.41/lib/libm.so.6 (0x00007f6c19eba000)
libgcc_s.so.1 => /gnu/store/…-gcc-15.2.0-lib/lib/libgcc_s.so.1 (0x00007f6c19e8d000)
libc.so.6 => /gnu/store/…-glibc-2.41/lib/libc.so.6 (0x00007f6c19c9f000)
/gnu/store/…-glibc-2.41/lib/ld-linux-x86-64.so.2 (0x00007f6c19fb2000)

Our compiled ./repl includes a garbage collector from Whippet, about which, more in a minute. For now, we just note that our use of Whippet introduces no run-time dependencies.

dynamics

Just running the REPL with WASTREL_PRINT_STATS=1 in the environment, it seems that the REPL has a peak live data size of 4MB or so, but for some reason uses 15 MB total. It takes about 17 ms to start up and then exit.

These numbers I give are consistent over a choice of particular garbage collector implementations: the default --gc=stack-conservative-parallel-generational-mmc, or the non-generational stack-conservative-parallel-mmc, or the Boehm-Demers-Weiser bdw. Benchmarking collectors is a bit gnarly because the dynamic heap growth heuristics aren't the same between the various collectors; by default, the heap grows to 15 MB or so with all collectors, but whether it chooses to collect or expand the heap in response to allocation affects startup timing. I get the above startup numbers by setting GC_OPTIONS=heap-size=15m,heap-size-policy=fixed in the environment.

Hoot implements Guile Scheme, so we can also benchmark Hoot against Guile. Given the following test program that sums the leaf values for ten thousand quad trees of height 5:

(define (quads depth)
  (if (zero? depth)
      1
      (vector (quads (- depth 1))
              (quads (- depth 1))
              (quads (- depth 1))
              (quads (- depth 1)))))
(define (sum-quad q)
  (if (vector? q)
      (+ (sum-quad (vector-ref q 0))
         (sum-quad (vector-ref q 1))
         (sum-quad (vector-ref q 2))
         (sum-quad (vector-ref q 3)))
      q))

(define (sum-of-sums n depth)
  (let lp ((n n) (sum 0))
    (if (zero? n)
        sum
        (lp (- n 1)
            (+ sum (sum-quad (quads depth)))))))


(sum-of-sums #e1e4 5)

We can cat it to our repl to see how we do:

Hoot 0.8.0

Enter `,help' for help.
(hoot user)> => 10240000
(hoot user)>
Completed 3 major collections (281 minor).
4445.267 ms total time (84.214 stopped); 4556.235 ms CPU time (189.188 stopped).
0.256 ms median pause time, 0.272 p95, 7.168 max.
Heap size is 28.269 MB (max 28.269 MB); peak live data 9.388 MB.

That is to say, 4.44s, of which 0.084s was spent in garbage collection pauses. The default collector configuration is generational, which can result in some odd heap growth patterns; as it happens, this workload runs fine in a 15MB heap. Pause time as a percentage of total run-time is very low, so all the various GCs perform the same, more or less; we seem to be benchmarking eval more than the GC itself.

Is our Wastrel-compiled repl performance good? Well, we can evaluate it in two ways. Firstly, against Chrome or Firefox, which can run the same program; if I paste in the above program in the REPL over at the Hoot web site, it takes about 5 or 6 times as long to complete, respectively. Wastrel wins!

I can also try this program under Guile itself: if I eval it in Guile, it takes about 3.5s. Granted, Guile's implementation of the same source language is different, and it benefits from a number of representational tricks, for example using just two words for a pair instead of four on Hoot+Wastrel. But these numbers are in the same ballpark, which is heartening. Compiling the test program instead of interpreting is about 10× faster with both Wastrel and Guile, with a similar relative ratio.

Finally, I should note that Hoot's binaries are pretty well optimized in many ways, but not in all the ways. Notably, they use too many locals, and the post-pass to fix this is unimplemented, and last time I checked (a long time ago!), wasm-opt didn't work on our binaries. I should take another look some time.

generational?

This week I dotted all the t's and crossed all the i's to emit write barriers when we mutate the value of a field to store a new GC-managed data type, allowing me to enable the sticky mark-bit variant of the Immix-inspired mostly-marking collector. It seems to work fine, though this kind of generational collector still baffles me sometimes.

With all of this, Wastrel's GC-using binaries use a stack-conservative, parallel, generational collector that can compact the heap as needed. This collector supports multiple concurrent mutator threads, though Wastrel doesn't do threading yet. Other collectors can be chosen at compile-time, though always-moving collectors are off the table due to not emitting stack maps.

The neat thing is that any language that compiles to Wasm can have any of these collectors! And when the Whippet GC library gets another collector or another mode on an existing collector, you can have that too.

missing pieces

The biggest missing piece for Wastrel and Hoot is some kind of asynchrony, similar to JavaScript Promise Integration (JSPI), and somewhat related to stack switching. You want Wasm programs to be able to wait on external events, and Wastrel doesn't support that yet.

Other than that, it would be lovely to experiment with Wasm shared-everything threads at some point.

what's next

So I have an ahead-of-time Wasm compiler. It does GC and lots of neat things. Its performance is state-of-the-art. It implements a few standard libraries, including WASI 0.1 and Hoot. It can make a pretty good standalone Guile REPL. But what the hell is it for?

Friends, I... I don't know! It's really cool, but I don't yet know who needs it. I have a few purposes of my own (pushing Wasm standards, performance work on Whippet, etc.), but if you or someone you know needs a wastrel, do let me know at wingo@igalia.com: I would love to be able to spend more time hacking in this area.

Until next time, happy compiling to all!

09 Apr 2026 1:48pm GMT

Thibault Martin: TIL that Helix and Typst are a match made in heaven

I love Markdown with all my heart. It's a markup language so simple to understand that even people who are not software engineers can use it in a few minutes.

The flip side of that coin is that Markdown is limited. It can give you various title levels, bold, italics, strikethrough, tables, links, and a bit more, but not much more.

When it comes to more complex documents, most people resort to a full-fledged office suite like Microsoft Office or LibreOffice. Both have their merits, but office file formats are brittle and heavy.

The alternative is to use another more complex markup language. Academics used to be into LaTeX but it's often tedious to use. Typst emerged more recently as a simpler yet useful markup language to create well formatted documents.

Tinymist is a language server for Typst. It provides the usual services a language server provides, like semantic highlighting, code actions, formatting, etc.

But it really stands out by providing a live preview feature that keeps your cursor in sync in Helix when you are clicking around in the live preview!

I only had to install it with

$ brew install tinymist

I then configured Helix to use tinymist for Typst documents, enabling live preview along the way. This happens of course in ~/.config/helix/languages.toml

[language-server.tinymist]
command = "tinymist"
config = { preview.background.enabled = true, preview.background.args = ["--data-plane-host=127.0.0.1:23635", "--invert-colors=never", "--open"] }

[[language]]
name = "typst"
language-servers = ["tinymist"]

A warm thank you to my lovely friend Felix for showing me the live preview mode of tinymist!

09 Apr 2026 8:00am GMT

08 Apr 2026


Michael Meeks: 2026-04-08 Wednesday

08 Apr 2026 9:00pm GMT

07 Apr 2026


Andy Wingo: the value of a performance oracle

Over on his excellent blog, Matt Keeter posts some results from having ported a bytecode virtual machine to tail-calling style. He finds that his tail-calling interpreter written in Rust beats his switch-based interpreter, and even beats hand-coded assembly on some platforms.

He also compares tail-calling versus switch-based interpreters on WebAssembly, and concludes that performance of tail-calling interpreters in Wasm is terrible:

1.2× slower on Firefox, 3.7× slower on Chrome, and 4.6× slower in wasmtime. I guess patterns which generate good assembly don't map well to the WASM stack machine, and the JITs aren't smart enough to lower it to optimal machine code.

In this article, I would like to argue the opposite: patterns that generate good assembly map just fine to the Wasm stack machine, and the underperformance of V8, SpiderMonkey, and Wasmtime is an accident.

some numbers

I re-ran Matt's experiment locally on my x86-64 machine (AMD Ryzen Threadripper PRO 5955WX). I tested three toolchains:

  • Compiled natively via cargo / rustc

  • Compiled to WebAssembly, then run with Wasmtime

  • Compiled to WebAssembly, then run with Wastrel

For each of these toolchains, I tested Raven as implemented in Rust in both "switch-based" and "tail-calling" modes. Additionally, Matt has a Raven implementation written directly in assembly; I test this as well, for the native toolchain. All results use nightly/git toolchains from 7 April 2026.

My results confirm Matt's for the native and wasmtime toolchains, but wastrel puts them in context:

Bar charts showing native, wasmtime, and wastrel scenarios testing tail-calling versus switch implementations; wasmtime slows down for tail-calling, whereas wastrel speeds up.

We can read this chart from left to right: a switch-based interpreter written in Rust is 1.5× slower than a tail-calling interpreter, and the tail-calling interpreter just about reaches the speed of hand-written assembler. (Testing on AArch64, Matt even sees the tail-calling interpreter beating his hand-written assembler.)

Then moving to WebAssembly run using Wasmtime, we see that Wasmtime takes 4.3× as much time to run the switch-based interpreter, compared to the fastest run from the hand-written assembler, and worse, actually shows 6.5× overhead for the tail-calling interpreter. Hence Matt's conclusions: there must be something wrong with WebAssembly.

But if we compare to Wastrel, we see a different story: Wastrel runs the basic interpreter with 2.4× overhead, and the tail-calling interpreter improves on this marginally with a 2.3× overhead. Now, granted, two-point-whatever-× is not one; Matt's Raven VM still runs slower in Wasm than when compiled natively. Still, a tail-calling interpreter is inherently a pretty good idea.

where does the time go

When I think about it, there's no reason that the switch-based interpreter should be slower when compiled via Wastrel than when compiled via rustc. Memory accesses via Wasm should actually be cheaper due to 32-bit pointers, and all the rest of it should be pretty much the same. I looked at the assembly that Wastrel produces and I see most of the patterns that I would expect.

I do see, however, that Wastrel repeatedly reloads a struct memory value, containing the address (and size) of main memory. I need to figure out a way to keep this value in registers. I don't know what's up with the other Wasm implementations here; for Wastrel, I get 98% of time spent in the single interpreter function, and surely this is bread-and-butter for an optimizing compiler such as Cranelift. I tried pre-compilation in Wasmtime but it didn't help. It could be that there is a different Wasmtime configuration that allows for higher performance.

Things are more nuanced for the tail-calling VM. When compiling natively, Matt is careful to use a preserve_none calling convention for the opcode-implementing functions, which allows LLVM to allocate more registers to function parameters; this is just as well, as it seems that his opcodes have around 9 parameters. Wastrel currently uses GCC's default calling convention, which only has 6 registers for non-floating-point arguments on x86-64, leaving three values to be passed via global variables (described here); this obviously will be slower than the native build. Perhaps Wastrel should add the equivalent annotation to tail-calling functions.

On the one hand, Cranelift (and V8) are a bit more constrained than Wastrel by their function-at-a-time compilation model that privileges latency over throughput; and as they allow Wasm modules to be instantiated at run-time, functions are effectively closures, in which the "instance" is an additional hidden dynamic parameter. On the other hand, these compilers get to choose an ABI; last I looked into it, SpiderMonkey used the equivalent of preserve_none, which would allow it to allocate more registers to function parameters. But it doesn't: you only get 6 register arguments on x86-64, and only 8 on AArch64. Something to fix, perhaps, in the Wasm engines, but also something to keep in mind when making tail-calling virtual machines: there are only so many registers available for VM state.

the value of time

Well friends, you know us compiler types: we walk a line between collegial and catty. In that regard, I won't deny that I was delighted when I saw the Wastrel numbers coming in better than Wasmtime! Of course, most of the credit goes to GCC; Wastrel is a relatively small wrapper on top.

But my message is not about the relative worth of different Wasm implementations. Rather, it is that performance oracles are a public good: a fast implementation of a particular algorithm is of use to everyone who uses that algorithm, whether they use that implementation or not.

This happens in two ways. Firstly, faster implementations advance the state of the art, and through competition-driven convergence will in time result in better performance for all implementations. Someone in Google will see these benchmarks, turn them into an OKR, and golf their way to a faster web and also hopefully a bonus.

Secondly, there is a dialectic between the state of the art and our collective imagination of what is possible, and advancing one will eventually ratchet the other forward. We can forgive the conclusion that "patterns which generate good assembly don't map well to the WASM stack machine" as long as Wasm implementations fall short; but having shown that good performance is possible, our toolkit of applicable patterns in source languages also expands to new horizons.

Well, that is all for today. Until next time, happy hacking!

07 Apr 2026 12:49pm GMT

06 Apr 2026


Jussi Pakkanen: Sorting performance rabbit hole

In an earlier blog post we found out that Pystd's simple sorting algorithm implementations were 5-10% slower than their stdlibc++ counterparts. The obvious follow up nerd snipe is to ask "can we make the Pystd implementation faster than stdlibc++?"

For all tests below the data set used was 10 million consecutive 64 bit integers shuffled in a random order. The order was the same for all algorithms.

Stable sort

It turns out that the answer for stable sorting is "yes, surprisingly easily". I made a few obvious tweaks (whose details I don't even remember any more) and got the runtime down to 0.86 seconds. This is approximately 5% faster than std::stable_sort. Done. Onwards to unstable sort.

Unstable sort

This one was not, as they say, a picnic. I suspect that stdlib developers have spent more time optimizing std::sort than std::stable_sort simply because it is used a lot more.

After all the improvements I could think of were done, Pystd's implementation was consistently 5-10% slower. At this point I started cheating and examined how stdlibc++'s implementation worked to see if there were any optimization ideas to steal. Indeed there were, but they did not help.

Pystd's insertion sort moves elements by pairwise swaps. Stdlibc++ does it by moving the last item to a temporary, shifting the array elements onwards and then moving the stored item to its final location. I implemented that. It made things slower.

Stdlibc++'s moves use memmove instead of copying (at least according to code comments). I implemented that. It made things slower.

Then I implemented shell sort to see if it made things faster. It didn't. It made them a lot slower. So did radix sort.

Then I reworked the way pivot selection is done and realized that if you do it in a specific way, some elements move to their correct partitions as a side effect of median selection. I implemented that and it did not make things faster. It did not make them slower, either, but the end result should be more resistant against bad pivot selection so I left it in.

At some point the implementation grew a bug which only appeared with very large data sets. For debugging purposes I reduced the limit where introsort switches from quicksort to insertion sort from 16 to 8. I got the bug fixed, but the change made sorting a lot slower. As it should.

But this raises a question, namely would increasing the limit from 16 to 32 make things faster? It turns out that it did. A lot. Out of all perf improvements I implemented, this was the one that yielded the biggest improvement by a fairly wide margin. Going to 64 elements made it even faster, but that made other algorithms using insertion sort slower, so 32 it is. For now at least.

After a few final tweaks I managed to finally beat stdlibc++. By how much you ask? Pystd's best observed time was 0.754 seconds while stdlibc++'s was 0.755 seconds. And it happened only once. But that's enough for me.

06 Apr 2026 3:42pm GMT

05 Apr 2026


Jakub Steiner: Japan

Japan Trip 2026

Last year we went to Japan to finally visit friends after two decades of planning to. Because they live in Fukuoka, we only ended up visiting Hiroshima, Kyoto and Osaka afterwards. We loved it there, and as soon as cheap flights became available, we booked another one for Tokyo, to be legally allowed to cross off Japan as visited.

Now if I were to book the trip today, I probably wouldn't. It's quite a gamble given the geopolitical situation and Asia running out of oil. But having made it back, it's been as good as the first one. Visiting only Tokyo, with a short trip to Kawaguchiko in the sakura blooming season, worked out great.

At the start of the year I promised myself to shoot my Fuji more. And I don't mean the volcano, I mean my X-T20. I haven't kept the promise at all, always relying on the iPhone. Luckily for the trip I didn't chicken out of carrying the extra weight, and I think it paid off. I only took my 35mm, as the desire to carry gear has really faded away with the years. As we walked over 120 km in a few days, my back didn't feel very young even with the little gear I did have.

While the difference in quality isn't quite visible on Pixelfed or my photo website (I don't post to Instagram anymore), working through the set on a 4K display has been a pleasure. Bigger sensor is a bigger sensor.

Check out more photos on photo.jimmac.eu -- use arrow keys or swipe to navigate the set.

I also managed to get both of my weeklybeats tracks done on the flight so that's a bonus too!

Japan is probably quite difficult to live in, but as a tourist you get so much to feast your eyes on. It's like another planet. I hope to find more time to draw some of the awesome little cars and signs and white tiles and electric cables everywhere.

05 Apr 2026 12:00am GMT

03 Apr 2026

feedPlanet GNOME

This Week in GNOME: #243 Delayed Trains

Update on what happened across the GNOME project in the week from March 27 to April 03.

GNOME Core Apps and Libraries

Maps

Maps gives you quick access to maps all across the world.

mlundblad says

Now Maps shows delays for public transit journeys (when there's a realtime GTFS-RT feed available for the affected journey)

Glycin

Sandboxed and extendable image loading and editing.

Sophie (she/her) reports

After four weeks of work, glycin now supports compiled-in loaders. The main benefit of this is that glycin should now work on other operating systems like FreeBSD, Windows, or macOS.

Glycin uses Linux-exclusive technologies to sandbox image operations. For this, the image processing happens in a separate, isolated process. It would be very hard, if not impossible, to replicate this technology for other operating systems. Therefore, glycin now supports building loaders into it directly. This still provides a huge benefit compared to traditional image loaders, since almost all of the code is written in safe Rust. The feature of 'builtin' loaders can, in theory, be combined with 'external' loaders.

The glycin crate now builds with external loaders for Linux, and automatically uses builtin loaders for all other operating systems. That means that libglycin should work on other operating systems as well. So far, the CI only contains builds cross compiled for x86_64-pc-windows-gnu and tested on Wine. Further testing, feedback, and fixes are very welcome.

Image loaders that are written in C/C++, like those for HEIF, AVIF, SVG, and JPEG XL, are currently not supported for use without a sandbox. However, AVIF and JPEG XL already have Rust-based implementations, and rsvg might move away from libxml2, potentially allowing for safe builtin loaders for these formats in the future.

If you want, you can support my work financially on various platforms.

GNOME Circle Apps and Libraries

Pika Backup

Keep your data safe.

Sophie (she/her) reports

On March 31, we observed Trans Day of Visibility 🏳️‍⚧️, World Backup Day, and fittingly, the release of Pika Backup 0.8. After two years of work, this release not only brings many small improvements, but also a rework of the code base that dates back to 2018. This will greatly help to keep Pika Backup stable and maintainable for another eight years.

You can support the development on Open Collective or support my work via various other platforms.

Big thanks to everyone who makes Pika Backup possible, especially BorgBackup, our donors, and translators.

Third Party Projects

Antonio Zugaldia announces

Speed of Sound, voice typing for the Linux desktop, is now available on Flathub!

Main features:

  • Offline, on-device transcription using Whisper. No data leaves your machine.
  • Multiple activation options: click the in-app button or use a global keyboard shortcut.
  • Types the result directly into any focused application using Portals for wide desktop support (X11, Wayland).
  • Multi-language support with switchable primary and secondary languages on the fly.
  • Works out of the box with the built-in Whisper Tiny model. Download additional models from within the app to improve accuracy.
  • Optional text polishing with LLMs, with support for a custom context and vocabulary.
  • Supports self-hosted services like vLLM, Ollama, and llama.cpp (cloud services supported but not required).
  • Built with the fantastic Java GI bindings, come hang out on #java-gi:matrix.org.

Get it from https://flathub.org/en/apps/io.speedofsound.SpeedOfSound. Learn more on https://www.speedofsound.io.

Ronnie Nissan reports

This week I released Embellish v1.0.0. This is a major rewrite of the app from GJS to Vala, using all the experience I gained from making GTK apps over the past few years. The app now uses view models for the fonts ListBox and a GridView for the icons. Not only is the app more performant now, the code is also much nicer and easier to maintain and hack on.

You can get Embellish from Flathub

Or you can contribute to its development and translations on GitHub

Daniel Wood reports

Design, 2D computer aided design (CAD) for GNOME sees a new release, highlights include:

  • Polyline Trim (TR)
  • Polyline Extend (EX)
  • Chamfer Command (CHA)
  • Fillet Command (F)
  • Inferred direction for Arc Command (A)
  • Diameter input for Circle Command (C)
  • Close option for Line Command (L)
  • Close and Undo options for Polyline Command (PL)
  • Multiple copies with Copy Command (CO)
  • Show angle in Distance Command (DI)
  • Performance improvements when panning
  • Nested elements with Hatch Command (H)
  • Consistent Toast message format
  • Plus many fixes!

Design is available from Flathub:

https://flathub.org/apps/details/io.github.dubstar_04.design

Cleo Menezes Jr. reports

Serigy has reached version 2, evolving into a focused, minimal clipboard manager. The release brings substantial improvements across functionality, performance, and user experience.

The new version introduces automatic expiration of old clipboard items, incognito mode for privacy, and a grid view that brings clarity to your slots. Advanced features are now accessible through context menus and tooltips, while global shortcuts let you summon Serigy instantly from anywhere.

Several bug fixes have improved stability and reliability, and the UI is significantly more responsive. The application now persists window size across sessions, and Wayland clipboard detection has been improved.

Serigy 2 also refines its design. The app now supports Brazilian Portuguese, Russian, and Spanish (Chile).

Get it on Flathub Follow the development

Wildcard

Test your regular expressions.

says

Wildcard 0.3.5 has been released, bringing matching of regex groups, a sidebar that now shows overall match and group information, and a new quick reference dialog showing common regular expression use cases! You can download the latest release from Flathub!

Files

Providing a simple and integrated way of managing your files and browsing your file system.

Romain says

I have created a Nautilus extension that adds an "Open in" context menu for installed IDEs, allowing you to easily open directories and files in them.

It works with any IDE marked as such in its desktop entry. That includes IDEs in development containers such as Toolbx, if you create a desktop file for them on the host system.

To install, download the latest version and follow the instructions in the README file.

Metadata Cleaner

View and clean metadata in files.

GeopJr 🏳️‍⚧️🏳️‍🌈 says

Metadata Cleaner is back, now with more adaptive layouts, bug fixes and features!

Grab the latest release from Flathub!

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille reports

Things have been fairly quiet since the JasonAI takeover, but here comes Fractal 14.beta.

  • Sending files & location is properly disabled while editing/replying, as it doesn't work anyway.
  • Call rooms are identified with a camera icon in the sidebar and show a banner to warn that other users might not read messages in these rooms.
  • While we still support signing in via SSO, we have dropped support for identity providers, to simplify our code and have an experience closer to signing in with OAuth 2.0.
  • Map markers now use a darker variant of the accent color to have a better contrast with the map underneath.
  • Many small behind-the-scenes changes, mostly through dependency updates (and we have removed a few dependencies). Small improvements to the technical docs as well.

As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

It is available to install via Flathub Beta, see the instructions in our README.

As the version implies, there might be a slight risk of regressions, but it should be mostly stable. If all goes well the next step is the release candidate!

We are very excited to see several new contributors opening MRs lately to take care of their pet peeves with Fractal, which will benefit everyone in the end. If you have a little bit of time on your hands, you can try to join them by fixing one of our newcomers issues.

Flood It

Flood the board

tfuxu reports

Flood It 2.0 has been released! It now comes with a simple game explanation dialog, the ability to replay recently played boards, full translation support, and Ctrl+1…6 keyboard shortcuts for the color buttons.

It also contains many under-the-hood improvements, like the transition from Gotk4 to Puregotk, a runtime update to GNOME 50, custom seed support, and much more.

Check it out on Flathub!

Bouncer

Bouncer is an application to help you choose the correct firewall zone for wireless connections.

justinrdonnelly announces

Bouncer 50 was released this week, using the GNOME 50 runtime. It has bug fixes related to NetworkManager restarts, and autostart status. It also includes translations for Italian and Polish. Check it out on Flathub!

GNOME Websites

Guillaume Bernard says

GNOME Damned Lies has seen a few UX improvements! For the release of GNOME 50, I added a specific tag for Damned Lies to track the changes and link them to existing GNOME cycles. You can see all the changes I already spoke about, like merge request support, background refresh of statistics, etc. (see: https://gitlab.gnome.org/Infrastructure/damned-lies/-/releases/gnome_50). After that, I am working towards GNOME 50.1. Why follow the GNOME release calendar? Because it provides pace, and pace is important while developing. After more than 3 years of fixing technical debt, refactoring, and upgrading existing code, you can see the pace of changes has increased a lot. At the beginning of GNOME 50, we had more than 120 open issues in the Damned Lies tracker; it's down to 76 at the time of writing these lines. So what's new this week? I worked a lot on long-standing UX-related issues and can proudly announce a few changes in Damned Lies:

  • Better consistency in many strings ('Release' vs 'Release Set', past tense in action history that previously used the infinitive form).
  • Administrators now see the modules maintained in the site backend.
  • String freeze notifications now expose the affected versions and are far more stable when detecting string freeze breaks.
  • You now have anchors in your team pages for each language your team is working on.
  • i18n coordinators are now identified by a mini badge in their profile, helping any user to reach them more easily.
  • i18n coordinators can take action in any workflow without being a member of the team they act on.
  • Users can remove their own accounts.

That's all for this week! 😃

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

03 Apr 2026 12:00am GMT

01 Apr 2026

feedPlanet GNOME

GNOME Shell and Mutter Development: What is new in GNOME Kiosk 50

GNOME Kiosk, the lightweight, specialized compositor, continues to evolve in GNOME 50 by adding new configuration options and improving accessibility.

Window configuration

User configuration file monitoring

The user configuration file gets reloaded when it changes on disk, so that it is not necessary to restart the session.

New placement options

New configuration options to constrain windows to monitors or regions on screen have been added:

  • lock-on-monitor: lock a window to a monitor.
  • lock-on-monitor-area: lock to an area relative to a monitor.
  • lock-on-area: lock to an absolute area.

These options are intended to replicate the legacy "Zaphod" mode from X11, where windows could be tied to a specific monitor. They even go further than that, allowing windows to be locked to a specific area on screen.

The window/monitor association also holds when a monitor is disconnected. Take for example a multi-monitor setup where each monitor shows a different timetable. If one of the monitors is disconnected (for whatever reason), the timetable showing on that monitor should not be moved to another remaining monitor. The lock-on-monitor option prevents that.
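
As an illustration, a window configuration using these options might look something like this. This is a hypothetical sketch: the section names, value syntax, and file layout are assumptions; the real file location and key names are documented in CONFIG.md.

```ini
# Hypothetical sketch: keep the timetable app pinned to one output.
[org.example.Timetable]
lock-on-monitor=DP-1

# Pin another window to an absolute region on screen.
[org.example.Ticker]
lock-on-area=0,0,800,600
```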

Initial map behavior was tightened

Clients can resize or change their state before the window is mapped, so the size, position, and fullscreen state set from the configuration could be skipped. Kiosk now makes sure to apply the configured size, position, and fullscreen state on first map when the initial configuration was not applied reliably.

Auto-fullscreen heuristics were adjusted

  • Only normal windows are considered when checking whether another window already covers the monitor (avoids false positives from e.g. xwaylandvideobridge).
  • The current window is excluded when scanning "other" fullscreen sized windows (fixes Firefox restoring monitor-sized geometry).
  • Maximized or fullscreen windows are no longer treated as non-resizable so toggling fullscreen still works when the client had already maximized.

Compositor behavior and command-line options

New command line options have been added:

  • --no-cursor: hides the pointer.
  • --force-animations: forces animations to be enabled.
  • --enable-vt-switch: restores VT switching with the keyboard.

The --no-cursor option can be used to hide the pointer cursor entirely for setups where user input does not involve a pointing device (it is similar to the -nocursor option in Xorg).

Animations can now be disabled using the desktop settings, and will also be automatically disabled when the backend reports no hardware-accelerated rendering for performance purpose. The option --force-animations can be used to forcibly enable animations in that case, similar to GNOME Shell.

The native keybindings, which include the VT switching keyboard shortcuts, are now disabled by default for kiosk hardening. Applications that rely on the user being able to switch to another console VT on Linux, such as Anaconda, will need to explicitly re-enable VT switching using --enable-vt-switch in their session.

These options need to be passed on the command line starting gnome-kiosk, which implies updating the systemd unit files or, better, creating custom ones (modeled on the ones provided with the GNOME Kiosk sessions).
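
For example, in a systemd-based session a drop-in could add a flag without copying the whole unit. This is a hypothetical sketch; the unit name, drop-in path, and binary path depend on your setup:

```ini
# ~/.config/systemd/user/gnome-kiosk.service.d/override.conf (hypothetical path)
[Service]
# Clear the inherited ExecStart, then restart it with the extra flag.
ExecStart=
ExecStart=/usr/bin/gnome-kiosk --no-cursor
```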

Accessibility

Accessibility panel

An example of an accessibility panel is now included, to control the platform accessibility settings with a GUI. It is a simple Python application using GTK4.

(The gsettings options are also documented in the CONFIG.md file.)

Screen magnifier

Desktop magnification is now implemented, using the same settings as the rest of the GNOME desktop (namely screen-magnifier-enabled, mag-factor, see the CONFIG.md file for details).

It can be enabled from the accessibility panel or from keyboard shortcuts through the gnome-settings-daemon "mediakeys" plugin.

Accessibility settings

The default systemd session units now start the gnome-settings-daemon accessibility plugin so that Orca (the screen reader) can be enabled through the dedicated keyboard shortcut.

Notifications

  • A new, optional notification daemon implements org.freedesktop.Notifications and org.gtk.Notifications using GTK 4 and libadwaita.
  • A small utility to send notifications via org.gtk.Notifications is also provided.
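
Since the daemon speaks the standard org.freedesktop.Notifications interface, any D-Bus client can talk to it. An illustrative command (the argument values here are made up; the signature is the standard Notify(app_name, replaces_id, app_icon, summary, body, actions, hints, expire_timeout)):

```shell
gdbus call --session \
  --dest org.freedesktop.Notifications \
  --object-path /org/freedesktop/Notifications \
  --method org.freedesktop.Notifications.Notify \
  "kiosk-app" 0 "" "Heads up" "Something needs attention" "[]" "{}" 5000
```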

Input sources

GNOME Kiosk was ported to Mutter's new keymap API, which allows remote desktop servers to mirror the keyboard layout used on the client side.

Session files and systemd

  • X-GDM-SessionRegister is now set to false in kiosk sessions as GNOME Kiosk does not register the session itself (unlike GNOME Shell). That fixes a hang when terminating the session.
  • Script session: systemd is no longer instructed to restart the session when the script exits, so that users can logout of the script session when the script terminates.

01 Apr 2026 9:06am GMT

Matthew Garrett: Self hosting as much of my online presence as practical

Because I am bad at giving up on things, I've been running my own email server for over 20 years. Some of that time it's been a PC at the end of a DSL line, some of that time it's been a Mac Mini in a data centre, and some of that time it's been a hosted VM. Last year I decided to bring it in house, and since then I've been gradually consolidating as much of the rest of my online presence as possible on it. I mentioned this on Mastodon and a couple of people asked for more details, so here we are.

First: my ISP doesn't guarantee a static IPv4 unless I'm on a business plan and that seems like it'd cost a bunch more, so I'm doing what I described here: running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can, with an additional IP address allocated to the VM and NATted over the VPN link. The practical outcome of this is that my home IP address is irrelevant and can change as much as it wants - my DNS points at the OVH IP, and traffic to that all ends up hitting my server.
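
The NAT arrangement on the VM end might look roughly like this. A hypothetical sketch: the addresses, interface name, and exact rules are assumptions, not the actual configuration:

```shell
# On the OVH VM: forward traffic for the extra public IP (203.0.113.7
# here) down the WireGuard tunnel to the home server (10.0.0.2 here),
# and masquerade the return path over wg0.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -d 203.0.113.7 -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```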

The server itself is pretty uninteresting. It's a refurbished HP EliteDesk which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found under a pile of laptops in my office. We're not talking rackmount Xeon levels of performance, but it's entirely adequate for everything I'm doing here.

So. Let's talk about the services I'm hosting.

Web

This one's trivial. I'm not really hosting much of a website right now, but what there is is served via Apache with a Let's Encrypt certificate. Nothing interesting at all here, other than the proxying that's going to be relevant later.

Email

Inbound email is easy enough. I'm running Postfix with a pretty stock configuration, and my MX records point at me. The same Let's Encrypt certificate is there for TLS delivery. I'm using Dovecot as an IMAP server (again with the same cert). You can find plenty of guides on setting this up.

Outbound email? That's harder. I'm on a residential IP address, so if I send email directly nobody's going to deliver it. Going via my OVH address isn't going to be a lot better. I have a Google Workspace, so in the end I just made use of Google's SMTP relay service. There are various commercial alternatives available; I just chose this one because it didn't cost me anything more than I'm already paying.

Blog

My blog is largely static content generated by Hugo. Comments are Remark42 running in a Docker container. If you don't want to handle even that level of dynamic content you can use a third party comment provider like Disqus.

Mastodon

I'm deploying Mastodon pretty much along the lines of the upstream compose file. Apache is proxying /api/v1/streaming to the websocket provided by the streaming container and / to the actual Mastodon service. The only thing I tripped over for a while was the need to set the "X-Forwarded-Proto" header: otherwise you get stuck in a redirect loop, with Mastodon receiving a request over http (because TLS termination is done by the Apache proxy) and redirecting to https, which is where we just came from.
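
A minimal sketch of the relevant Apache configuration, assuming Mastodon's web service listens on port 3000 and the streaming service on port 4000 (the ports and paths are assumptions, not taken from the post):

```apache
# Inside the TLS-terminating vhost; requires mod_proxy, mod_proxy_http,
# mod_proxy_wstunnel and mod_headers.
RequestHeader set X-Forwarded-Proto "https"
ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
```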

Mastodon is easily the heaviest part of all of this, using around 5GB of RAM and 60GB of disk for an instance with 3 users. This is more a point of principle than an especially good idea.

Bluesky

I'm arguably cheating here. Bluesky's federation model is quite different to Mastodon - while running a Mastodon service implies running the webview and other infrastructure associated with it, Bluesky has split that into multiple parts. User data is stored on Personal Data Servers, then aggregated from those by Relays, and then displayed on Appviews. Third parties can run any of these, but a user's actual posts are stored on a PDS. There are various reasons to run the others, for instance to implement alternative moderation policies, but if all you want is to ensure that you have control over your data, running a PDS is sufficient. I followed these instructions, other than using Apache as the frontend proxy rather than nginx, and it's all been working fine since then. In terms of ensuring that my data remains under my control, it's sufficient.

Backups

I'm using borgmatic, backing up to a local Synology NAS and also to my parents' home (where I have another HP EliteDesk set up with an equivalent OVH IPv4 fronting setup). At some point I'll check that I'm actually able to restore them.

Conclusion

Most of what I post is now stored on a system that's happily living under a TV, but is available to the rest of the world just as visibly as if I used a hosted provider. Is this necessary? No. Does it improve my life? In no practical way. Does it generate additional complexity? Absolutely. Should you do it? Oh good heavens no. But you can, and once it's working it largely just keeps working, and there's a certain sense of comfort in knowing that my online presence is carefully contained in a small box making a gentle whirring noise.

01 Apr 2026 2:35am GMT

28 Mar 2026

feedPlanet GNOME

Gedit Technology: gedit 50.0 released

gedit 50.0 has been released! Here are the highlights since version 49.0 from January. (Some sections are a bit technical).

No Large Language Model AI tools

The gedit project now disallows the use of LLMs for contributions.

The rationales:

Programming can be seen as a discipline between art and engineering. Both art and engineering require practice. It's the action of doing - modifying the code - that permits a deep understanding of it, to ensure correctness and quality.

When generating source code with an LLM tool, the real sources are the inputs given to it: the training dataset, plus the human commands.

Adding something generated to the version control system (e.g., Git) is usually frowned upon. Moreover, we aim for reproducible results (to follow the best practices of reproducible builds, and reproducible science more generally). Modifying something generated afterwards is also a bad practice.

Releasing earlier, releasing more often

To follow more closely the release early, release often mantra, gedit aims for a faster release cadence in 2026, to have smaller deltas between each version. The future will tell how it goes.

The website is now responsive

Since last time, we've put some effort into the website. Small-screen-device readers should have a more pleasant experience.

libgedit-amtk becomes "The Good Morning Toolkit"

Amtk originally stands for "Actions, Menus and Toolbars Kit". There was a desire to expand it to include other GTK extras that are useful for gedit needs.

A more appropriate name would be libgedit-gtk-extras. But renaming the module - not to mention the project namespace - is more work. So we've chosen to simply continue with the name Amtk, just changing its scope and definition. And - while at it - sprinkle a bit of fun :-)

So there are now four libgedit-* modules:

Note that all of these are still constantly under construction.

Some code overhaul

Work continues steadily inside libgedit-gfls and libgedit-gtksourceview to streamline document loading.

You might think that it's a problem solved (for many years), but it's actually not the case for gedit. Many improvements are still possible.

Another area of interest is the completion framework (part of libgedit-gtksourceview), where changes are still needed to make it fully functional under Wayland. The popup windows are sometimes misplaced. So between gedit 49.0 and 50.0 some progress has been made on this. The Word Completion gedit plugin works fine under Wayland, while the LaTeX completion with Enter TeX is still buggy since it uses more features from the completion system.

28 Mar 2026 10:00am GMT

27 Mar 2026

feedPlanet GNOME

Sebastian Wick: Three Little Rust Crates

I published three Rust crates:

They might seem like rather arbitrary, unconnected things - but there is a connection!

systemd socket activation passes file descriptors and a bit of metadata as environment variables to the activated process. If the activated process exec's another program, the file descriptors get passed along because they are not CLOEXEC. If that process then picks them up, things could go very wrong. So, the activated process is supposed to mark the file descriptors CLOEXEC, and unset the socket activation environment variables. If a process doesn't do this for whatever reason however, the same problems can arise. So there is another mechanism to help prevent it: another bit of metadata contains the PID of the target. Processes can check it against their own PID to figure out if they were the target of the activation, without having to depend on all other processes doing the right thing.
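
The PID check described above is tiny in practice. A sketch in Python (the function name is mine; the environment variable semantics come from the systemd socket-activation protocol):

```python
import os

def is_socket_activation_target() -> bool:
    # systemd sets $LISTEN_PID to the PID of the process the passed
    # sockets are intended for; comparing it against our own PID tells
    # us whether we are the target of the activation.
    listen_pid = os.environ.get("LISTEN_PID", "")
    return listen_pid.isdigit() and int(listen_pid) == os.getpid()
```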

PIDs however are racy because they wrap around pretty fast, and that's why nowadays we have pidfds. They are file descriptors which act as a stable handle to a process and avoid the ID wrap-around issue. Socket activation with systemd nowadays also passes a pidfd ID. A pidfd ID however is not the same as a pidfd file descriptor! It is the 64 bit inode of the pidfd file descriptor on the pidfd filesystem. This has the advantage that systemd doesn't have to install another file descriptor in the target process which might not get closed. It can just put the pidfd ID number into the $LISTEN_PIDFDID environment variable.

Getting the inode of a file descriptor doesn't sound hard. fstat(2) fills out struct stat which has the st_ino field. The problem is that it has a type of ino_t, which is 32 bits on some systems so we might end up with a process identifier which wraps around pretty fast again.

We can however use the name_to_handle_at(2) syscall on the pidfd to get a struct file_handle with an f_handle field. The man page helpfully says that "the caller should treat the file_handle structure as an opaque data type". We're going to ignore that, though, because at least on the pidfd filesystem, the first 64 bits are the 64-bit inode. With systemd already depending on this and the kernel rule of "don't break user-space", this is now API, no matter what the man page tells you.

So there you have it. It's all connected.

Obviously both pidfds and name_to_handle have more exciting uses, many of which serve my broader goal: making Varlink services a first-class citizen. More about that another time.

27 Mar 2026 12:15am GMT

26 Mar 2026

feedPlanet GNOME

Lennart Poettering: Mastodon Stories for systemd v260

On March 17 we released systemd v260 into the wild.

In the weeks leading up to that release (and since then) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd260 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 21 posts:

I intend to do a similar series of serieses of posts for the next systemd release (v261), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

My series for v261 will begin in a few weeks most likely, under the #systemd261 hash tag.

In case you are interested, here is the corresponding blog story for systemd v259, here for v258, here for v257, and here for v256.

26 Mar 2026 11:00pm GMT