10 May 2026
Planet GNOME
Laura Kramolis: Computers Are Terrible
A slightly more collected version of what were originally 18 Signal messages. This is a simplification. I am evidently no expert in Unicode specifically or text encoding in general.
I, for a long time, believed that while many modern standards are a mess of legacy compatibility built on legacy compatibility, Unicode was an exception. That the only compromise it made was ASCII-compatibility, but even that wasn't such a big one given that its character set is the most common one in computing even to this day. I was wrong.
I got a US keyboard so now I have 2 different ways of typing accented characters. I can either hold the A key until I get an option of à, á, â, ä, ǎ, etc., or I can press ⌥ E and then A to get to á, combining ´ and a regular a. I started wondering… when typing it one way or the other, the results must be different, right? I looked for a website that showed me what code points I was typing, and… they were the same?
Most systems (the OS/browser in this case) normalize all text either one way or the other. In this case, to a single code point. Unicode does have deprecation, so you would think that when they introduced combining characters, they would have deprecated the precomposed versions of characters that can be written using them, right? Nope!
It's arbitrary which way each system normalizes text. Some do it composed (á) and some decomposed (a + ◌́). Both forms are part of the standard. And of course you need to treat them as equivalent when text isn't normalized, so you might as well normalize whenever you can anyway.
Precomposed characters are the legacy solution for representing many special letters in various character sets. In Unicode, they were included for compatibility with early encoding systems […].
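Python's unicodedata module makes the equivalence easy to see. A quick sketch of both normalization forms:

```python
import unicodedata

precomposed = "\u00e1"   # á as a single code point: LATIN SMALL LETTER A WITH ACUTE
decomposed = "a\u0301"   # a regular a followed by COMBINING ACUTE ACCENT

# The two strings render identically but compare unequal...
print(precomposed == decomposed)                                # False
# ...until both are normalized to the same form.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```

NFC is the composed form, NFD the decomposed one; a system that normalizes at all just has to pick one consistently.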
Oh well, my day is ruined. My new life goal is advocacy for the deprecation of all precomposed characters… or maybe I should just accept that all computing will be plagued by backwards compatibility headaches 'til the end of time.
10 May 2026 12:00am GMT
09 May 2026
Planet GNOME
Jakub Steiner: USS/FMS Carrier
I'm a sucker for pixel art and very constrained music grooveboxes. While I'm not into chiptunes, they sure are a cultural phenomenon.
You heard me boast about the Dirtywave M8 numerous times, even in person, because it's my tool of choice for producing and performing music. Its genius lies in high sound quality and a workflow that grew out of the tiny screen and button constraints on the Nintendo Gameboy, the platform of choice for an app called LSDJ, which the M8 is modelled after. That, and the sheer amount of sound engines living in your pocket. Building on the shoulders of giants and all.
The small M8 community has a few 'celebrities', such as Ess Mattisson. I first heard of Ess when I ran into an amazing single channel track called Wertstoffe. Ess has a great pedigree as the creator of the original Digitone FM synthesizer while working at Elektron. FM remains his forte, and after creating numerous plugins through Fors, he has now released a little 2-operator FM synth and sequencer for the platform of the future, Nintendo Gameboy Advance.
What makes FMS a bit crazy is what it's doing under the hood. The Gameboy Advance has no FM synthesis hardware at all. Its audio gives you two Direct Sound DMA channels of 8-bit signed PCM - that's 256 amplitude levels, roughly 48 dB of dynamic range. For comparison, a CD has 96 dB, in much finer fidelity. The CPU is an ARM7TDMI running at 16.78 MHz with 256 KB of RAM, and that's where all the FM math happens. Sine waves, modulation, mixing four channels, all in real time, in software, on a chip from 2001 that was designed to shuffle sprites around. The hiss you hear is just part of the deal: quantization noise from that 8-bit DAC. So few amplitude steps means everything that comes out has this fuzzy, slightly crushed quality. You can't get rid of it. It is the sound. And somehow there are four channels of 2-operator FM synthesis in there, each with envelopes and ratio control. On a Gameboy Advance.
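Those dynamic-range figures follow directly from the bit depth: the ratio between the loudest representable signal and one quantization step is 2^n, which works out to roughly 6 dB per bit. A quick check:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of n-bit PCM: 20 * log10(2**n)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(8)))   # 48 -- the GBA's 8-bit Direct Sound channels
print(round(dynamic_range_db(16)))  # 96 -- 16-bit CD audio
```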
Picking the GBA as a platform of choice in 2026 may seem strange. Surprisingly, it can be used on a very large array of hardware. Not only can you plug a memory card into the original hardware or fancy new clones like the Analogue Pocket, you also have a far larger choice of dozens if not hundreds of Chinese emulator handhelds from Anbernic, Powkiddy, Miyoo or Retroid. You can also use the Steam Deck or any PC running one of the many emulators, RetroArch being the most popular one.
FMS really touched me. Partly because I have a soft spot for the Nordic demo scene, but mainly for its novel approach to composition. Just like with the M8, creating basic building blocks and then applying transposition to break the looping monotony is my favorite workflow. This little thing has that in the form of pattern and trig transposition but also a novel take on "effects". Yes, you heard me right. There's a sorta-kinda-delay. Even does stereo field ping-pong.
I will keep on trying to create something that … sounds good. The process has been amazing. I truly love some of the sequencing tricks and workflows. The sequencer is, however, so good it would be worth seeing it run on top of a higher quality sound engine too.
09 May 2026 12:00am GMT
08 May 2026
Planet GNOME
This Week in GNOME: #248 Tracking Performance
Update on what happened across the GNOME project in the week from May 01 to May 08.
GNOME Core Apps and Libraries
Glycin ↗
Sandboxed and extendable image loading and editing.
Sophie (she/her) says
Automatically running tests on GitLab has been standard practice for a while, but tracking performance metrics is much less common. Glycin has now started running basic performance tests on bencher.dev's bare-metal runners, which will hopefully provide comparable results.
As of now, the benchmarks only cover the overhead of the loader stack (by loading a 1px PNG) and the binary file sizes of the glycin loaders and the thumbnailer. But the tests should be easy to expand. The benchmarks always run for commits on the main branch and can be started manually for merge requests. This way it will be possible to track performance improvements and catch regressions early.
Third Party Projects
Christian says
🎉 Gitte 0.2.0 is out!
This week, Gitte 0.2.0 was released with a big focus on interactive rebasing and polishing everyday Git workflows.
The biggest addition is interactive rebasing directly from the commit log. Commits can now be reordered via drag & drop, dropped, reworded, edited during a paused rebase, or squashed and fixuped without leaving the GUI.
Remote operations like push, pull, fetch and clone now use the Git CLI internally, improving credential handling and protocol support. The diff view font is now configurable, and repositories can be opened directly from the terminal using commands like gitte ~/Code/projects/Gitte.
This release also adds a unified stash dialog for workflows that require stashing changes, ahead/behind indicators for the current branch, double-click checkout for local branches, and improved merge commit information in the log viewer. There are also a few small easter eggs hidden throughout the app.
On the translation side, Gitte now includes a German translation and a Ukrainian translation by Dymko. The release also includes AUR packaging documentation contributed by Kainoa Kanter, alongside many bug fixes and smaller refinements across the application.
Get it on Flathub or check the source code.
Bilal Elmoussaoui reports
I have released the first version of gobject-linter, previously known as goblint.
This release brings a lot of new functionality: Meson integration for accurate dead-code detection (functions, enum variants, structs, struct fields and more) via the new dead_code rule, detection of mis-exported public types, checking of inconsistent function signatures, and a type_style rule to enforce consistent use of either the GLib type aliases (gint, gfloat, gdouble) or their C equivalents across your codebase. There are also two new GObject introspection rules: one that flags missing since annotations, and one that verifies the exported public APIs are bindings-friendly.
It also supports diff-scoped linting via --diff, so you can incrementally integrate it into large existing projects. The release is also available on crates.io.
Jeffry Samuel announces
Nocturne 1.0.0 has been released!
Nocturne is a modern music player that can play songs from your OpenSubsonic, Jellyfin and local libraries.
It includes features such as audio visualizers, equalizers and automatic lyric fetching.
Some of the new features in 1.0.0 are:
- Support for changing max bitrate
- Support for replay gain
- Added option to show sidebar player
- Compatibility with word for word lyrics
- Faster and more stable interface
- Gapless playback
- Grouping of songs in albums by their disc
- Added option to show dynamic background in the main window
- Much more
mas says
Hi, I finally released my first app, Press! It has a very straightforward interface for compressing huge music libraries with ease.
You might like it because:
- Compresses multiple files simultaneously
- Never takes destructive actions on the source (but it can replace files on the destination if you want)
- Avoids re-compressing a file (if you just want to add a new album, it compresses just that one, not your entire library)
- Imports basically any format GStreamer can handle
- Exports to mp3, m4a, or ogg
- Moves other non-audio files along with your music
- Lets you add custom formats with a bit of GStreamer know-how
It really is a one-stop solution to compress music to portable devices.
I'd love to hear feedback and suggestions.
Get it on Flathub or check the source code. Oh, and it uses libadwaita, Vala, and GStreamer.
JumpLink announces
The type-definitions generator ts-for-gir produces the typings used to write GNOME applications in TypeScript. It can now experimentally run directly on GJS, without Node.js.
This is made possible by the new experimental GJSify framework, which provides Node.js and Web APIs on top of GJS. Its long-term goal is to make as much of the JavaScript / TypeScript ecosystem as possible available to GJS applications.
bhack announces
I'd like to introduce Mini EQ, a new small GTK/Libadwaita app for PipeWire desktops.
Mini EQ is a system-wide parametric equalizer. It creates a PipeWire filter-chain sink with builtin biquad filters, routes desktop playback through it with WirePlumber, and provides a compact 10-band fader workflow. It also supports Equalizer APO/AutoEq preset import and an optional spectrum analyzer through the PipeWire JACK compatibility layer.
The project is now available on Flathub, with source and packaging published on GitHub.
Flathub: https://flathub.org/apps/io.github.bhack.mini-eq
GNOME Shell extension: https://extensions.gnome.org/extension/9803/mini-eq-controls/
Source: https://github.com/bhack/mini-eq
Anton Isaiev announces
RustConn is a GTK4/libadwaita connection manager for SSH, RDP, VNC, SPICE, Telnet, MOSH, and more.
Versions 0.12.8-0.13.7 were shaped heavily by user feedback. What started as a personal tool is now used daily by sysadmins and DevOps teams - and their reports drive the roadmap.
Key additions:
- Local Shell in Flatpak - fully working host shell via flatpak-spawn with real PTY and job control.
- RDP dynamic resize - in-place resolution change via the Display Control Channel, no reconnect needed; automatic fallback for legacy servers.
- RDP Autotype - type text as keystrokes into remote sessions, bypassing clipboard restrictions.
- Drag & drop - file paths into terminals, files to the RDP clipboard.
- Smart Folders & Dynamic Folders - filter connections by tag/protocol/pattern, or generate them from external scripts.
- Virt-viewer .vv file support - open SPICE/VNC files from Proxmox, oVirt, libvirt directly.
- CLI --format json|csv|table - machine-readable output for scripting and AI agents.
- GNOME HIG audit - restructured menus, unified dialogs, accessible labels across all windows.
- Flatpak CLI auto-versioning - 7 bundled CLI tools now resolve latest versions from upstream automatically.
Homepage: https://github.com/totoshko88/RustConn
Flathub: https://flathub.org/en/apps/io.github.totoshko88.RustConn
Shell Extensions
Miklós Zsitva says
Matrix Status Monitor v7 improves room handling, notifications, and profile actions in GNOME Shell.
Matrix Status Monitor v7 is now available on GNOME Extensions, bringing a noticeably smoother experience for Matrix users running GNOME Shell. This release focuses on making the extension feel more responsive and more native to the desktop, while keeping the panel UI lightweight and fast.
The biggest change is the new weight-based room sorting system, which replaces the old timestamp-only approach. Rooms are now ranked by highlights, unread counts, direct messages, favourites, visit frequency, and recency, so the most relevant conversations surface first.
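The weight-based idea can be illustrated with a small sketch. The field names and weights below are made up purely for illustration (they are not the extension's actual values); they only mirror the priority order described above:

```python
# Hypothetical weights: highlights outrank unread counts, which outrank
# direct-message and favourite status, with recency as a small tiebreaker.
def room_score(room: dict) -> float:
    return (room.get("highlights", 0) * 1000
            + room.get("unread", 0) * 100
            + (50 if room.get("direct") else 0)
            + (25 if room.get("favourite") else 0)
            + room.get("visits", 0)
            + room.get("last_active", 0) / 1e10)

rooms = [
    {"name": "quiet-room", "last_active": 3},
    {"name": "dm-with-friend", "direct": True, "unread": 1, "last_active": 2},
    {"name": "room-that-pinged-me", "highlights": 1, "last_active": 1},
]
ranked = sorted(rooms, key=room_score, reverse=True)
print([room["name"] for room in ranked])
# ['room-that-pinged-me', 'dm-with-friend', 'quiet-room']
```

With a timestamp-only sort, quiet-room would have come first; the weighted score surfaces the highlighted room instead.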
v7 also adds a clear idle/active separator in the room list, plus async menu rebuilds via GLib.idle_add to avoid blocking the UI during updates. On top of that, the extension now sends GNOME desktop notifications through MessageTray, with event ID deduplication so the same message does not trigger repeated alerts.
The profile header has been expanded as well: it now shows the user avatar, display name, user ID, plus one-click copy and QR toggle actions. The avatar loading path was also extended to handle a larger profile icon size, which helps the header feel more polished and distinct from room rows.
Overall, v7 is a refinement release that makes the extension feel more reliable, more readable, and more useful in daily GNOME use.
https://extensions.gnome.org/extension/9328/matrix-status-monitor/
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
08 May 2026 12:00am GMT
06 May 2026
Planet GNOME
Richard Hughes: LVFS Sponsorship Announcement
Some great news: I'm pleased to announce that both Dell and Lenovo have agreed to be premier sponsors for the Linux Vendor Firmware Service (LVFS) as part of our new sustainability effort.
Over 145 million firmware updates have been deployed now, from over a hundred different vendors to millions of different Linux devices.
With the huge industry support from Lenovo and Dell (and our existing sponsors of Framework, OSFF, and of course both the Linux Foundation and Red Hat) we can build this ecosystem stronger and higher than before; we can continue the great work we've done long into the future.
06 May 2026 12:13pm GMT
Steven Deobald: Apologies
I believe accountability can be a challenge in a nonprofit, which only makes it all the more important. In this post, I am holding myself accountable. For the avoidance of doubt, nothing that follows has anything to do with my exit from the GNOME Foundation last August.
I owe a few folks some apologies from my time as Executive Director. I have apologized to most of them individually already, where I could. But I believe that public accountability is the antidote to public frustration and I hope this contributes, in a small way, to the GNOME community moving forward.
First off, I sincerely apologize to Jehan Pagès and Christian Hergert. I was curt with both of you last summer and neither of you deserved it. From July 23rd to August 29th I was dealing with significant sleep deprivation but that's no excuse for the way I spoke to either of you. I'm sorry.
Next, I apologize to the former Executive Directors and active community members who raised concerns to me. Holly, you warned me. Twice. Many other people tried to share their perspectives. I was too focused on the Foundation's financial situation, and I did not take the time to fully understand what I was hearing from you all. I regret that.
Sonny
To Sonny Piers: I am sorry. I had a long call with you last June. You told me your complicated story. You seemed hurt - but I didn't believe you. My understanding was incomplete and I did not approach the situation with the care it deserved.
I'm sorry I didn't do more to support you.
Tobias
More than anyone, I want to apologize to Tobias Bernard. Tobias, I am sorry. You gave me many hours of your time, patience, and thoughtfulness. You shared your ideas openly and in good faith, and I didn't always meet that with the same level of openness.
In particular, when we discussed Sonny's situation, I did not listen as carefully as I should have. I was too focused on my existing understanding, and I failed to engage with what you were trying to convey. You deserved better from me.
Sonny is lucky to have a friend like you.
Meta
This post reflects only my personal experiences and perspectives. It is not intended to make allegations or factual claims about the conduct of any individual or organization.
Until Microsoft goes out of business, a permanent copy of this apology can be found in this gist.
06 May 2026 3:37am GMT
04 May 2026
Planet GNOME
Michael Meeks: 2026-05-04 Monday
- A day off - about time. Early partner call.
- Helped J. put up stainless wire for rose training in the garden. Plugged away at garage tidying with more good progress.
- Lunch with the family outside in the sun; tidied my office for the first time in a while; got the ladder moved into J's garden shed.
- Made a wooden spatula with H. in the evening, turning plus band-sawing action; fun. Left it in tung-oil overnight.
04 May 2026 9:00pm GMT
03 May 2026
Planet GNOME
Nick Richards: WhatCable, Framework, and USB-C
USB-C is excellent, provided you don't look too closely.
I've been seeing a drum beat of interest in the internals of USB-C. Darryl Morley's macOS WhatCable, Chromebooks exposing lots of lovely info about emarkers, USB cable testers and a bit more. Very infrastructure club topics. So I made a small GTK app also called WhatCable which is intended to show what Linux knows about your USB ports, cables, chargers and devices, but written as a GNOME/libadwaita app and using the interfaces Linux exposes through sysfs.
The hope was fairly straightforward: plug things into my Framework 13, ask Linux what is going on, and present the answer in a way that doesn't require remembering which bit of /sys to poke. In particular I wanted cable identity and e-marker details. These are the useful little facts that tell you whether a cable is what it claims to be, or at least what it claims to be electronically. Given the number of USB-C cables in the house whose origin story is "came in a box with something", this felt like a public service, or at least a satisfying evening.
The first bit is pleasantly sensible. Linux has standard-ish places for this information:
/sys/bus/usb/devices
/sys/class/typec
/sys/class/usb_power_delivery
/sys/bus/thunderbolt/devices
When those are populated, a normal unprivileged app can learn quite a lot. It can show USB devices, Type-C ports, partners, cables, roles, power data, Thunderbolt and USB4 domains. That's exactly the sort of thing a small Flatpak app should be good at: read some public kernel state, translate it into something at least moderately human friendly and then depart.
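The Type-C slice of that reading can be sketched in a few lines. The attribute names here (power_role, data_role, the portX-cable identity directory) follow the kernel's sysfs-class-typec ABI documentation; which of them are actually populated varies by machine, as this post goes on to show:

```python
from pathlib import Path

def read_attr(path: Path) -> str:
    """Read a single sysfs attribute, tolerating missing files."""
    try:
        return path.read_text().strip()
    except OSError:
        return "(unavailable)"

def summarize_typec(base: Path = Path("/sys/class/typec")) -> list[str]:
    """One line per Type-C port, plus e-marker identity when the kernel exposes it."""
    lines = []
    for port in sorted(base.glob("port*")):
        if "-" in port.name:  # port0-partner and port0-cable live alongside port0
            continue
        lines.append(f"{port.name}: power_role={read_attr(port / 'power_role')} "
                     f"data_role={read_attr(port / 'data_role')}")
        cable = port.with_name(port.name + "-cable")
        if cable.is_dir():
            lines.append(f"  e-marker id_header: {read_attr(cable / 'identity' / 'id_header')}")
    # An empty list is the "signpost rather than destination" case described below
    return lines or ["no Type-C ports exposed"]

if __name__ == "__main__":
    print("\n".join(summarize_typec()))
```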
On my Framework 13, the USB device and Thunderbolt sides were useful. The Type-C side was not. /sys/class/typec existed but had no ports. /sys/class/usb_power_delivery existed but was empty. This is a slightly annoying result, because it means the nice standard API is present as a signpost rather than a destination.
The next clue was that the machine clearly does have USB-C machinery, and not just because I could look at the side of the device. It is a Framework 13 with the embedded controller and Cypress CCG power delivery controllers doing real work. The relevant kernel modules were loaded, including UCSI and Chrome EC pieces. There was also an ACPI UCSI device at:
/sys/bus/acpi/devices/USBC000:00
but ucsi_acpi did not appear to bind to it and create the Type-C class ports. So the hardware and firmware know things, but they were not arriving in the standard Linux userspace shape.
Framework's own tooling gives another route in. I built framework_tool from FrameworkComputer/framework-system and asked the EC what it could see. The Framework-specific PD port command did not work on this firmware:
USB-C Port 0:
[ERROR] EC Response Code: InvalidCommand
and similarly for the other ports. That's not very poetic, but it is at least clear.
The Chromebook-style power command was more useful. With a charger connected it reported, for example:
USB-C Port 0 (Right Back):
Role: Sink
Charging Type: PD
Voltage Now: 19.776 V, Max: 20.0 V
Current Lim: 2250 mA, Max: 2250 mA
Dual Role: Charger
Max Power: 45.0 W
That's good information. It's not cable identity, but it is the kind of port state people actually want when they are trying to work out why a laptop is charging slowly, or not charging, or doing something else mildly USB-C shaped.
framework_tool --pd-info could also talk through the EC to the Cypress controllers and report their firmware details:
Right / Ports 01
Silicon ID: 0x2100
Mode: MainFw
Ports Enabled: 0, 1
FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
Left / Ports 23
Silicon ID: 0x2100
Mode: MainFw
Ports Enabled: 0, 1
FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
Again, useful. Again, not the cable.
Much of this investigation and app code was written with AI tools in the loop. That was useful for chasing down boring plumbing and generating probes. The decisive test was asking the Chrome EC for the newer Type-C discovery data directly. The EC advertised USB PD support, but not the newer Type-C command set. EC_CMD_TYPEC_STATUS and EC_CMD_TYPEC_DISCOVERY both came back as invalid commands on all four ports.
That means that on this Framework 13 firmware path I cannot get Discover Identity results, SOP/SOP' discovery data, SVIDs, mode lists or e-marker details through Chrome EC host commands. The cable may well be telling the PD controller interesting things, but those things are not exposed through a stable unprivileged interface I can sensibly use in a desktop app.
This is the main lesson from the whole exercise: USB-C inspection on Linux is not one API. It is a set of possible stories. Sometimes the kernel Type-C class tells you lots of things. Sometimes Thunderbolt sysfs tells you a different useful slice. Sometimes a vendor EC can tell you power state, but only as root. Sometimes the information exists below you somewhere, but not in a form you should build an app around.
So WhatCable needs to be honest. It should show the sources it can read, and it should say when a source is unavailable rather than pretending absence means certainty. "No cable identity exposed on this machine" is a very different statement from "this cable has no identity". The former is boring but true. The latter is how you end up lying with an icon (it is not a nice icon).
The current shape I think is right is:
- use USB, Type-C, USB PD and Thunderbolt sysfs whenever they are available;
- show raw values as well as friendly summaries;
- explain missing sources in diagnostics;
- treat Framework EC data as an optional extra, not a default dependency;
- if EC access is added, put it behind a narrow read-only helper rather than teaching a Flatpak app to fling arbitrary commands at /dev/cros_ec.
That last point matters. On the host /dev/cros_ec exists, but it is root-only. Making a normal app require broad device access would be a poor bargain. A small privileged helper that answers a few known-safe questions might be acceptable. A graphical app with arbitrary EC command execution would be exciting in the wrong way.
This is not quite the result I wanted when I started. I wanted to show a friendly "this is a 100W e-marked cable" label and feel very clever about it. What I have instead is a more modest app and a better understanding of where the bodies are buried. That's still useful. A tool that tells you what your machine actually exposes is better than one that implies the USB-C universe is more orderly than it is. Given this, I'm not going to be sharing this one more widely, but fork away if you wish, or come back with a better idea.
It's very easy to run with GNOME Builder, so just check out the source and 'press play', or get an artifact out of the GitHub Actions. If you run WhatCable on a different laptop and see rich Type-C data, lovely. If you run it on a Framework 13 like mine and mostly see USB devices, Thunderbolt controllers and a note that Type-C data is missing, that is also information. Not as glamorous as catching a suspicious cable in the act, but much more likely to be true.
03 May 2026 8:10pm GMT
02 May 2026
Planet GNOME
Andrea Veri: SELinux MCS challenges with GitLab Runners
Table of Contents
- Introduction
- The MCS problem
- The test script
- GitLab's official suggestion and why it falls short
- How GNOME currently handles this
- Exploring libkrun
- Firecracker and the custom executor path
- What comes next
Introduction
GNOME's GitLab runners use Podman as the container runtime with SELinux in Enforcing mode on Fedora. The GitLab Runner Docker/Podman executor spawns multiple containers per job: a helper container that clones the repository and handles artifacts, and a build container that runs the actual CI script. Both containers need to share a /builds volume - and this is where SELinux's Multi-Category Security (MCS) becomes a problem.
The MCS problem
An SELinux label has four fields: user:role:type:level. For containers the interesting part is the level, also called the MCS field. A level looks like s0:c123,c456 - s0 is the sensitivity (always s0 in the targeted policy), and c123,c456 are the categories. A label can carry any number of categories; container runtimes conventionally assign each container a pair.
MCS access is based on dominance. A subject's label dominates an object's label if the subject's categories are a superset of (or equal to) the object's categories:
| Subject | Object | Access? | Why |
|---|---|---|---|
| s0:c100,c200 | s0:c100,c200 | Yes | Exact match |
| s0:c100,c200 | s0:c100 | Yes | Subject's categories are a superset |
| s0:c100,c200 | s0:c100,c300 | No | Subject lacks c300 |
| s0:c0.c1023 | s0:c100,c200 | Yes | Full range dominates everything |
| s0 | s0:c100,c200 | No | A label with no categories cannot dominate one that has any |
| s0 | s0 | Yes | Both have no categories |
How this applies to the runners:
- Container A runs as container_t:s0:c100,c100 - it can only access objects labeled s0:c100,c100 (or s0:c100, or s0)
- Container B runs as container_t:s0:c200,c200 - it can only access objects labeled s0:c200,c200 (or s0:c200, or s0)
- Container A cannot access Container B's files - c100,c100 doesn't dominate c200,c200
- Overlay layers are labeled s0 (no categories) - accessible by all containers, since every category set dominates the empty set
- Podman runs at container_runtime_t:s0-s0:c0.c1023 - the full range means it dominates every possible category combination, so it can manage all containers
The range syntax (s0-s0:c0.c1023) is used for processes that need to operate across multiple levels. It means "my low clearance is s0 and my high clearance is s0:c0.c1023." The process can read objects at any level within that range and create objects at any level within it. This is why Podman needs the full range - it creates containers with different MCS labels and needs to access all of them.
When Podman starts a container, it picks a random pair of categories (e.g., s0:c512,c768) from within its allowed range and assigns that as the container's process label. Files created by the container inherit that label. Another container gets a different random pair (e.g., s0:c33,c901). Since c512,c768 and c33,c901 do not match - neither is a superset of the other - SELinux denies cross-container file access. This is the isolation mechanism, and the root cause of the problem with GitLab Runner's multi-container-per-job architecture.
The helper container gets one random MCS pair, writes the cloned repo to /builds labeled with that pair, and the build container gets a different pair. The build container cannot read or write those files. The :Z volume flag (exclusive relabel) relabels the volume to the mounting container's category, but that only helps the first container - the second one still has a different label.
The test script
I wrote a script that demonstrates the problem with both standard containers (crun) and microVMs (libkrun). The script creates two containers per test - a helper that writes a file to a shared /builds volume, and a build container that tries to read it - simulating the GitLab Runner workflow:
#!/bin/bash
# Description: SELinux MCS Diagnostic (crun vs krun)
if [ "$(getenforce)" != "Enforcing" ]; then
echo "ERROR: SELinux is not in Enforcing mode. This test requires Enforcing mode."
exit 1
fi
TEST_BASE="/tmp/gitlab-runner-mcs-test"
CRUN_DIR="$TEST_BASE/crun-builds"
KRUN_DIR="$TEST_BASE/krun-builds"
# Cleanup from previous runs
rm -rf "$TEST_BASE"
mkdir -p "$CRUN_DIR" "$KRUN_DIR"
echo "======================================================="
echo " TEST 1: Standard Container Isolation (crun)"
echo "======================================================="
# 1. CREATE Helper
podman create --name crun-helper -v "$CRUN_DIR:/builds:Z" fedora bash -c "
echo '[crun] -> Helper Process Context (Inside):'
cat /proc/self/attr/current
echo 'crun-data' > /builds/artifact.txt
echo '[crun] -> File Label INSIDE Helper:'
ls -Z /builds/artifact.txt
" > /dev/null
echo "[crun] Starting Helper Container (applying :Z relabel)..."
HELPER_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-helper)
echo "[crun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_CRUN"
podman start -a crun-helper
echo ""
echo "[crun] -> File Label ON HOST (Notice the specific MCS category):"
ls -Z "$CRUN_DIR/artifact.txt"
# 2. CREATE Build Container (The Victim)
podman create --name crun-build -v "$CRUN_DIR:/builds" fedora bash -c "
echo ' [Build-Internal] Process Context:'
cat /proc/self/attr/current 2>/dev/null
echo ' [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/ /'
echo ' [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/ /'
" > /dev/null
echo ""
echo "[crun] Starting Build Container to inspect shared volume..."
BUILD_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-build)
echo "[crun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_CRUN"
podman start -a crun-build
podman rm -f crun-helper crun-build > /dev/null
echo ""
echo "======================================================="
echo " TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED"
echo "======================================================="
# --- Write the execution scripts to the host to avoid parsing errors ---
cat << 'EOF' > "$TEST_BASE/krun_helper.sh"
#!/bin/bash
echo '[krun] -> Helper Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo ' (SELinux disabled/unavailable in guest kernel)'
echo 'krun-data' > /builds/artifact.txt
echo '[krun] -> File Label INSIDE Helper VM (Blindspot):'
ls -laZ /builds/artifact.txt 2>&1 | sed 's/^/ /'
EOF
cat << 'EOF' > "$TEST_BASE/krun_build.sh"
#!/bin/bash
echo ' [Build-Internal] Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo ' (SELinux disabled/unavailable in guest kernel)'
echo ' [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/ /'
echo ' [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/ /'
EOF
chmod +x "$TEST_BASE/krun_helper.sh" "$TEST_BASE/krun_build.sh"
# ---------------------------------------------------------------------
# 1. CREATE Helper MicroVM
podman create --name krun-helper --runtime krun --memory=1024m \
-v "$KRUN_DIR:/builds:Z" \
-v "$TEST_BASE/krun_helper.sh:/script.sh:ro,Z" \
fedora /script.sh > /dev/null
echo "[krun] Starting Helper MicroVM (applying :Z relabel)..."
HELPER_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-helper)
echo "[krun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_KRUN"
podman start -a krun-helper
echo ""
echo "[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):"
ls -Z "$KRUN_DIR/artifact.txt"
# 2. CREATE Build MicroVM (The Victim)
podman create --name krun-build --runtime krun --memory=1024m \
-v "$KRUN_DIR:/builds" \
-v "$TEST_BASE/krun_build.sh:/script.sh:ro,Z" \
fedora /script.sh > /dev/null
echo ""
echo "[krun] Starting Build MicroVM to inspect shared volume..."
BUILD_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-build)
echo "[krun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_KRUN"
echo " *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***"
podman start -a krun-build
# Cleanup
podman rm -f krun-helper krun-build > /dev/null
echo ""
echo "======================================================="
echo " Test Complete."
Test 1 (crun) creates a helper container that mounts the builds directory with :Z (exclusive relabel) and writes artifact.txt. Podman assigns it a random MCS label - in this run it was s0:c20,c540. The file on disk inherits that label. Then a second container (the build container) mounts the same path without :Z and gets a different random label (s0:c46,c331). Since c46,c331 does not dominate c20,c540, the build container is denied access to the file.
Test 2 (krun) runs the same scenario but with --runtime krun, which boots each container inside a lightweight microVM via libkrun. The helper VM gets container_kvm_t:s0:c823,c999 and the build VM gets container_kvm_t:s0:c309,c405 - same MCS mismatch, same denial. The type changes from container_t to container_kvm_t, but the MCS mechanism is identical. On the host side, virtiofsd - the daemon that serves the volume into the VM via virtio-fs - runs under the MCS label Podman assigned to the VM. The build VM's virtiofsd is trapped in s0:c309,c405 and cannot access files labeled s0:c823,c999.
An interesting detail: inside the libkrun VMs, cat /proc/self/attr/current returns just kernel - SELinux is not available in the guest. The VM thinks it has no mandatory access control, but the host-side virtiofsd is still fully subject to MCS enforcement. This is a blind spot worth being aware of.
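To make the blind spot concrete, here is a tiny shell sketch (the classification logic is mine, purely illustrative): the guest reads /proc/self/attr/current and sees only kernel, which says nothing about what the host enforces.

```shell
#!/bin/sh
# Illustrative only: interpret what /proc/self/attr/current reports.
# "kernel" (as seen inside a libkrun guest) is easy to misread as
# "no MAC anywhere", even though the host-side virtiofsd stays confined.
classify_ctx() {
  case "$1" in
    kernel|"") echo "guest view: no SELinux visible (host may still enforce)" ;;
    *:*:*:s0*) echo "confined: $1" ;;
    *)         echo "unknown context: $1" ;;
  esac
}
classify_ctx "kernel"
classify_ctx "system_u:system_r:container_kvm_t:s0:c309,c405"
```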
The output from a run on Fedora with SELinux Enforcing and Podman 5.8.2:
=======================================================
TEST 1: Standard Container Isolation (crun)
=======================================================
[crun] Starting Helper Container (applying :Z relabel)...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c20,c540
[crun] -> Helper Process Context (Inside):
system_u:system_r:container_t:s0:c20,c540 [crun] -> File Label INSIDE Helper:
system_u:object_r:container_file_t:s0:c20,c540 /builds/artifact.txt
[crun] -> File Label ON HOST (Notice the specific MCS category):
system_u:object_r:container_file_t:s0:c20,c540 /tmp/gitlab-runner-mcs-test/crun-builds/artifact.txt
[crun] Starting Build Container to inspect shared volume...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c46,c331
*** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***
[Build-Internal] Process Context:
system_u:system_r:container_t:s0:c46,c331 [Build-Internal] Executing ls -laZ /builds :
ls: cannot open directory '/builds': Permission denied
[Build-Internal] Executing cat /builds/artifact.txt :
cat: /builds/artifact.txt: Permission denied
=======================================================
TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED
=======================================================
[krun] Starting Helper MicroVM (applying :Z relabel)...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c823,c999
[krun] -> Helper Process Context (Inside VM):
kernel [krun] -> File Label INSIDE Helper VM (Blindspot):
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c823,c999 10 May 2 2026 /builds/artifact.txt
[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):
system_u:object_r:container_file_t:s0:c823,c999 /tmp/gitlab-runner-mcs-test/krun-builds/artifact.txt
[krun] Starting Build MicroVM to inspect shared volume...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c309,c405
*** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***
[Build-Internal] Process Context (Inside VM):
kernel [Build-Internal] Executing ls -laZ /builds :
ls: /builds: Permission denied
ls: cannot open directory '/builds': Permission denied
[Build-Internal] Executing cat /builds/artifact.txt :
cat: /builds/artifact.txt: Permission denied
=======================================================
Test Complete.
GitLab's official suggestion and why it falls short
GitLab's documentation on configuring SELinux MCS suggests applying the same MCS label to all containers launched by a runner:
[[runners]]
[runners.docker]
security_opt = ["label=level:s0:c1000,c1000"]
This works - all containers get the same category pair, so the helper and build containers can share files. But it collapses MCS isolation between all concurrent jobs on that runner. With concurrent = 4, four simultaneous jobs all run as s0:c1000,c1000 and can read each other's /builds content - cloned source code, build artifacts, cached dependencies. On a shared or multi-tenant runner, this is a security regression: it trades MCS isolation for functionality.
For runners with concurrent = 1 or dedicated single-tenant runners this is an acceptable tradeoff, but it does not generalize to shared infrastructure where multiple untrusted projects run side by side.
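For that dedicated case, a sketch of what the runner config might look like (illustrative values; label=level is the option from GitLab's documentation, and concurrent = 1 is what makes the shared category tolerable):

```toml
# Illustrative single-tenant runner config: a fixed MCS label is only
# acceptable because concurrent = 1 prevents two jobs from co-residing.
concurrent = 1

[[runners]]
  name = "dedicated-runner"
  executor = "docker"
  [runners.docker]
    security_opt = ["label=level:s0:c1000,c1000"]
```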
How GNOME currently handles this
GNOME's runners are managed via an Ansible role that enforces SELinux in Enforcing mode, installs rootless Podman running as a dedicated podman system user with linger enabled, and deploys custom SELinux policy modules. The Podman service runs under SELinuxContext=system_u:system_r:container_runtime_t:s0-s0:c0.c1023 via a systemd override - the full MCS range (s0-s0:c0.c1023) gives the container runtime the ability to spawn containers at any MCS level and relabel volumes accordingly, as explained in the dominance rules above.
Four custom SELinux .te modules are compiled and loaded on every runner host: pydocuum (allows the image cleanup daemon to talk to the Podman socket), podman (grants user_namespace create and /dev/null mapping), flatpak (permits the filesystem mounts flatpak builds need), and gnome_runner (covers binfmt_misc access, device nodes, and other permissions GNOME OS builds require).
For the MCS problem specifically, the runner config.toml - rendered from a Jinja2 template via per-host Ansible variables - sets a fixed MCS label per runner type. Here's a representative snippet from one of the runner hosts:
[[runners]]
name = "a15948139c78"
executor = "docker"
[runners.docker]
image = "quay.io/fedora/fedora:latest"
privileged = false
security_opt = ["label=level:s0:c100,c100"]
devices = ["/dev/kvm", "/dev/udmabuf"]
cap_add = ["SYS_PTRACE", "SYS_CHROOT"]
[[runners]]
name = "a15948139c78-flatpak"
executor = "docker"
[runners.docker]
image = "quay.io/gnome_infrastructure/gnome-runtime-images:gnome-master"
privileged = false
security_opt = ["seccomp:/home/podman/gitlab-runner/flatpak.seccomp.json", "label=level:s0:c200,c200"]
cap_drop = ["all"]
This is the same approach GitLab's documentation suggests, with one refinement: we use different fixed categories per runner type - c100,c100 for untagged runners and c200,c200 for flatpak runners - so that flatpak builds and regular builds remain MCS-isolated from each other, even though builds of the same type share a category.
This is a pragmatic compromise, not an ideal solution. All concurrent jobs on the same runner type share the same MCS category. With concurrent: 4 on our Hetzner runners, four simultaneous untagged jobs can read each other's /builds content. For GNOME's use case - a community CI infrastructure where the runners are shared by GNOME project maintainers - this is an acceptable tradeoff. The alternative, leaving MCS labels random, would break every single job. But it is precisely this tradeoff that motivates exploring per-job VM isolation via microVMs.
Exploring libkrun
libkrun is a lightweight Virtual Machine Monitor (VMM) that integrates with Podman via --runtime krun, running each container inside a microVM with its own lightweight kernel. The appeal is strong: per-container VM isolation would give each job its own kernel and address space, making the MCS cross-container problem irrelevant inside the VM.
I tested libkrun on a Fedora system and hit an immediate blocker: Fatal glibc error: rseq registration failed. The rseq (restartable sequences) syscall was introduced in Linux 4.18, and glibc >= 2.35 registers it at startup. libkrun uses a custom minimal kernel that does not expose rseq support. Since the guest images - Fedora in our case - ship a modern glibc that expects rseq to be available, the process aborts at startup before any user code runs.
The libkrun kernel is compiled into the library itself and cannot be modified or replaced by the user. This is not a configuration issue but a fundamental limitation of the current libkrun release.
Even if the rseq issue were resolved, the MCS challenge would still be there - as the test script demonstrates in Test 2. On the host side, Podman assigns MCS labels to the virtiofsd process that serves the volume into the VM via virtio-fs. Different VMs get different host-side MCS labels, meaning the same :Z relabel / cross-container access denial applies. The mechanism changes from overlay mounts to virtio-fs, but the SELinux enforcement is identical: virtiofsd for the build VM runs at container_kvm_t:s0:c309,c405 and cannot access files labeled s0:c823,c999 by the helper VM's virtiofsd.
Firecracker and the custom executor path
Firecracker is another microVM technology, the one behind AWS Lambda and Fly.io, that could provide strong per-job isolation. However, there is no native GitLab Runner executor for Firecracker. The only integration path is the Custom Executor, which requires implementing prepare, run, and cleanup scripts from scratch.
The job image is exposed via CUSTOM_ENV_CI_JOB_IMAGE, but everything else is on the operator: pulling the OCI image, extracting a rootfs, booting a Firecracker VM with the right kernel and network configuration, injecting the build script, mounting or copying the cloned repository into the VM, collecting artifacts and cache after the job finishes, and tearing the VM down. GitLab provides an LXD-based example that shows the pattern - prepare creates a container and installs dependencies, run pipes the job script into it, cleanup destroys it - but adapting that to microVMs adds the complexity of VM lifecycle management, kernel and rootfs preparation, networking, and storage. This is a significant engineering effort, essentially rebuilding the entire Docker executor workflow from scratch.
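To give a feel for the shape of that work, here is a stub of just the run stage (GitLab invokes run_exec with two arguments, the path to the generated job script and the stage name; everything else in this sketch is hypothetical):

```shell
#!/bin/sh
# Hedged stub of a Custom Executor "run" stage handler. GitLab calls
# run_exec with the generated job script path and the stage name. A real
# microVM executor would copy the script into the guest and execute it
# there; this stub only reports the plan.
run_stage() {
  script_path="$1"
  stage="$2"
  echo "stage '$stage': would run '$script_path' inside a fresh microVM"
}
# Example invocation with made-up values:
run_stage /tmp/generated-job-script build_script
```

A real implementation would repeat this pattern across the prepare and cleanup hooks, managing VM boot and teardown around it.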
What comes next
MCS is a core SELinux feature. Type enforcement (TE) already confines processes by type - container_t can only access container_file_t, not user_home_t or httpd_sys_content_t - but TE alone cannot distinguish one container_t process from another. MCS adds that layer: by assigning each container a unique category pair, the kernel enforces isolation between processes that share the same type. Container A at s0:c100,c100 and Container B at s0:c200,c200 are both container_t, but MCS ensures they cannot touch each other's files. The conflict with GitLab Runner's multi-container-per-job architecture is that two containers that need to share a volume are given different categories by default. The workarounds we deploy today, including the fixed MCS labels on GNOME's runners, trade that inter-container isolation for functionality.
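The category check at the core of this can be sketched in a few lines of shell (string parsing only, for illustration; the kernel does the real enforcement):

```shell
#!/bin/sh
# Illustrative only: extract and compare the MCS category pairs of two
# SELinux labels, to show why s0:c100,c100 and s0:c200,c200 processes
# cannot share files even though both are container_t.
mcs_of() {
  # Keep everything after the sensitivity level "s0:".
  printf '%s\n' "$1" | sed -n 's/.*:s0:\(c[0-9,c]*\)$/\1/p'
}
a=$(mcs_of "system_u:system_r:container_t:s0:c100,c100")
b=$(mcs_of "system_u:system_r:container_t:s0:c200,c200")
if [ "$a" = "$b" ]; then
  echo "same category pair: sharing allowed"
else
  echo "category mismatch ($a vs $b): access denied"
fi
```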
The most promising direction I've found so far is the combination of Cloud Hypervisor and the fleeting-plugin-fleetingd plugin. Cloud Hypervisor is built on the Rust-VMM crates and is essentially a more capable sibling of Firecracker - it supports CPU and memory hotplugging, VFIO device passthrough, and virtio-fs, features that are often necessary for complex CI tasks like building large binaries or running UI tests, and that Firecracker's minimalist design deliberately omits. fleeting-plugin-fleetingd is a community plugin for GitLab's Instance Executor (the modern evolution of the Custom Executor) that automates the full VM lifecycle: downloading cloud images, creating copy-on-write disks, launching Cloud Hypervisor VMs with direct kernel boot, provisioning them via cloud-init, and tearing them down after each build. Each job gets a fresh, disposable VM, which is exactly the per-job isolation model we need. The plugin already handles networking via TAP interfaces and nftables SNAT, and supports customization of the VM image through cloud-init commands - so preinstalling Podman or other build tools is straightforward.
Beyond that, I'll also keep evaluating libkrun (promising Red Hat technology), Firecracker with a hand-rolled custom executor, and QEMU's microvm machine type. The common denominator across all of these - except for the fleeting-plugin-fleetingd path - is that none of them have an existing GitLab Runner integration. Regardless of which microVM technology we settle on, the path forward involves either building a workflow from scratch using the Custom Executor and its prepare, run, cleanup hooks, or leveraging the fleeting plugin ecosystem that GitLab has been building around the Instance and Docker Autoscaler executors.
CVE-2026-31431
The urgency of per-job VM isolation was underscored by CVE-2026-31431 ("Copy Fail"), a nine-year-old logic bug in the kernel's algif_aead cryptographic module disclosed at the end of April. The flaw lets an unprivileged local user write four controlled bytes into the page cache of any readable file - enough to patch a setuid binary like /usr/bin/su and escalate to root. Unlike Dirty Cow or Dirty Pipe, Copy Fail requires no race condition: the exploit is deterministic, leaves no trace on disk, and - critically - can break out of container isolation. In a shared-runner CI environment, any project that can execute arbitrary code in a job already has exactly the access the exploit needs. Separately, Claude Mythos - an Anthropic model trained for cybersecurity research that escaped its own sandbox during a red-team exercise in April - demonstrated that AI-assisted vulnerability discovery and exploitation is no longer theoretical; models can now autonomously find and chain bugs that would take human researchers weeks to exploit. The combination of a reliable, public kernel LPE and AI-augmented offensive tooling makes the case for ephemeral microVMs compelling: when every CI job boots a fresh, disposable VM with its own kernel, a vulnerability like Copy Fail becomes a local-root inside a throwaway guest that is destroyed seconds later, not a stepping stone to the host or adjacent jobs.
That should be all for today, stay tuned!
02 May 2026 1:00am GMT
01 May 2026
Planet GNOME
Allan Day: GNOME Foundation Update, 2026-05-01
It's the first day of May, and it's time for another update on what's been happening at the GNOME Foundation. It's been two weeks since my last post, and this update covers highlights of what we've been doing since then.
Remembering Seth Nickell
This week we received the very sad news of the death of Seth Nickell. It's been a long time since Seth was active in the GNOME project, so many of our members won't be familiar with him or his work. However, Seth played an important part in GNOME's history, and was a special and unique character.
Jonathan wrote a wonderful post about Seth, with some great stories. Federico migrated the memorial page from the old wiki to the handbook, and added Seth there (work is currently ongoing to develop that page). Seth's death has also been covered by LWN, which includes dedications from GNOME contributors.
Whether you knew Seth or came to GNOME after his time, I think we can all appreciate the contributions that he made, which live on in the project and wider ecosystem to this day.
GNOME Fellowship
Applications for the first round of the new GNOME Fellowship program closed last week, on 20th April. We had a great response and received some excellent proposals, and now we have the tough job of deciding who is going to receive support through the program.
To that end, the Fellowship Committee met this week to review the proposals and begin the selection process. We have identified a shortlist of candidates, and will be meeting again next week to narrow the selection further.
Since this is the first round of the Fellowship, we are establishing the selection process as we go. Hopefully we'll get to put this to use again in future Fellowship rounds!
Conferences
Linux App Summit (LAS) will be held in Berlin on 16-17 May - that's in a little over two weeks! The schedule has been finalized and looks great, and this year's LAS is shaping up to be a fantastic event. Please do consider going, and please do register!
Due to high demand, the organizing team have decided to stream the talks from this year, so look out for details about remote participation.
Aside from LAS, preparations for July's GUADEC conference continue. Travel sponsorship is still available if you need assistance in order to attend, so do consider applying for that.
Office transitions ongoing
Work to update many of our backoffice systems and processes has continued at a steady pace over the past fortnight. Many of the big moves are done (new payments system, email accounts, mailing system, accounting procedures, credit card platform), and we are now firmly in the final stages, making sure that our new address is used everywhere, emails are going to the right places, recurring payments are transferred over to new credit cards, and vendors are set up on the new payments system.
The value of this work is already showing, with smoother accounting procedures, more up-to-date finance reports, and better tracking of incoming queries.
That's it for this update. Thanks for reading, and take care.
01 May 2026 10:34am GMT
This Week in GNOME: #247 International Workers' Day
Update on what happened across the GNOME project in the week from April 24 to May 01.
GNOME Circle Apps and Libraries
NewsFlash feed reader ↗
Follow your favorite blogs & news sites.
Jan Lukas announces
Hi TWIG. Newsflash can now swipe between articles. This closes off one of the oldest still standing feature requests. And hopefully makes all the mobile users happy.
Third Party Projects
xjuan reports
Casilda 1.2.4 Released!
I am very happy to announce a new version of Casilda!
A simple Wayland compositor widget for Gtk 4 and GNOME
This release comes with several new features like fractional scaling support, bug fixes, and extra polish that is making it start to feel like a proper compositor. You can read more about it at https://blogs.gnome.org/xjuan/2026/04/19/casilda-1-2-4-released/
Anton Isaiev says
RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)
Versions 0.11.0-0.12.7 bring the three biggest features since the project started, plus a mountain of polish driven by community feedback.
Cloud Sync landed. You can now synchronize connection configurations between devices and team members through any shared directory - Google Drive, Syncthing, Nextcloud, Dropbox, or even a USB stick. Two modes: Group Sync (per-group .rcn files with Master/Import access) and Simple Sync (single-file bidirectional merge). A file watcher auto-imports changes, and the new Cloud Sync settings page shows sync status, synced groups, and available files. CLI got sync status, sync list, sync export, sync import, and sync now commands.
SSH Tunnel Manager is a standalone window for managing headless SSH port-forwarding tunnels without terminal sessions - Local, Remote, and Dynamic forwards with auto-start on launch and auto-reconnect. SSH jump host support was extended to RDP, VNC, and SPICE connections, so you can tunnel graphical sessions through a bastion host. Ctrl+T opens the tunnel manager.
Tab management was completely reworked around AdwTabView. Tab Overview (Ctrl+Shift+O) gives a GNOME Web-style grid of all open tabs. Tab Pinning keeps important tabs at the left edge. A tab switcher in the Command Palette (% prefix) provides fuzzy search across open tabs. Right-click context menu gained Close Others / Left / Right / All / Ungrouped actions.
Other highlights: custom terminal color themes with full 16-color ANSI palette editor; terminal scrollbar; font zoom (Ctrl+Scroll); copy-on-select; SSH Keep-Alive and verbose mode; Hoop.dev as the 11th Zero Trust provider; custom SSH agent socket override (fixes KeePassXC/Bitwarden agent in Flatpak); RDP mouse jiggler; terminal activity/silence monitor; host online check with auto-connect; highlight rules now render with actual colors via Cairo overlay; connection dialog rebuilt with adw:: widgets following GNOME HIG.
Packaging grew significantly. RustConn is now available as Flatpak on Flathub, Snap with strict confinement, AppImage, native .deb and .rpm packages via OBS repositories (Debian 13, Ubuntu 24.04/26.04, Fedora 43/44, openSUSE Tumbleweed/Slowroll/Leap 16.0), plus ARM64 builds. A huge thank you to the community maintainers of the AUR package for Arch Linux and the FreeBSD port; there is also an open request to include RustConn in Debian proper.
Thank you to everyone who reported issues, contributed translations, and tested pre-releases - your feedback shaped every one of these 25 releases. Special thanks to GaaChun for the complete Simplified Chinese translation, and to Phil Dodd and Todor Todorov for the support.
Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn
Capypara says
Field Monitor 50.0
Field Monitor - the remote desktop viewer focused on accessing VMs - has been updated to version 50.0.
Some highlights:
- Support for multiple monitors for SPICE connections.
- Support for sharing USB devices with SPICE sessions using the XDG USB Portal (even with the Flatpak).
- KVM/QEMU VMs can now be accessed with hardware accelerated GPU rendering - if enabled.
- Field Monitor now validates server certificates and asks you for your trust if a certificate isn't automatically trusted by your system.
- Several bugfixes to RDP and SPICE sessions, such as cursor rendering issues and overall performance.
Field Monitor is available via Flathub: https://flathub.org/apps/de.capypara.FieldMonitor
Christian says
The first public release of Gitte is out!
Gitte is a GTK4/libadwaita git GUI written in Rust, built on Relm4 and git2 (no shelling out to the git binary).
What's in the initial release:
- Browse repositories with a saved repositories start screen
- View the working copy, stage and unstage changes, commit them, amend commits
- Read the commit log and inspect diffs file by file
- Manage branches, tags, remotes, and stashes
- Push to and pull from remotes, auto-fetching remotes in the background
It's early days, so expect rough edges. Bug reports and feedback are very welcome.
Get it on Flathub or check the source code.
Parabolic ↗
Download web video and audio.
Nick reports
Parabolic V2026.4.1 is here with plenty of bug fixes!
Here's the full changelog:
- Fixed an issue where some settings would not save correctly
- Fixed an issue where playlist downloads with a resolution limit had no audio
- Fixed an issue where portrait/vertical videos in playlists downloaded at incorrect resolutions
- Fixed an issue where downloads from sites with muxed-only streams would fail
- Fixed an issue where downloading a time frame clip from a long video produced an incomplete result
- Fixed an issue where downloading a time frame clip from a long video could hang indefinitely with aria2c enabled
- Fixed an issue where X/Twitter quoted downloads could produce the same video twice
- Fixed an issue where deno was unable to be updated in-app on Linux
- Fixed an issue where browser cookies could not be found when running via Flatpak on Linux
- Fixed an issue where Parabolic would not start on KDE desktops
- Fixed an issue where Parabolic did not open links from browser extension on Windows
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
01 May 2026 12:00am GMT
30 Apr 2026
Planet GNOME
Felipe Borges: Let’s Welcome Our Google Summer of Code 2026 Contributors!
GNOME is once again participating in GSoC. This year, we have 6 contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.
As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.
GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!
Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul, Adrian Vovk, Jonas Ådahl, and Robert Mader.
For more information, visit https://summerofcode.withgoogle.com/programs/2026/organizations/gnome-foundation
30 Apr 2026 9:05pm GMT
Sophie Herold: Testing Library Code in GNOME OS
Yesterday, I wanted to debug a glycin (or Shell) issue on GNOME OS. Turns out, there is currently no documentation that works or includes all necessary steps.
Here is the simplest variant if you don't develop on GNOME OS and have an internet connection that can download 16 GB in a reasonable amount of time.
First we get a toolbox image to build our code.
$ toolbox create gnomeos-nightly -i quay.io/gnome_infrastructure/gnome-build-meta:gnomeos-devel-nightly
After entering the toolbox with
$ toolbox enter gnomeos-nightly
we can clone and build our project with sysext-utils that are included in our image:
$ meson setup ./build --prefix /usr --libdir="lib/$(gcc -print-multiarch)"
$ sysext-build example ./build
This creates an example.sysext.raw file.
Now, we need a GNOME OS to test our build. We can download the image and install it in Boxes. After logging in, we can just drag and drop the example.sysext.raw into the VM.
Before we can install it, we need to get the development tools for our VM:
$ run0 updatectl enable devel --now
After that, we need to restart the VM.
Finally, we can test our build:
$ run0 sysext-add ~/Downloads/example.sysext.raw
Adding the --persistent flag to this command will make the changes stay active across reboots.
If the changes made it impossible to boot into the VM again, we can start the VM in "Safe mode" from the boot menu. After logging in, we can manually remove the extension:
$ run0 rm /var/lib/extensions/example.raw
Happy hacking!
30 Apr 2026 12:58pm GMT
vixalien: A love letter to mise
Recently, I have been using GNOME OS, as my daily driver.
After being a seasoned Linux user for a long time, dabbling in distros like Alpine Linux, Arch Linux, and Fedora (even Silverblue), I tried switching to something more opinionated that "works by default", all while being hard to break.
And given my existing relationship with GNOME, GNOME OS was a choice worth looking into.
One feature of GNOME OS is that it is immutable (i.e. system files are read-only). It also doesn't ship with a package manager, so there is no built-in way to install extra packages.
You can install GUI applications normally using Flathub (and Snap/AppImage), but installing non-GUI software like development tools or CLI packages is not built-in.
There are of course several solutions you can use, such as homebrew or coldbrew, but today we will focus on mise.
What is mise?
mise pitches itself as "One tool to manage languages, env vars, and tasks per project, reproducibly."
However, I only use a fraction of its functionality, in that I only use it to install packages.
How to install it?
The instructions are here: https://mise.jdx.dev/getting-started.html
But essentially it's as easy as running this (remember to read the source of the installer first):
curl https://mise.run | sh
Activating mise
Then you will need to "activate" mise, which essentially makes tools installed by mise available by modifying your $PATH variable.
echo 'eval "$(~/.local/bin/mise activate bash --shims)"' >> ~/.bashrc
The instructions above are for bash, so you will need to consult the docs to get instructions for your shell.
You will need to re-login for the mise command to be available, or open a new shell.
A note on shims
Feel free to skip this section, as it's just an explainer
Also, note that the above command uses the --shims flag, which is NOT the default. It essentially means that mise will modify the $PATH variable, instead of doing a weird thing where it re-activates itself after each command you run.
The non-shim way to activate mise is useful when you use mise to install different package versions across different repositories, but that sometimes breaks IDEs and is out of the scope of this blog post.
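The idea behind a shim is simple enough to demo in a few lines of shell (a toy example with made-up paths and a made-up tool, not what mise actually generates):

```shell
#!/bin/sh
# Toy demo of the shim idea: a tiny wrapper script on $PATH that forwards
# to the real binary living elsewhere. mise's shims follow this principle;
# the directories and tool name here are invented for the demo.
demo=$(mktemp -d)
mkdir -p "$demo/shims" "$demo/tools"
# The "real" tool, deliberately not on $PATH:
printf '#!/bin/sh\necho "real tool ran: $@"\n' > "$demo/tools/hello"
chmod +x "$demo/tools/hello"
# The shim that activation would place on $PATH:
printf '#!/bin/sh\nexec "%s" "$@"\n' "$demo/tools/hello" > "$demo/shims/hello"
chmod +x "$demo/shims/hello"
out=$(PATH="$demo/shims:$PATH" hello world)
echo "$out"
rm -rf "$demo"
```

Activating with --shims simply prepends a directory of such wrappers to $PATH, which is why it behaves predictably in login shells and IDEs.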
Installing packages
You can start installing your first package with mise:
mise use -g java
The above command installs java globally (hence the -g flag), which you can now confirm by running:
$ java --version
openjdk 26.0.1 2026-04-21
OpenJDK Runtime Environment (build 26.0.1+8-34)
OpenJDK 64-Bit Server VM (build 26.0.1+8-34, mixed mode, sharing)
You can install many more tools, of which you can find a non-exhaustive list here: mise-tools.
For example, you can similarly install a specific major version of nodejs
mise use -g node@22
Or install the latest LTS version of node
mise use -g node@lts
Or you can be overly specific
mise use -g node@v25.9.0
mise use -g node@25.9.0 # this works too!
Searching
Use mise search to find packages.
mise search typ
Tool Description
typos Source code spell checker. https://github.com/crate-ci/typos
typst A new markup-based typesetting system that is powerful and easy to learn. https://github.com/typst/typst
typstyle Beautiful and reliable typst code formatter. https://github.com/Enter-tainer/typstyle
quicktype Generate types and converters from JSON, Schema, and GraphQL provided by https://quicktype.io. https://www.npmjs.com/package/quicktype
Uninstalling
mise unuse -g node
Updating
mise self-update # updating mise itself
mise up # updating tools installed by mise
mise outdated # checking if you have outdated tools
Config File
Tools you install with mise globally will be saved in the file ~/.config/mise/config.toml, which you can commit to your dotfiles so you can have similar tools across different machines.
Here's an example of my mise config file at the time of writing this blog post.
# ~/.config/mise/config.toml
[tools]
bat = "latest"
btop = "latest"
bun = "latest"
caddy = "latest"
"cargo:mergiraf" = "latest"
deno = "latest"
difftastic = "latest"
doggo = "latest"
fastfetch = "latest"
fzf = "latest"
github-cli = "latest"
"github:railwayapp/railpack" = "latest"
glab = "latest"
helix = "latest"
java = "latest"
lazygit = "latest"
node = "latest"
"npm:vscode-langservers-extracted" = "latest"
oha = "latest"
pipx = "latest"
pnpm = "latest"
prettier = "latest"
rust = "latest"
scooter = "latest"
tmux = "latest"
usage = "latest"
yt-dlp = { version = "latest", rename_exe = "yt-dlp" }
zellij = "latest"
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }
rclone = "latest"
mc = "latest"
go = "latest"
"go:git.sr.ht/~migadu/alps/cmd/alps" = "latest"
"npm:localtunnel" = "latest"
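Versions don't have to be "latest": mise does fuzzy version matching, so pinning a version prefix keeps mise up within that range. A minimal sketch (tool names here are just examples):

```toml
# ~/.config/mise/config.toml (sketch)
[tools]
node = "22"       # any 22.x; `mise up` stays within this prefix
go = "latest"     # always tracks the newest release
```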
After the tools inside the config have changed, you can run the following command to make mise re-install packages from the config file
mise install
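The same [tools] format also works per project: mise reads a mise.toml from the current directory (and its parents), which is what mise use without -g writes to. A sketch of a project-local file, assuming a project that wants pinned Node.js and Python versions:

```toml
# ./mise.toml — project-local, takes precedence over the global config
[tools]
node = "22"
python = "3.12"
```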
Mise Backends
Mise is able to install packages from multiple sources. These sources are called "backends" by mise.
When you type mise use -g node@22, it will resolve node against the registry and figure out that the default backend for node is core.
Core
The default backend is called core, and tools from this backend are usually provided from the official source.
Tools available from core include Node.js, Ruby, Python, and more.
We could also have been explicit about the backend we want to use:
mise use -g core:node
You can find a list of all core packages here.
Aqua
You can also install packages from the Aqua registry.
Language Package Managers
You can also install tools from their respective package managers. Here are a few examples
npm
You can install prettier, typescript, oxlint and other JavaScript/TypeScript tools published on the npm registry. Find the tools on npm
mise use -g npm:prettier
pipx
You can install black, poetry and other Python tools from PyPI. Find the tools on PyPI
mise use -g pipx:black
mise use -g pipx:git+https://github.com/psf/black.git # install from a GitHub repo
cargo
You can install cargo packages with this backend. You need to have Rust installed beforehand though, which you can do with mise
mise use -g rust
Then install your packages
mise use -g cargo:eza
There are more language package manager backends, such as gem, go, and others.
GitHub
You can install packages from GitHub directly, as long as the project you are trying to install uses GitHub releases
mise use -g github:railwayapp/railpack
mise will usually auto-detect which asset you want to use, but you can also specify an asset pattern (a glob) in ~/.config/mise/config.toml
[tools]
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }
30 Apr 2026 12:00am GMT
29 Apr 2026
Planet GNOME
Jonathan Blandford: Remembering Seth
I heard the news about Seth Nickell's passing last week, and have been in a bit of a funk ever since.
Seth was brilliant, iconoclastic, fearless.
It's been a long while since Seth was an active part of the GNOME Community, but his influence on the project can still be seen in its DNA if you know where to look. He arrived on the GNOME scene while still in school with hundreds of ideas on how to improve things. It was an interesting time: We had just launched GNOME 1.5 and were searching for a new path towards GNOME 2.0. The Sun usability study had been published and the community had internalized the need to change directions. Seth rolled up his sleeves and did the work needed to help light that path.
Seth championed radical proposals such as instant apply, button ordering, message dialog fixes, and more. He cleaned up the control-center proposing some of the most visible changes from GNOME 1 to 2. He also did the initial designs for epiphany, pushing for a cleaner browser experience during an era of high browser complexity. He had a vision of desktops as a democratic tool, as easy and natural to use as any other tool in the human experience.
As a designer, Seth was focused on trying to understand who we were designing for and making sure we were solving problems for them. While he wasn't beyond fixing paddings / layouts, he wanted to get the Big Picture right. He wasn't beyond rolling up his sleeves writing code to move things forward, but was at his best as a champion and visionary, arguing for us to take risks and continue to innovate.
Spending time with Seth was a hoot. He had such a flair for the dramatic. I remember…
- …the time he sold the design for what would become NetworkManager to a bunch of engineers. He got up on the stage and announced: "We are going to make this [holding an ethernet cable] as easy to use as this [producing a power plug]!" It's hard to describe how many steps it took to set up networking back then.
- …his vision of an improved messaging system - Project Yarrr. He used ☠ (U+2620) as the SVN repo name, partially to see how many internal tools weren't UTF-8 clean.
- …him breaking out into an operatic rendition of "Tradition" when developers were pushing back on a change he was proposing.
- …the time he changed everyone's background in the RH office to have crop circles overnight. He showed up the next morning in a robe dressed as an Old Testament prophet, beating a drum and carrying a "RHEL5 IS NIGH" sign.
- …hanging printouts of hate mail he got for various design choices outside of the Mega Cube (a group activity)!
- And everyone who was around for the Dark Princess Incident will always remember it.
Being one of the public faces of GNOME 2 was hard, and he moved on. Later, he worked on OLPC and Sugar, and made his mark there. After that, he seemed to travel a lot. We lost touch, though he'd reappear every couple of years to say hi. I hope he found what he was looking for.
Farewell, my friend. The world now has less color in it.

29 Apr 2026 5:07am GMT
28 Apr 2026
Planet GNOME
Thibault Martin: TIL that Yubikeys are convenient for Linux login
I got myself a Yubikey recently, and I wanted to use it as a nice convenience to:
- Grant me sudo privileges
- Unlock my session
- Decrypt my LUKS-encrypted disk
I've only managed to do the first two, since they both rely on Linux Pluggable Authentication Modules (PAM). Luckily for me, one of PAM's modules supports U2F, the standard Yubikeys rely on.
First I need to install pam-u2f to add U2F support to PAM, and pamu2fcfg to configure my key.
$ sudo rpm-ostree install pam-u2f pamu2fcfg
Since I'm running an immutable OS I need to reboot, and then I can create the correct directory and file to dump a U2F key into it.
$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys
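The resulting u2f_keys file maps my username to the registered credential. Roughly, each line is username, key handle, public key, COSE algorithm, and options, colon- and comma-separated. A sketch with placeholder values (not real credentials):

```
# ~/.config/Yubico/u2f_keys (placeholder values)
alice:AbCdKeyHandle...,04aabbPublicKey...,es256,+presence
```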
Then I make sure to have a root session open in case I lock myself out of sudoers.
$ sudo su
#
In a different terminal, I can edit /etc/pam.d/sudo to add the pam_u2f line
#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
I save this file and open a new terminal. I type in sudo vi and it asks me to touch my FIDO authenticator before opening vi! If I touch the Yubikey, it indeed opens vi with root privileges.
Let's break down the line:
- auth: this rule applies to authentication
- sufficient: passing this authentication challenge is enough (it's not an additional factor of authentication)
- pam_u2f.so: the module we load is for U2F, the standard Yubikeys use
- cue: print "Please touch the FIDO authenticator." when the user needs to authenticate
- openasuser: fetch the authentication file without root privileges
It's also possible to use it to unlock my session, but it would be a bit reckless to allow anyone with my Yubikey to log into my laptop. If my backpack gets stolen and it has both my Yubikey and my laptop, anyone can log in.
It's possible to make the login screen require either my user password, or all of
- The Yubikey itself
- The PIN of the Yubikey
- Me to touch the Yubikey
If someone fails more than three times to enter the correct PIN, the Yubikey will lock itself and require a PUK to be unlocked. This gives me an additional layer of security, and it's more convenient than having to type a full length passphrase.
I've added the following line to /etc/pam.d/greetd (the greeter I use):
#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser pinverification=1 userpresence=1
auth substack system-auth
[...]
[!warning] I can lose my Yubikey
I use my Yubikey as a nice convenience to set up a weaker PIN while not compromising too much on security. I use it instead of a password, not in addition to it.
Since I can lose or break my Yubikey and I don't want to buy two of them, I make the U2F login sufficient but not required. This means I can still fall back to password authentication if I lose my Yubikey.
DankMaterialShell uses its own lockscreen manager too. I still want to be able to fall back to password authentication if need be, so I'll configure it to accept U2F OR the password, not both.
This means that the lockscreen will call /etc/pam.d/dankshell-u2f to know what to do when the screen is locked. Since this file doesn't exist, I can create it with the following content.
#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser pinverification=1 userpresence=1
I need a fallback for when I don't have my Yubikey, so I also create one for that occasion
#%PAM-1.0
auth include system-auth
Finally, I have a consistent setup where both my login and lock screens require me to plug in my key, enter its PIN and touch it, or enter my full password. When it comes to sudo, I only need to touch my key, without entering a PIN.
My next quest will be to use my Yubikey to unlock my LUKS-encrypted disk.
28 Apr 2026 10:00am GMT
27 Apr 2026
Planet GNOME
Jordan Petridis: Goblins in your toolchain
At the start of the month, Bilal gave us all a giant gift with Goblint. In its first week it was already impressive. Now it's an invaluable tool for anyone that has ever interfaced with GObject, GLib or GTK. It will catch leaks and bugs, or even offer to auto-fix and modernize your code to the modern paradigms we use. It's one of those things that is going to save countless hours of debugging and, more importantly, prevent issues before they even get committed. Jonathan Blandford wrote about using it two days ago, and I suggest you read the post.
Everyone is trying to use goblint, and we are all stumbling upon the same issues integrating it into our tooling. Initially, it was only able to produce SARIF reports, which GitLab still has behind a feature flag, and which are only available in GitLab Enterprise Editions anyway.
I added an export for GitLab's Code Quality format, which has some support in the non-proprietary Community Edition we use in the GNOME and Freedesktop.org instances. Sadly, almost everything nice is still only available in the enterprise editions, but at least there is this little widget in the Merge Requests page.

Additionally, we now have CI templates for Goblint. One is adding a job to the existing gnomeos-basic-ci component we use everywhere. Simply go to your latest pipeline and look for the job.
The report will also show up in Merge Requests that have been updated since yesterday. The gnomeos-basic-ci component has other goodies like sanitizers, static analyzers, test coverage, etc. wired up out of the box, so you should give it a try if you are not using it yet.
If you do use it but don't want the goblint job, you can disable it easily with the goblint: "disabled" input, similar to all the other tools the component provides.
include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"

If you want only a goblint job, I've also added a standalone template that you can use (or copy-paste from it).
include:
  - component: "gitlab.gnome.org/GNOME/citemplates/goblint@26.1"
    inputs:
      job-stage: "lint"

In order for the Code Quality report to work, you will need to have a report uploaded from your target branch, so GitLab will have something to compare the one from the merge request with. The template rules will handle that for you, but keep it in mind.
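For reference, a Code Quality report is just a job artifact that GitLab picks up. The citemplates component wires this up for you, but a job exposing such a report looks roughly like this (job name and script are placeholders, not the template's actual contents):

```yaml
# Sketch: how a GitLab CI job exposes a Code Quality report.
goblint:
  stage: lint
  script:
    - ./run-goblint.sh > gl-code-quality-report.json  # placeholder invocation
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```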
At this moment all the lints are warnings, so the job will never be fatal. This is why we can enable it by default without worrying about breaking pipelines for now. You can further configure its behavior to your needs, and error out if you want to, through the configuration file.
min_glib_version = "2.76"

[rules.g_declare_semicolon]
level = "ignore"

[rules.untranslated_string]
level = "error"
ignore = ["**/test-*.c"]

It's also very likely that we are going to add goblint and its LSP server to the GNOME SDK Flatpak runtime, along with GNOME OS, so it will always be available for use with tools like Builder and foundry.
Enjoy
27 Apr 2026 10:05am GMT