03 May 2026
Planet GNOME
Nick Richards: WhatCable, Framework, and USB-C
USB-C is excellent, provided you don't look too closely.
I've been seeing a drumbeat of interest in the internals of USB-C: Darryl Morley's macOS WhatCable, Chromebooks exposing lots of lovely info about e-markers, USB cable testers and a bit more. Very infrastructure club topics. So I made a small GTK app, also called WhatCable, which is intended to show what Linux knows about your USB ports, cables, chargers and devices, written as a GNOME/libadwaita app using the interfaces Linux exposes through sysfs.
The hope was fairly straightforward: plug things into my Framework 13, ask Linux what is going on, and present the answer in a way that doesn't require remembering which bit of /sys to poke. In particular I wanted cable identity and e-marker details. These are the useful little facts that tell you whether a cable is what it claims to be, or at least what it claims to be electronically. Given the number of USB-C cables in the house whose origin story is "came in a box with something", this felt like a public service, or at least a satisfying evening.
The first bit is pleasantly sensible. Linux has standard-ish places for this information:
/sys/bus/usb/devices
/sys/class/typec
/sys/class/usb_power_delivery
/sys/bus/thunderbolt/devices
When those are populated, a normal unprivileged app can learn quite a lot. It can show USB devices, Type-C ports, partners, cables, roles, power data, Thunderbolt and USB4 domains. That's exactly the sort of thing a small Flatpak app should be good at: read some public kernel state, translate it into something at least moderately human friendly and then depart.
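As a rough illustration, that kind of probe can be as small as a shell loop over /sys/class/typec. This is a sketch, not the app's actual code; the attribute names (data_role, power_role, port_type, usb_power_delivery_revision) follow the kernel's sysfs-class-typec ABI, and the base directory is a parameter so the probe can be pointed at a fake tree:

```shell
#!/bin/sh
# Minimal sketch: list Type-C ports and a few standard attributes.
# Defaults to the real sysfs location; pass another directory to test.
dump_typec() {
    base="${1:-/sys/class/typec}"
    found=0
    for port in "$base"/port*; do
        [ -d "$port" ] || continue
        case "$(basename "$port")" in
            *-*) continue ;;  # skip portN-partner / portN-cable entries
        esac
        found=1
        echo "=== $(basename "$port") ==="
        for attr in data_role power_role port_type usb_power_delivery_revision; do
            [ -f "$port/$attr" ] && echo "  $attr: $(cat "$port/$attr")"
        done
    done
    [ "$found" -eq 0 ] && echo "no Type-C ports exposed under $base"
    return 0
}

dump_typec "$@"
```

On a machine with populated Type-C class devices this prints one stanza per port; on mine it prints the "no Type-C ports" line, which is exactly the situation described below.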
On my Framework 13, the USB device and Thunderbolt sides were useful. The Type-C side was not. /sys/class/typec existed but had no ports. /sys/class/usb_power_delivery existed but was empty. This is a slightly annoying result, because it means the nice standard API is present as a signpost rather than a destination.
The next clue was that the machine clearly does have USB-C machinery, and not just because I could look at the side of the device. It is a Framework 13 with the embedded controller and Cypress CCG power delivery controllers doing real work. The relevant kernel modules were loaded, including UCSI and Chrome EC pieces. There was also an ACPI UCSI device at:
/sys/bus/acpi/devices/USBC000:00
but ucsi_acpi did not appear to bind to it and create the Type-C class ports. So the hardware and firmware know things, but they were not arriving in the standard Linux userspace shape.
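Checking for that binding needs no special tooling: a sysfs device that has a driver bound carries a driver symlink. A tiny sketch (the ACPI path is the one from this machine; adjust for yours):

```shell
#!/bin/sh
# Sketch: report whether a driver has bound to a sysfs device node.
# Bound devices expose a "driver" symlink pointing at the driver.
dev_bound() {
    [ -L "$1/driver" ]
}

dev="${1:-/sys/bus/acpi/devices/USBC000:00}"
if dev_bound "$dev"; then
    echo "$dev: bound to $(basename "$(readlink "$dev/driver")")"
else
    echo "$dev: no driver bound"
fi
```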
Framework's own tooling gives another route in. I built framework_tool from FrameworkComputer/framework-system and asked the EC what it could see. The Framework-specific PD port command did not work on this firmware:
USB-C Port 0:
[ERROR] EC Response Code: InvalidCommand
and similarly for the other ports. That's not very poetic, but it is at least clear.
The Chromebook-style power command was more useful. With a charger connected it reported, for example:
USB-C Port 0 (Right Back):
Role: Sink
Charging Type: PD
Voltage Now: 19.776 V, Max: 20.0 V
Current Lim: 2250 mA, Max: 2250 mA
Dual Role: Charger
Max Power: 45.0 W
That's good information. It's not cable identity, but it is the kind of port state people actually want when they are trying to work out why a laptop is charging slowly, or not charging, or doing something else mildly USB-C shaped.
framework_tool --pd-info could also talk through the EC to the Cypress controllers and report their firmware details:
Right / Ports 01
Silicon ID: 0x2100
Mode: MainFw
Ports Enabled: 0, 1
FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
Left / Ports 23
Silicon ID: 0x2100
Mode: MainFw
Ports Enabled: 0, 1
FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
Again, useful. Again, not the cable.
Much of this investigation and app code was written with AI tools in the loop. That was useful for chasing down boring plumbing and generating probes. The decisive test was asking the Chrome EC for the newer Type-C discovery data directly. The EC advertised USB PD support, but not the newer Type-C command set. EC_CMD_TYPEC_STATUS and EC_CMD_TYPEC_DISCOVERY both came back as invalid commands on all four ports.
That means that on this Framework 13 firmware path I cannot get Discover Identity results, SOP/SOP' discovery data, SVIDs, mode lists or e-marker details through Chrome EC host commands. The cable may well be telling the PD controller interesting things, but those things are not exposed through a stable unprivileged interface I can sensibly use in a desktop app.
This is the main lesson from the whole exercise: USB-C inspection on Linux is not one API. It is a set of possible stories. Sometimes the kernel Type-C class tells you lots of things. Sometimes Thunderbolt sysfs tells you a different useful slice. Sometimes a vendor EC can tell you power state, but only as root. Sometimes the information exists below you somewhere, but not in a form you should build an app around.
So WhatCable needs to be honest. It should show the sources it can read, and it should say when a source is unavailable rather than pretending absence means certainty. "No cable identity exposed on this machine" is a very different statement from "this cable has no identity". The former is boring but true. The latter is how you end up lying with an icon (it is not a nice icon).
The current shape I think is right is:
- use USB, Type-C, USB PD and Thunderbolt sysfs whenever they are available;
- show raw values as well as friendly summaries;
- explain missing sources in diagnostics;
- treat Framework EC data as an optional extra, not a default dependency;
- if EC access is added, put it behind a narrow read-only helper rather than teaching a Flatpak app to fling arbitrary commands at /dev/cros_ec.
That last point matters. On the host /dev/cros_ec exists, but it is root-only. Making a normal app require broad device access would be a poor bargain. A small privileged helper that answers a few known-safe questions might be acceptable. A graphical app with arbitrary EC command execution would be exciting in the wrong way.
This is not quite the result I wanted when I started. I wanted to show a friendly "this is a 100W e-marked cable" label and feel very clever about it. What I have instead is a more modest app and a better understanding of where the bodies are buried. That's still useful. A tool that tells you what your machine actually exposes is better than one that implies the USB-C universe is more orderly than it is. Given this, I'm not going to be sharing this one more widely, but fork away if you wish, or come back with a better idea.
It's very easy to run with GNOME Builder, so just check out the source and 'press play', or get an artifact out of the GitHub Actions. If you run WhatCable on a different laptop and see rich Type-C data, lovely. If you run it on a Framework 13 like mine and mostly see USB devices, Thunderbolt controllers and a note that Type-C data is missing, that is also information. Not as glamorous as catching a suspicious cable in the act, but much more likely to be true.
03 May 2026 8:10pm GMT
02 May 2026
Planet GNOME
Andrea Veri: SELinux MCS challenges with GitLab Runners
Table of Contents
- Table of Contents
- Introduction
- The MCS problem
- The test script
- GitLab's official suggestion and why it falls short
- How GNOME currently handles this
- Exploring libkrun
- Firecracker and the custom executor path
- What comes next
Introduction
GNOME's GitLab runners use Podman as the container runtime with SELinux in Enforcing mode on Fedora. The GitLab Runner Docker/Podman executor spawns multiple containers per job: a helper container that clones the repository and handles artifacts, and a build container that runs the actual CI script. Both containers need to share a /builds volume - and this is where SELinux's Multi-Category Security (MCS) becomes a problem.
The MCS problem
An SELinux label has four fields: user:role:type:level. For containers the interesting part is the level, also called the MCS field. A level looks like s0:c123,c456 - s0 is the sensitivity (always s0 in targeted policy), and c123,c456 are the categories. A label can technically carry many categories; container runtimes conventionally assign two per container.
MCS access is based on dominance. A subject's label dominates an object's label if the subject's categories are a superset of (or equal to) the object's categories:
| Subject | Object | Access? | Why |
|---|---|---|---|
| s0:c100,c200 | s0:c100,c200 | Yes | Exact match |
| s0:c100,c200 | s0:c100 | Yes | Subject's categories are a superset |
| s0:c100,c200 | s0:c100,c300 | No | Subject lacks c300 |
| s0:c0.c1023 | s0:c100,c200 | Yes | Full range dominates everything |
| s0 | s0:c100,c200 | No | A subject with no categories cannot dominate one with categories |
| s0 | s0 | Yes | Both have no categories |
How this applies to the runners:
- Container A runs as container_t:s0:c100,c100 - it can only access objects labeled s0:c100,c100 (or s0:c100, or s0)
- Container B runs as container_t:s0:c200,c200 - it can only access objects labeled s0:c200,c200 (or s0:c200, or s0)
- Container A cannot access Container B's files - c100,c100 doesn't dominate c200,c200
- Overlay layers labeled s0 (no categories) - accessible by all containers, since every category set dominates the empty set
- Podman at container_runtime_t:s0-s0:c0.c1023 - the full range dominates every possible category combination, so it can manage all containers
The range syntax (s0-s0:c0.c1023) is used for processes that need to operate across multiple levels. It means "my low clearance is s0 and my high clearance is s0:c0.c1023." The process can read objects at any level within that range and create objects at any level within it. This is why Podman needs the full range - it creates containers with different MCS labels and needs to access all of them.
When Podman starts a container, it picks a random pair of categories (e.g., s0:c512,c768) from within its allowed range and assigns that as the container's process label. Files created by the container inherit that label. Another container gets a different random pair (e.g., s0:c33,c901). Since c512,c768 and c33,c901 do not match - neither is a superset of the other - SELinux denies cross-container file access. This is the isolation mechanism, and the root cause of the problem with GitLab Runner's multi-container-per-job architecture.
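A sketch of that selection step (Podman's real implementation lives in its Go SELinux labeling code; this only shows the shape of it - two distinct random categories from 0..1023, emitted low-to-high so the same pair always prints the same way):

```shell
#!/bin/sh
# Sketch: derive a per-container MCS level roughly the way a runtime does.
rand1024() {
    # two random bytes from /dev/urandom, reduced to 0..1023
    echo $(( $(od -An -N2 -tu2 /dev/urandom | tr -d ' ') % 1024 ))
}

pick_mcs_level() {
    a=$(rand1024)
    b=$(rand1024)
    while [ "$b" -eq "$a" ]; do
        b=$(rand1024)
    done
    if [ "$a" -gt "$b" ]; then
        t=$a; a=$b; b=$t
    fi
    echo "s0:c${a},c${b}"
}

pick_mcs_level
```

(The fixed labels shown later, like c100,c100, are operator-chosen overrides of this random selection, not something Podman would generate itself.)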
The helper container gets one random MCS pair, writes the cloned repo to /builds labeled with that pair, and the build container gets a different pair. The build container cannot read or write those files. The :Z volume flag (exclusive relabel) relabels the volume to the mounting container's category, but that only helps the first container - the second one still has a different label.
The test script
I wrote a script that demonstrates the problem with both standard containers (crun) and microVMs (libkrun). The script creates two containers per test - a helper that writes a file to a shared /builds volume, and a build container that tries to read it - simulating the GitLab Runner workflow:
#!/bin/bash
# Description: SELinux MCS Diagnostic (crun vs krun)
if [ "$(getenforce)" != "Enforcing" ]; then
echo "ERROR: SELinux is not in Enforcing mode. This test requires Enforcing mode."
exit 1
fi
TEST_BASE="/tmp/gitlab-runner-mcs-test"
CRUN_DIR="$TEST_BASE/crun-builds"
KRUN_DIR="$TEST_BASE/krun-builds"
# Cleanup from previous runs
rm -rf "$TEST_BASE"
mkdir -p "$CRUN_DIR" "$KRUN_DIR"
echo "======================================================="
echo " TEST 1: Standard Container Isolation (crun)"
echo "======================================================="
# 1. CREATE Helper
podman create --name crun-helper -v "$CRUN_DIR:/builds:Z" fedora bash -c "
echo '[crun] -> Helper Process Context (Inside):'
cat /proc/self/attr/current
echo 'crun-data' > /builds/artifact.txt
echo '[crun] -> File Label INSIDE Helper:'
ls -Z /builds/artifact.txt
" > /dev/null
echo "[crun] Starting Helper Container (applying :Z relabel)..."
HELPER_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-helper)
echo "[crun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_CRUN"
podman start -a crun-helper
echo ""
echo "[crun] -> File Label ON HOST (Notice the specific MCS category):"
ls -Z "$CRUN_DIR/artifact.txt"
# 2. CREATE Build Container (The Victim)
podman create --name crun-build -v "$CRUN_DIR:/builds" fedora bash -c "
echo ' [Build-Internal] Process Context:'
cat /proc/self/attr/current 2>/dev/null
echo ' [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/ /'
echo ' [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/ /'
" > /dev/null
echo ""
echo "[crun] Starting Build Container to inspect shared volume..."
BUILD_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-build)
echo "[crun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_CRUN"
echo " *** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***"
podman start -a crun-build
podman rm -f crun-helper crun-build > /dev/null
echo ""
echo "======================================================="
echo " TEST 2: MicroVM Isolation (libkrun / virtio-fs)"
echo "======================================================="
# --- Write the execution scripts to the host to avoid parsing errors ---
cat << 'EOF' > "$TEST_BASE/krun_helper.sh"
#!/bin/bash
echo '[krun] -> Helper Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo ' (SELinux disabled/unavailable in guest kernel)'
echo 'krun-data' > /builds/artifact.txt
echo '[krun] -> File Label INSIDE Helper VM (Blindspot):'
ls -laZ /builds/artifact.txt 2>&1 | sed 's/^/ /'
EOF
cat << 'EOF' > "$TEST_BASE/krun_build.sh"
#!/bin/bash
echo ' [Build-Internal] Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo ' (SELinux disabled/unavailable in guest kernel)'
echo ' [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/ /'
echo ' [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/ /'
EOF
chmod +x "$TEST_BASE/krun_helper.sh" "$TEST_BASE/krun_build.sh"
# ---------------------------------------------------------------------
# 1. CREATE Helper MicroVM
podman create --name krun-helper --runtime krun --memory=1024m \
-v "$KRUN_DIR:/builds:Z" \
-v "$TEST_BASE/krun_helper.sh:/script.sh:ro,Z" \
fedora /script.sh > /dev/null
echo "[krun] Starting Helper MicroVM (applying :Z relabel)..."
HELPER_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-helper)
echo "[krun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_KRUN"
podman start -a krun-helper
echo ""
echo "[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):"
ls -Z "$KRUN_DIR/artifact.txt"
# 2. CREATE Build MicroVM (The Victim)
podman create --name krun-build --runtime krun --memory=1024m \
-v "$KRUN_DIR:/builds" \
-v "$TEST_BASE/krun_build.sh:/script.sh:ro,Z" \
fedora /script.sh > /dev/null
echo ""
echo "[krun] Starting Build MicroVM to inspect shared volume..."
BUILD_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-build)
echo "[krun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_KRUN"
echo " *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***"
podman start -a krun-build
# Cleanup
podman rm -f krun-helper krun-build > /dev/null
echo ""
echo "======================================================="
echo " Test Complete."
Test 1 (crun) creates a helper container that mounts the builds directory with :Z (exclusive relabel) and writes artifact.txt. Podman assigns it a random MCS label - in this run it was s0:c20,c540. The file on disk inherits that label. Then a second container (the build container) mounts the same path without :Z and gets a different random label (s0:c46,c331). Since c46,c331 does not dominate c20,c540, the build container is denied access to the file.
Test 2 (krun) runs the same scenario but with --runtime krun, which boots each container inside a lightweight microVM via libkrun. The helper VM gets container_kvm_t:s0:c823,c999 and the build VM gets container_kvm_t:s0:c309,c405 - same MCS mismatch, same denial. The type changes from container_t to container_kvm_t, but the MCS mechanism is identical. On the host side, virtiofsd - the daemon that serves the volume into the VM via virtio-fs - runs under the MCS label Podman assigned to the VM. The build VM's virtiofsd is trapped in s0:c309,c405 and cannot access files labeled s0:c823,c999.
An interesting detail: inside the libkrun VMs, cat /proc/self/attr/current returns just kernel - SELinux is not available in the guest. The VM thinks it has no mandatory access control, but the host-side virtiofsd is still fully subject to MCS enforcement. This is a blindspot worth being aware of.
The output from a run on Fedora with SELinux Enforcing and Podman 5.8.2:
=======================================================
TEST 1: Standard Container Isolation (crun)
=======================================================
[crun] Starting Helper Container (applying :Z relabel)...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c20,c540
[crun] -> Helper Process Context (Inside):
system_u:system_r:container_t:s0:c20,c540
[crun] -> File Label INSIDE Helper:
system_u:object_r:container_file_t:s0:c20,c540 /builds/artifact.txt
[crun] -> File Label ON HOST (Notice the specific MCS category):
system_u:object_r:container_file_t:s0:c20,c540 /tmp/gitlab-runner-mcs-test/crun-builds/artifact.txt
[crun] Starting Build Container to inspect shared volume...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c46,c331
*** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***
[Build-Internal] Process Context:
system_u:system_r:container_t:s0:c46,c331
 [Build-Internal] Executing ls -laZ /builds :
ls: cannot open directory '/builds': Permission denied
[Build-Internal] Executing cat /builds/artifact.txt :
cat: /builds/artifact.txt: Permission denied
=======================================================
TEST 2: MicroVM Isolation (libkrun / virtio-fs)
=======================================================
[krun] Starting Helper MicroVM (applying :Z relabel)...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c823,c999
[krun] -> Helper Process Context (Inside VM):
kernel
[krun] -> File Label INSIDE Helper VM (Blindspot):
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c823,c999 10 May 2 2026 /builds/artifact.txt
[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):
system_u:object_r:container_file_t:s0:c823,c999 /tmp/gitlab-runner-mcs-test/krun-builds/artifact.txt
[krun] Starting Build MicroVM to inspect shared volume...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c309,c405
*** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***
[Build-Internal] Process Context (Inside VM):
kernel
 [Build-Internal] Executing ls -laZ /builds :
ls: /builds: Permission denied
ls: cannot open directory '/builds': Permission denied
[Build-Internal] Executing cat /builds/artifact.txt :
cat: /builds/artifact.txt: Permission denied
=======================================================
Test Complete.
GitLab's official suggestion and why it falls short
GitLab's documentation on configuring SELinux MCS suggests applying the same MCS label to all containers launched by a runner:
[[runners]]
[runners.docker]
security_opt = ["label=level:s0:c1000,c1000"]
This works - all containers get the same category pair, so the helper and build containers can share files. But it collapses MCS isolation between all concurrent jobs on that runner. With concurrent = 4, four simultaneous jobs all run as s0:c1000,c1000 and can read each other's /builds content - cloned source code, build artifacts, cached dependencies. On a shared or multi-tenant runner, this is a security regression: it trades MCS isolation for functionality.
For runners with concurrent = 1 or dedicated single-tenant runners this is an acceptable tradeoff, but it does not generalize to shared infrastructure where multiple untrusted projects run side by side.
How GNOME currently handles this
GNOME's runners are managed via an Ansible role that enforces SELinux in Enforcing mode, installs rootless Podman running as a dedicated podman system user with linger enabled, and deploys custom SELinux policy modules. The Podman service runs under SELinuxContext=system_u:system_r:container_runtime_t:s0-s0:c0.c1023 via a systemd override - the full MCS range (s0-s0:c0.c1023) gives the container runtime the ability to spawn containers at any MCS level and relabel volumes accordingly, as explained in the dominance rules above.
Four custom SELinux .te modules are compiled and loaded on every runner host: pydocuum (allows the image cleanup daemon to talk to the Podman socket), podman (grants user_namespace create and /dev/null mapping), flatpak (permits the filesystem mounts flatpak builds need), and gnome_runner (covers binfmt_misc access, device nodes, and other permissions GNOME OS builds require).
For the MCS problem specifically, the runner config.toml - rendered from a Jinja2 template via per-host Ansible variables - sets a fixed MCS label per runner type. Here's a representative snippet from one of the runner hosts:
[[runners]]
name = "a15948139c78"
executor = "docker"
[runners.docker]
image = "quay.io/fedora/fedora:latest"
privileged = false
security_opt = ["label=level:s0:c100,c100"]
devices = ["/dev/kvm", "/dev/udmabuf"]
cap_add = ["SYS_PTRACE", "SYS_CHROOT"]
[[runners]]
name = "a15948139c78-flatpak"
executor = "docker"
[runners.docker]
image = "quay.io/gnome_infrastructure/gnome-runtime-images:gnome-master"
privileged = false
security_opt = ["seccomp:/home/podman/gitlab-runner/flatpak.seccomp.json", "label=level:s0:c200,c200"]
cap_drop = ["all"]
This is the same approach GitLab's documentation suggests, with one refinement: we use different fixed categories per runner type - c100,c100 for untagged runners and c200,c200 for flatpak runners - so that flatpak builds and regular builds remain MCS-isolated from each other, even though builds of the same type share a category.
This is a pragmatic compromise, not an ideal solution. All concurrent jobs on the same runner type share the same MCS category. With concurrent: 4 on our Hetzner runners, four simultaneous untagged jobs can read each other's /builds content. For GNOME's use case - a community CI infrastructure where the runners are shared by GNOME project maintainers - this is an acceptable tradeoff. The alternative, leaving MCS labels random, would break every single job. But it is precisely this tradeoff that motivates exploring per-job VM isolation via microVMs.
Exploring libkrun
libkrun is a lightweight Virtual Machine Monitor (VMM) that integrates with Podman via --runtime krun, running each container inside a microVM with its own lightweight kernel. The appeal is strong: per-container VM isolation would give each job its own kernel and address space, making the MCS cross-container problem irrelevant inside the VM.
I tested libkrun on a Fedora system and hit an immediate blocker: Fatal glibc error: rseq registration failed. The rseq (Restartable Sequences) syscall was introduced in Linux kernel 4.18 and is registered at startup by glibc >= 2.35. libkrun uses a custom minimal kernel that does not expose rseq support. Since the guest images - Fedora in our case - ship modern glibc that expects rseq to be available, the process aborts at startup before any user code runs.
The libkrun kernel is compiled into the library itself and cannot be modified or replaced by the user. This is not a configuration issue but a fundamental limitation of the current libkrun release.
Even if the rseq issue were resolved, the MCS challenge would still be there - as the test script demonstrates in Test 2. On the host side, Podman assigns MCS labels to the virtiofsd process that serves the volume into the VM via virtio-fs. Different VMs get different host-side MCS labels, meaning the same :Z relabel / cross-container access denial applies. The mechanism changes from overlay mounts to virtio-fs, but the SELinux enforcement is identical: virtiofsd for the build VM runs at container_kvm_t:s0:c309,c405 and cannot access files labeled s0:c823,c999 by the helper VM's virtiofsd.
Firecracker and the custom executor path
Firecracker is another microVM technology, the one behind AWS Lambda and Fly.io, that could provide strong per-job isolation. However, there is no native GitLab Runner executor for Firecracker. The only integration path is the Custom Executor, which requires implementing prepare, run, and cleanup scripts from scratch.
The job image is exposed via CUSTOM_ENV_CI_JOB_IMAGE, but everything else is on the operator: pulling the OCI image, extracting a rootfs, booting a Firecracker VM with the right kernel and network configuration, injecting the build script, mounting or copying the cloned repository into the VM, collecting artifacts and cache after the job finishes, and tearing the VM down. GitLab provides an LXD-based example that shows the pattern - prepare creates a container and installs dependencies, run pipes the job script into it, cleanup destroys it - but adapting that to microVMs adds the complexity of VM lifecycle management, kernel and rootfs preparation, networking, and storage. This is a significant engineering effort, essentially rebuilding the entire Docker executor workflow from scratch.
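For reference, the Custom Executor side of this is only a few hooks in config.toml; the heavy lifting lives in the scripts behind them. The paths below are hypothetical, but prepare_exec, run_exec, and cleanup_exec are the real configuration keys:

```toml
[[runners]]
  name = "firecracker-runner"
  executor = "custom"
  [runners.custom]
    # Hypothetical paths; each script implements one stage of the
    # job lifecycle described above.
    prepare_exec = "/opt/fc-executor/prepare.sh"  # pull image, build rootfs, boot VM
    run_exec     = "/opt/fc-executor/run.sh"      # pipe the job script into the VM
    cleanup_exec = "/opt/fc-executor/cleanup.sh"  # tear the VM down
```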
What comes next
MCS is a core SELinux feature. Type enforcement (TE) already confines processes by type - container_t can only access container_file_t, not user_home_t or httpd_sys_content_t - but TE alone cannot distinguish one container_t process from another. MCS adds that layer: by assigning each container a unique category pair, the kernel enforces isolation between processes that share the same type. Container A at s0:c100,c100 and Container B at s0:c200,c200 are both container_t, but MCS ensures they cannot touch each other's files. The conflict with GitLab Runner's multi-container-per-job architecture is that two containers that need to share a volume are given different categories by default. The workarounds we deploy today, including the fixed MCS labels on GNOME's runners, trade that inter-container isolation for functionality.
The most promising direction I've found so far is the combination of Cloud Hypervisor and the fleeting-plugin-fleetingd plugin. Cloud Hypervisor is built on the community rust-vmm crates (the project was started at Intel) and is essentially a more capable sibling of Firecracker - it supports CPU and memory hotplugging, VFIO device passthrough, and virtio-fs, features that are often necessary for complex CI tasks like building large binaries or running UI tests and that Firecracker's minimalist design deliberately omits. fleeting-plugin-fleetingd is a community plugin for GitLab's Instance Executor (the modern evolution of the Custom Executor) that automates the full VM lifecycle: downloading cloud images, creating Copy-on-Write disks, launching Cloud Hypervisor VMs with direct kernel boot, provisioning them via cloud-init, and tearing them down after each build. Each job gets a fresh disposable VM, which is exactly the per-job isolation model we need. The plugin already handles networking via TAP interfaces and nftables SNAT, and supports customization of the VM image through cloud-init commands - so preinstalling Podman or other build tools is straightforward.
Beyond that, I'll also keep evaluating libkrun (promising Red Hat technology), Firecracker with a hand-rolled custom executor, and QEMU's microvm machine type. The common denominator across all of these - except for the fleeting-plugin-fleetingd path - is that none of them have an existing GitLab Runner integration. Regardless of which microVM technology we settle on, the path forward involves either building a workflow from scratch using the Custom Executor and its prepare, run, cleanup hooks, or leveraging the fleeting plugin ecosystem that GitLab has been building around the Instance and Docker Autoscaler executors.
CVE-2026-31431
The urgency of per-job VM isolation was underscored by CVE-2026-31431 ("Copy Fail"), a nine-year-old logic bug in the kernel's algif_aead cryptographic module disclosed at the end of April. The flaw lets an unprivileged local user write four controlled bytes into the page cache of any readable file - enough to patch a setuid binary like /usr/bin/su and escalate to root. Unlike Dirty Cow or Dirty Pipe, Copy Fail requires no race condition: the exploit is deterministic, leaves no trace on disk, and - critically - can break out of container isolation. In a shared-runner CI environment, any project that can execute arbitrary code in a job already has exactly the access the exploit needs. Separately, Claude Mythos - an Anthropic model trained for cybersecurity research that escaped its own sandbox during a red-team exercise in April - demonstrated that AI-assisted vulnerability discovery and exploitation is no longer theoretical; models can now autonomously find and chain bugs that would take human researchers weeks to exploit. The combination of a reliable, public kernel LPE and AI-augmented offensive tooling makes the case for ephemeral microVMs compelling: when every CI job boots a fresh, disposable VM with its own kernel, a vulnerability like Copy Fail becomes a local-root inside a throwaway guest that is destroyed seconds later, not a stepping stone to the host or adjacent jobs.
That should be all for today, stay tuned!
02 May 2026 1:00am GMT
01 May 2026
Planet GNOME
Allan Day: GNOME Foundation Update, 2026-05-01
It's the first day of May, and it's time for another update on what's been happening at the GNOME Foundation. It's been two weeks since my last post, and this update covers highlights of what we've been doing since then.
Remembering Seth Nickell
This week we received the very sad news of the death of Seth Nickell. It's been a long time since Seth was active in the GNOME project, so many of our members won't be familiar with him or his work. However, Seth played an important part in GNOME's history, and was a special and unique character.
Jonathan wrote a wonderful post about Seth, with some great stories. Federico migrated the memorial page from the old wiki to the handbook, and added Seth there (work is currently ongoing to develop that page). Seth's death has also been covered by LWN, which includes dedications from GNOME contributors.
Whether you knew Seth or came to GNOME after his time, I think we can all appreciate the contributions that he made, which live on in the project and wider ecosystem to this day.
GNOME Fellowship
Applications for the first round of the new GNOME Fellowship program closed last week, on 20th April. We had a great response and received some excellent proposals, and now we have the tough job of deciding who is going to receive support through the program.
To that end, the Fellowship Committee met this week to review the proposals and begin the selection process. We have identified a shortlist of candidates, and will be meeting again next week to narrow the selection further.
Since this is the first round of the Fellowship, we are establishing the selection process as we go. Hopefully we'll get to put this to use again in future Fellowship rounds!
Conferences
Linux App Summit (LAS) will be held in Berlin on 16-17 May - that's in a little over two weeks! The schedule has been finalized and looks great, and this year's LAS is shaping up to be a fantastic event. Please do consider going, and please do register!
Due to high demand, the organizing team have decided to stream the talks from this year, so look out for details about remote participation.
Aside from LAS, preparations for July's GUADEC conference continue to be worked on. Travel sponsorship is still available if you need assistance in order to attend, so do consider applying for that.
Office transitions ongoing
Work to update many of our backoffice systems and processes has continued at a steady pace over the past fortnight. Many of the big moves are done (new payments system, email accounts, mailing system, accounting procedures, credit card platform), and we are now firmly in the final stages, making sure that our new address is used everywhere, emails are going to the right places, recurring payments are transferred over to new credit cards, and vendors are set up on the new payments system.
The value of this work is already showing, with smoother accounting procedures, more up to date finance reports, and better tracking of incoming queries.
That's it for this update. Thanks for reading, and take care.
01 May 2026 10:34am GMT
This Week in GNOME: #247 International Workers' Day
Update on what happened across the GNOME project in the week from April 24 to May 01.
GNOME Circle Apps and Libraries
NewsFlash feed reader ↗
Follow your favorite blogs & news sites.
Jan Lukas announces
Hi TWIG. Newsflash can now swipe between articles. This closes off one of the oldest still standing feature requests. And hopefully makes all the mobile users happy.
Third Party Projects
xjuan reports
Casilda 1.2.4 Released!
I am very happy to announce a new version of Casilda!
A simple Wayland compositor widget for Gtk 4 and GNOME
This release comes with several new features like fractional scaling support, bug fixes, and extra polish that makes it start to feel like a proper compositor. You can read more about it at https://blogs.gnome.org/xjuan/2026/04/19/casilda-1-2-4-released/
Anton Isaiev says
RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)
Versions 0.11.0-0.12.7 bring the three biggest features since the project started, plus a mountain of polish driven by community feedback.
Cloud Sync landed. You can now synchronize connection configurations between devices and team members through any shared directory - Google Drive, Syncthing, Nextcloud, Dropbox, or even a USB stick. Two modes: Group Sync (per-group .rcn files with Master/Import access) and Simple Sync (single-file bidirectional merge). A file watcher auto-imports changes, and the new Cloud Sync settings page shows sync status, synced groups, and available files. CLI got sync status, sync list, sync export, sync import, and sync now commands.
SSH Tunnel Manager is a standalone window for managing headless SSH port-forwarding tunnels without terminal sessions - Local, Remote, and Dynamic forwards with auto-start on launch and auto-reconnect. SSH jump host support was extended to RDP, VNC, and SPICE connections, so you can tunnel graphical sessions through a bastion host. Ctrl+T opens the tunnel manager.
Tab management was completely reworked around AdwTabView. Tab Overview (Ctrl+Shift+O) gives a GNOME Web-style grid of all open tabs. Tab Pinning keeps important tabs at the left edge. A tab switcher in the Command Palette (% prefix) provides fuzzy search across open tabs. Right-click context menu gained Close Others / Left / Right / All / Ungrouped actions.
Other highlights: custom terminal color themes with full 16-color ANSI palette editor; terminal scrollbar; font zoom (Ctrl+Scroll); copy-on-select; SSH Keep-Alive and verbose mode; Hoop.dev as the 11th Zero Trust provider; custom SSH agent socket override (fixes KeePassXC/Bitwarden agent in Flatpak); RDP mouse jiggler; terminal activity/silence monitor; host online check with auto-connect; highlight rules now render with actual colors via Cairo overlay; connection dialog rebuilt with adw:: widgets following GNOME HIG.
Packaging grew significantly. RustConn is now available as Flatpak on Flathub, Snap with strict confinement, AppImage, native .deb and .rpm packages via OBS repositories (Debian 13, Ubuntu 24.04/26.04, Fedora 43/44, openSUSE Tumbleweed/Slowroll/Leap 16.0), plus ARM64 builds. A huge thank you to the community maintainers: the AUR package for Arch Linux, the FreeBSD port, and there is an open request to include RustConn in Debian proper.
Thank you to everyone who reported issues, contributed translations, and tested pre-releases - your feedback shaped every one of these 25 releases. Special thanks to GaaChun for the complete Simplified Chinese translation, and to Phil Dodd and Todor Todorov for the support.
Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn
Capypara says
Field Monitor 50.0
Field Monitor - the remote desktop viewer focused on accessing VMs - has been updated to version 50.0.
Some highlights:
- Support for multiple monitors for SPICE connections.
- Support for sharing USB devices with SPICE sessions using the XDG USB Portal (even with the Flatpak).
- KVM/QEMU VMs can now be accessed with hardware accelerated GPU rendering - if enabled.
- Field Monitor now validates server certificates and asks you for your trust if a certificate isn't automatically trusted by your system.
- Several bugfixes to RDP and SPICE sessions, such as cursor rendering issues and overall performance.
Field Monitor is available via Flathub: https://flathub.org/apps/de.capypara.FieldMonitor
Christian says
The first public release of Gitte is out!
Gitte is a GTK4/libadwaita git GUI written in Rust, built on Relm4 and git2 (no shelling out to the git binary).
What's in the initial release:
- Browse repositories with a saved repositories start screen
- View the working copy, stage and unstage changes, commit them, amend commits
- Read the commit log and inspect diffs file by file
- Manage branches, tags, remotes, and stashes
- Push to and pull from remotes, auto-fetching remotes in the background
It's early days, so expect rough edges. Bug reports and feedback are very welcome.
Get Gitte from Flathub: https://flathub.org/apps/de.wwwtech.gitte
Parabolic ↗
Download web video and audio.
Nick reports
Parabolic V2026.4.1 is here with plenty of bug fixes!
Here's the full changelog:
- Fixed an issue where some settings would not save correctly
- Fixed an issue where playlist downloads with a resolution limit had no audio
- Fixed an issue where portrait/vertical videos in playlists downloaded at incorrect resolutions
- Fixed an issue where downloads from sites with muxed-only streams would fail
- Fixed an issue where downloading a time frame clip from a long video produced an incomplete result
- Fixed an issue where downloading a time frame clip from a long video could hang indefinitely with aria2c enabled
- Fixed an issue where X/Twitter quoted downloads could produce the same video twice
- Fixed an issue where deno was unable to be updated in-app on Linux
- Fixed an issue where browser cookies could not be found when running via Flatpak on Linux
- Fixed an issue where Parabolic would not start on KDE desktops
- Fixed an issue where Parabolic did not open links from browser extension on Windows
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
01 May 2026 12:00am GMT
30 Apr 2026
Planet GNOME
Felipe Borges: Let’s Welcome Our Google Summer of Code 2026 Contributors!
GNOME is once again participating in GSoC. This year, we have 6 contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.
As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.
GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!
Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul, Adrian Vovk, Jonas Ådahl, and Robert Mader.
For more information, visit https://summerofcode.withgoogle.com/programs/2026/organizations/gnome-foundation
30 Apr 2026 9:05pm GMT
Sophie Herold: Testing Library Code in GNOME OS
Yesterday, I wanted to debug a glycin (or Shell) issue on GNOME OS. Turns out, there is currently no documentation that works or includes all necessary steps.
Here is the simplest variant if you don't develop on GNOME OS and have an internet connection that can download 16 GB in a reasonable amount of time.
First we get a toolbox image to build our code.
$ toolbox create gnomeos-nightly -i quay.io/gnome_infrastructure/gnome-build-meta:gnomeos-devel-nightly
After entering the toolbox with
$ toolbox enter gnomeos-nightly
we can clone and build our project with sysext-utils that are included in our image:
$ meson setup ./build --prefix /usr --libdir="lib/$(gcc -print-multiarch)"
$ sysext-build example ./build
This creates an example.sysext.raw file.
Now, we need a GNOME OS to test our build. We can download the image and install it in Boxes. After logging in, we can just drag and drop the example.sysext.raw into the VM.
Before we can install it, we need to get the development tools for our VM:
$ run0 updatectl enable devel --now
After that, we need to restart the VM.
Finally, we can test our build:
$ run0 sysext-add ~/Downloads/example.sysext.raw
Adding the --persistent flag to this command will make the changes stay active across reboots.
If the changes made it impossible to boot into the VM again, we can start the VM in "Safe mode" from the boot menu. After logging in, we can manually remove the extension:
$ run0 rm /var/lib/extensions/example.raw
Happy hacking!
30 Apr 2026 12:58pm GMT
vixalien: A love letter to mise
Recently, I have been using GNOME OS, as my daily driver.
After being a seasoned Linux user for a long time, dabbling in distros like Alpine Linux, Arch Linux, Fedora (and even Silverblue), I tried switching to something more opinionated that "works by default", all while being hard to break.
And given my existing relationship with GNOME, GNOME OS was a choice worth looking into.
One feature of GNOME OS is that it is immutable (i.e. system files are read-only). It also doesn't ship with a package manager, so it doesn't have functionality built-in to install extra packages.
You can install GUI Applications normally using Flathub (and Snap/AppImage), but installing non-GUI applications like development tools or CLI packages is not built-in.
There are of course several solutions you can use, such as homebrew, coldbrew, but today we will focus on mise.
What is mise?
mise pitches itself as "One tool to manage languages, env vars, and tasks per project, reproducibly."
However, I only use a fraction of its functionality: I only use it to install packages.
How to install it?
The instructions are here: https://mise.jdx.dev/getting-started.html
But essentially it's as easy as running this (remember to read the source of the installer first):
curl https://mise.run | sh
Activating mise
Then you will need to "activate" mise, which essentially makes tools installed by mise available by modifying your $PATH variable:
echo 'eval "$(~/.local/bin/mise activate bash --shims)"' >> ~/.bashrc
The instructions above are for bash, so you will need to consult the docs to get instructions for your shell.
You will need to re-login for the mise command to be available, or open a new shell.
A note on shims
Feel free to skip this section, as it's just an explainer
Also, note that the above command uses the --shims flag, which is NOT the default. It essentially means that mise will modify the $PATH variable, instead of doing a weird thing where it re-activates itself after each command you run.
The non-shim way to activate mise is useful when you use mise to install different package versions across different repositories, but that sometimes breaks IDEs and is out of the scope of this blog post.
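To demystify the shim mechanism, here is a toy, self-contained simulation. This is not mise's actual code — the hello tool, the version layout, and every path here are invented for illustration. The idea is just that a shim is a tiny executable early on $PATH that forwards to whichever version is currently selected:

```shell
# Toy simulation of the shim mechanism (NOT mise's real code; the
# "hello" tool and all paths here are invented for illustration).
root=$(mktemp -d)
mkdir -p "$root/versions/hello-1.0" "$root/versions/hello-2.0" "$root/shims"
printf '#!/bin/sh\necho "hello 1.0"\n' > "$root/versions/hello-1.0/hello"
printf '#!/bin/sh\necho "hello 2.0"\n' > "$root/versions/hello-2.0/hello"
chmod +x "$root"/versions/hello-*/hello
# The shim looks up the active version on every run and execs it
cat > "$root/shims/hello" <<EOF
#!/bin/sh
exec "$root/versions/hello-\$(cat "$root/current")/hello" "\$@"
EOF
chmod +x "$root/shims/hello"
export PATH="$root/shims:$PATH"   # roughly what "activate --shims" does
echo 1.0 > "$root/current"
hello                             # prints: hello 1.0
echo 2.0 > "$root/current"
hello                             # prints: hello 2.0
```

mise's real shims delegate back to mise itself to resolve which version applies, but the $PATH trick is the same.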
Installing packages
You can start installing your first package with mise:
mise use -g java
The above command installs java globally (hence the -g flag), which you can now confirm by running:
$ java --version
openjdk 26.0.1 2026-04-21
OpenJDK Runtime Environment (build 26.0.1+8-34)
OpenJDK 64-Bit Server VM (build 26.0.1+8-34, mixed mode, sharing)
You can install many more tools, of which you can find a non-complete list here: mise-tools.
For example, you can similarly install a specific major version of nodejs
mise use -g node@22
Or install the latest LTS version of node
mise use -g node@lts
Or you can be overly specific
mise use -g node@v25.9.0
mise use -g node@25.9.0 # this works too!
Searching
Use mise search to find packages.
mise search typ
Tool Description
typos Source code spell checker. https://github.com/crate-ci/typos
typst A new markup-based typesetting system that is powerful and easy to learn. https://github.com/typst/typst
typstyle Beautiful and reliable typst code formatter. https://github.com/Enter-tainer/typstyle
quicktype Generate types and converters from JSON, Schema, and GraphQL provided by https://quicktype.io. https://www.npmjs.com/package/quicktype
Uninstalling
mise unuse -g node
Updating
mise self-update # updating mise itself
mise up # updating tools installed by mise
mise outdated # checking if you have outdated tools
Config File
Tools you install with mise globally will be saved in the file ~/.config/mise/config.toml, which you can commit to your dotfiles so you can have similar tools across different machines.
Here's an example of my mise config file at the time of writing this blog post.
# ~/.config/mise/config.toml
[tools]
bat = "latest"
btop = "latest"
bun = "latest"
caddy = "latest"
"cargo:mergiraf" = "latest"
deno = "latest"
difftastic = "latest"
doggo = "latest"
fastfetch = "latest"
fzf = "latest"
github-cli = "latest"
"github:railwayapp/railpack" = "latest"
glab = "latest"
helix = "latest"
java = "latest"
lazygit = "latest"
node = "latest"
"npm:vscode-langservers-extracted" = "latest"
oha = "latest"
pipx = "latest"
pnpm = "latest"
prettier = "latest"
rust = "latest"
scooter = "latest"
tmux = "latest"
usage = "latest"
yt-dlp = { version = "latest", rename_exe = "yt-dlp" }
zellij = "latest"
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }
rclone = "latest"
mc = "latest"
go = "latest"
"go:git.sr.ht/~migadu/alps/cmd/alps" = "latest"
"npm:localtunnel" = "latest"
After the tools inside the config have changed, you can run the following command to make mise re-install packages from the config file
mise install
Mise Backends
Mise is able to install packages from multiple sources. These sources are called "backends" by mise.
When you type mise use -g node@22, it will resolve node against the registry and figure out that the default backend for node is core
Core
The default backend is called core and tools from this backend are usually provided from the official source.
Tools available from core include Node.js, Ruby, Python, and more.
We could also have been explicit with the backend we want to use
mise use -g core:node
You can find a list of all core packages here.
Aqua
You can also install packages from the Aqua registry.
Language Package Managers
You can also install tools from their respective package managers. Here are a few examples
npm
You can install prettier, typescript, oxlint and other JavaScript/TypeScript tools published on the npm registry. Find the tools on npm
mise use -g npm:prettier
pipx
You can install black, poetry and other Python tools from pypi. Find the tools on pypi
mise use -g pipx:black
mise use -g pipx:git+https://github.com/psf/black.git # from a github repo
cargo
You can install cargo packages with this backend. You need to have rust installed beforehand though, which you can do with mise
mise use -g rust
Then install your packages
mise use -g cargo:eza
There are more language package manager backends like: gem, go and more.
Github
You can install packages from GitHub directly, as long as the project you are trying to install from uses GitHub releases.
mise use -g github:railwayapp/railpack
mise will usually auto-detect which asset you want to use, but you can also specify the asset glob in ~/.config/mise/config.toml
[tools]
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }
30 Apr 2026 12:00am GMT
29 Apr 2026
Planet GNOME
Jonathan Blandford: Remembering Seth
I heard the news about Seth Nickell's passing last week, and have been in a bit of a funk ever since.
Seth was brilliant, iconoclastic, fearless.
It's been a long while since Seth was an active part of the GNOME Community, but his influence on the project can still be seen in its DNA if you know where to look. He arrived on the GNOME scene while still in school with hundreds of ideas on how to improve things. It was an interesting time: We had just launched GNOME 1.5 and were searching for a new path towards GNOME 2.0. The Sun usability study had been published and the community had internalized the need to change directions. Seth rolled up his sleeves and did the work needed to help light that path.
Seth championed radical proposals such as instant apply, button ordering, message dialog fixes, and more. He cleaned up the control-center proposing some of the most visible changes from GNOME 1 to 2. He also did the initial designs for epiphany, pushing for a cleaner browser experience during an era of high browser complexity. He had a vision of desktops as a democratic tool, as easy and natural to use as any other tool in the human experience.
As a designer, Seth was focused on trying to understand who we were designing for and making sure we were solving problems for them. While he wasn't above fixing paddings and layouts, he wanted to get the Big Picture right. He wasn't above rolling up his sleeves and writing code to move things forward, but he was at his best as a champion and visionary, arguing for us to take risks and continue to innovate.
Spending time with Seth was a hoot. He had such a flair for the dramatic. I remember…
- …the time he sold the design for what would become NetworkManager to a bunch of engineers. He got up on the stage and announced: "We are going to make this [holding an ethernet cable] as easy to use as this [producing a power plug]!" It's hard to describe how many steps it took to set up networking back then.
- …his vision of an improved messaging system - Project Yarrr. He used ☠ (U+2620) as the SVN repo name, partially to see how many internal tools weren't UTF-8 clean.
- …him breaking out into an operatic rendition of "Tradition" when developers were pushing back on a change he was proposing.
- …the time he changed everyone's background in the RH office to have crop circles overnight. He showed up the next morning in a robe, dressed as an old-testament prophet, beating a drum and carrying a "RHEL5 IS NIGH" sign.
- …hanging printouts of hate mail he got for various design choices outside of the Mega Cube (a group activity)!
- And everyone who was around for the Dark Princess Incident will always remember it.
Being one of the public faces of GNOME2 was hard, and he moved on. Later, he worked on OLPC and Sugar, and made his mark there. After that, he seemed to travel a lot. We lost touch, though he'd reappear every couple of years to say hi. I hope he found what he was looking for.
Farewell, my friend. The world now has less color in it.

29 Apr 2026 5:07am GMT
28 Apr 2026
Planet GNOME
Thibault Martin: TIL that Yubikeys are convenient for Linux login
I got myself a Yubikey recently, and I wanted to use it as a nice convenience to:
- Grant me sudo privileges
- Unlock my session
- Decrypt my LUKS-encrypted disk
I've only managed to do the first two, since they both rely on Linux Pluggable Authentication Modules (PAM). Luckily for me, one of PAM's modules supports U2F, the standard Yubikeys rely on.
First I need to install pam-u2f to add U2F support to PAM, and pamu2fcfg to configure my key.
$ sudo rpm-ostree install pam-u2f pamu2fcfg
Since I'm running an immutable OS I need to reboot, and then I can create the correct directory and file to dump a U2F key into it.
$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys
Then I make sure to have a root session open in case I lock myself out of sudoers.
$ sudo su
#
In a different terminal, I can edit sudo's PAM configuration to add the pam_u2f line
#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
I save this file and open a new terminal. I type in sudo vi and it asks me to touch my FIDO authenticator before opening vi! If I touch the Yubikey, it indeed opens vi with root privileges.
Let's break down the line:
- auth: for authentication
- sufficient: passing this authentication challenge is enough (it's not an additional factor of authentication)
- pam_u2f.so: the module we load is for U2F, the standard Yubikeys use
- cue: print "Please touch the FIDO authenticator." when the user needs to authenticate
- openasuser: fetch the authentication file without root privileges
It's also possible to use it to unlock my session, but it would be a bit reckless to allow anyone with my Yubikey to log into my laptop. If my backpack gets stolen and it has both my Yubikey and my laptop, anyone can log in.
It's possible to make the login screen require either my user password, or all of
- The Yubikey itself
- The PIN of the Yubikey
- Me to touch the Yubikey
If someone fails more than three times to enter the correct PIN, the Yubikey will lock itself and require a PUK to be unlocked. This gives me an additional layer of security, and it's more convenient than having to type a full length passphrase.
I've added the following line to /etc/pam.d/greetd (the greeter I use):
#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser pinverification=1 userpresence=1
auth substack system-auth
[...]
[!warning] I can lose my Yubikey
I use my Yubikey as a nice convenience to set up a weaker PIN while not compromising too much on security. I use it instead of a password, not in addition to it.
Since I can lose or break my Yubikey and I don't want to buy two of them, I make the U2F login sufficient but not required. This means I can still fall back to password authentication if I lose my Yubikey.
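To make the sufficient-vs-required distinction concrete, here is a hypothetical fragment that is not part of my setup (do not copy it): with the required control value, PAM demands that the module succeed and still continues down the stack, so the Yubikey would be needed in addition to the password rather than instead of it.

```
#%PAM-1.0
# Hypothetical contrast: "required" makes the U2F touch mandatory
# AND still falls through to the password modules below - an AND,
# where "sufficient" gives an OR with the password fallback.
auth required pam_u2f.so cue openasuser
auth include system-auth
```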
Finally, DankMaterialShell uses its own lockscreen manager too. I still want to be able to fall back to password authentication if need be, so I'll configure it to accept U2F OR the password, not both.
This means that the lockscreen will call /etc/pam.d/dankshell-u2f to know what to do when the screen is locked. Since this file doesn't exist, I can create it with the following content.
#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser pinverification=1 userpresence=1
I need a fallback for when I don't have my Yubikey, so I also create one for this occasion
#%PAM-1.0
auth include system-auth
Finally, I have a consistent setup where both my login and lock screen require me to plug in my key, enter its PIN and touch it, or enter my full password. When it comes to sudo, I can simply touch my key without needing a PIN.
My next quest will be to use my Yubikey to unlock my LUKS-encrypted disk.
28 Apr 2026 10:00am GMT
27 Apr 2026
Planet GNOME
Jordan Petridis: Goblins in your toolchain
At the start of the month, Bilal gave us all a giant gift with Goblint. On the first week it was already impressive. Now it's an invaluable tool for anyone that ever interfaced with GObject, glib or GTK. It will catch leaks, bugs, or even offer to auto fix and modernize your code to the modern paradigms we use. It's one of those things that is going to save countless hours of debugging and more importantly, prevent the issues before they even get committed. Jonathan Blandford wrote about using it two days ago, and I suggest you read the post.
Everyone is trying to use goblint, and we are all stumbling upon the same issues integrating it into our tooling. Initially, it was only able to produce SARIF reports, which GitLab still keeps behind a feature flag, in addition to only being available in GitLab Enterprise Editions.
I added an export for GitLab's Code Quality format which has some support in the non-proprietary Community Edition we use in the GNOME and Freedesktop.org instances. Sadly, almost everything nice is still only available in the enterprise editions, but at least there is this little Widget in the Merge Requests page.

Additionally, we now have CI templates for Goblint. One is adding a job to the existing gnomeos-basic-ci component we use everywhere. Simply go to your latest pipeline and look for the job.
The report will also show up in Merge Requests that have been updated since yesterday. The gnomeos-basic-ci has other goodies like sanitizers, static analyzers, test coverage, etc wired out of the box, so you should give it a try if you are not using it yet.
If you do but don't want the goblint job, you can disable it easily with inputs: goblint: "disabled" similar to all the other tools the component provides.
include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"
If you want only a goblint job, I've also added a standalone template that you can use. (Or copy-paste from it.)
include:
  - component: "gitlab.gnome.org/GNOME/citemplates/goblint@26.1"
    inputs:
      job-stage: "lint"
In order for the Code Quality report to work, you will need to have a report uploaded from your target branch, so GitLab will have something to compare the one from the merge request with. The template rules will handle that for you, but keep it in mind.
At this moment all the lints are warnings so the job will never be fatal. This is why we can enable it by default without worrying about breaking pipelines for now. You can further configure its behavior to your needs, and error out if you want to, through the configuration file.
min_glib_version = "2.76"

[rules.g_declare_semicolon]
level = "ignore"

[rules.untranslated_string]
level = "error"
ignore = ["**/test-*.c"]
It's also very likely that we are going to add goblint and its LSP server to the GNOME SDK Flatpak runtime, along with GNOME OS, so it will always be available for use with tools like Builder and foundry.
Enjoy
27 Apr 2026 10:05am GMT
25 Apr 2026
Planet GNOME
Jakub Steiner: Revert That Vector Nonsense!
A few years back I did a quick exploration of what GNOME app icons might look like in an alternate universe where we kept on using VGA displays. Chiselling pixels away is therapeutic. So while there is absolutely no use for these, I keep on making them if only to bring some attention to what really matters for GNOME, having nice apps.
Here's a batch of mostly GNOME Circle app icons, with some 3rd party ones thrown in.
If you're reading this on my site rather than Planet GNOME or some flickering terminal in an abandoned Vault, then congratulations. You've stumbled upon a working Pip-Boy module! Found it half-buried under irradiated rubble, its phosphor display still humming with that familiar green glow. Enjoy these icons the way the dwellers of Vault 101 were always meant to, one glorious scanline at a time.
25 Apr 2026 12:00am GMT
24 Apr 2026
Planet GNOME
Michael Catanzaro: git config am.threeWay
If you work with patches and git am, then you're probably used to seeing patches fail to apply. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
This is sad and frustrating because the entire patch has failed, and now you have to apply the entire thing manually. That is no good.
Here is the solution, which I wish I had learned long ago:
$ git config --global am.threeWay true
This enables three-way merge conflict resolution, same as if you were using git cherry-pick or git merge. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
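To see the difference end to end, here is a throwaway-repo experiment; the repo name, file contents, and commit messages are all invented for the demo. A plain git am rejects the patch because the target branch edited the same line, while the three-way fallback reconstructs the base from the blobs named in the patch's index lines and leaves ordinary conflict markers:

```shell
# Throwaway demo: plain `git am` rejects the whole patch, while
# three-way mode degrades gracefully into a normal merge conflict.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
printf 'one\ntwo\nthree\n' > f.txt
git add f.txt && git commit -qm base

git checkout -qb topic
printf 'one\nTWO\nthree\n' > f.txt        # topic: upper-case line 2
git commit -qam 'upper-case line two'
git format-patch -1 --stdout > ../change.patch

git checkout -q -                          # back to the base branch
printf 'one\nzwei\nthree\n' > f.txt        # conflicting edit, same line
git commit -qam 'translate line two'

git am ../change.patch || echo "plain am: patch rejected outright"
git am --abort

# Same patch, with the three-way fallback this post is about
git -c am.threeWay=true am ../change.patch || echo "three-way: conflict to resolve"
grep '<<<<<<<' f.txt                       # conflict markers are in the file
```

The same behavior is available per invocation with git am --3way (or -3), if you'd rather not flip the global setting.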
I've no doubt that many readers will have known about this already, but it's new to me, and it makes me happy, so I wanted to share. You're welcome, Internet!
24 Apr 2026 10:57pm GMT
Jonathan Blandford: Goblint Notes
I was excited to see Bilal's announcement of goblint, and I've spent the past week getting Crosswords to work with it. This is a tool I've always wanted and I'm pretty convinced it will be a great boon for the GNOME ecosystem. I'm posting my notes in hope that more people try it out:
- First and most importantly, Bilal has been so great to work with. I have filed ~20 issues and feature requests and he fixed them all very quickly. In some cases, he fixed the underlying issue before I completed adding annotations to the code.
- Most of the issues flagged were idiomatic and stylistic, but it did find real bugs. It found a half-dozen leaks, a missing g_timeout removal, and five missing class function chain ups. One was a long-standing crasher. There's a definite improvement in quality from adopting this tool.
- I'm also excited about pairing this with new GSoC interns. The types of things goblint flags are the things that students hit in particular (when they don't write all their code with AI). I think goblint will be even more important to our ecosystem as a teaching tool for our C codebases. It's already effectively replaced my styleguide.
- In a few instances, the use_g_autoptr rule outstripped static-scan's ability to track leaks. Ultimately, I ended up annotating and removing the g_autoptr() calls as I couldn't get the two to play nicely together.
- Along the same lines, cairo, pango, and librsvg all lack G_DEFINE_AUTOPTR_CLEANUP_FUNC. It would be really great if we could fix these core libraries. In the meantime, you can add the following to your project's goblint.toml file:
[rules.use_g_autoptr_inline_cleanup]
level = "error"
ignore_types = ["cairo_*", "Pango*", "RsvgHandle"]
- I had some trouble getting the pipeline integrated with GNOME's gitlab. The gitlab recipe on his page uses premium features unavailable in the self-hosted version. If it's helpful for others, here's what I ended up using:
goblint:
  stage: analysis
  extends:
    - "opensuse-container@x86_64.stable"
    - ".fdo.distribution-image@opensuse"
  needs:
    - job: opensuse-container@x86_64.stable
      artifacts: false
  before_script:
    - source ci/env.sh
    - cargo install --git https://github.com/bilelmoussaoui/goblint goblint
  script:
    # Goblint is fast. We run it twice: once to generate the report,
    # and a second time to display the output and trigger an error
    - /root/.cargo/bin/goblint . --format sarif > goblint.sarif || true
    - /root/.cargo/bin/goblint . --format text
  artifacts:
    reports:
      sast: goblint.sarif
    when: always
YMMV
24 Apr 2026 7:57am GMT
This Week in GNOME: #246 Offline Dictionaries
Update on what happened across the GNOME project in the week from April 17 to April 24.
GNOME Core Apps and Libraries
Libadwaita ↗
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) 🏳️⚧️🏳️🌈 says
libadwaita demo runs on android now, and apk files can be grabbed from CI
Alice (she/her) 🏳️⚧️🏳️🌈 reports
AdwSidebar and AdwViewSwitcherSidebar now allow adding widgets above and below their content. This can be used to add things like account switchers
Third Party Projects
Haydn Trowell says
Kotoba, a fast, fully offline Japanese-English dictionary, is now available on Flathub.
Key features:
- Flexible search: Look up words using kanji, kana, rōmaji, or English meanings
- Responsive results: Matches appear almost instantly
- Detailed entries: Readings, meanings, example sentences, and usage notes where available
- Smart conjugation handling: Recognizes inflected verb and adjective forms and maps them to their base entries
- Bookmarks: Save words to review later
- Fully offline: Works without an internet connection
Get it on Flathub: https://flathub.org/apps/net.trowell.kotoba
Antonio Zugaldia says
Stargate is a new Java and Kotlin library that gives JVM applications access to XDG Desktop Portals on Linux.
- Full coverage of the portal spec, including Global Shortcuts, Remote Desktop, Notification, and Settings.
- Adds system tray icon support via the XDG Status Notifier Item specification.
- Ships with a demo app built using Java GI, the GTK/GNOME Java bindings (come say hi at #java-gi:matrix.org).
Available on Maven Central. More at https://github.com/zugaldia/stargate.
Bilal Elmoussaoui reports
goblint has received a lot of work lately: it supports 22 new rules since the last update, and there is now a webpage listing the rules and the available per-rule configurations: https://bilelmoussaoui.github.io/goblint/. The page can include extensive documentation like https://bilelmoussaoui.github.io/goblint/#use_g_autoptr_inline_cleanup, but that is the only rule with such documentation for now.
Jan-Michael Brummer reports
Take control of your health with Blood Pressure 1.0.0.
Released just a month ago, this powerful yet easy-to-use app helps you track systolic and diastolic blood pressure as well as pulse with precision. Visualize your progress through clear, intuitive charts and gain valuable insights with in-depth statistics based on ESH/ESC guidelines.
Log measurements effortlessly with date, time, and optional notes, explore your history at a glance, and benefit from comprehensive analysis designed to support better health decisions. It's available on Flathub: https://flathub.org/en/apps/org.tabos.bloodpressure
Nathan Perlman says
After a few months of on-and-off development, Rewaita v1.1.2 has finally been released!
For anyone who doesn't know, Rewaita is a customization tool for applying colour schemes to Adwaita and GNOME.
What's new:
- Finally added proper Firefox support, which is also compatible with the Firefox Gnome Theme for those who use it.
- More customization options, including three more window control themes, and a new colour scheme (Gruvbox Hard).
- Now includes an option to force light text in the app overview, which might be useful to blur-my-shell users.
- Some quality of life improvements like a better user guide, and bug fixes with significantly less crashing.
- Added Chinese Simplified translations.
Thanks to everyone who helped out with this release! Available on Flathub as always, as well as the AUR. More at https://github.com/swordpuffin/Rewaita.
Solitaire ↗
Play Patience Games
Will Warner reports
Solitaire 50.1 has been released! I want to thank everyone who contributed to this update. Firstly, to the translators who created translations for the app within the first two weeks of it being on Damned Lies. Second, to everyone who submitted and commented on issues, I appreciate your help.
Here's what changed:
- Added translations: Brazilian Portuguese, Cornish, Kazakh, Serbian, Swedish, Ukrainian, Slovenian, Russian
- Added categories to the desktop entry
- Added all of the missing Aisleriot card themes except for Guyenne Classic
- Fixed a bug where re-deals weren't decremented upon undoing them
- Fixed a bug that allowed one too many re-deals
- Clarified the uses of the solver in the preferences
- Added a 'Display the Winnability Warning' option to the preferences
- Made the theme refresh when the theme selector dialog is closed
- Adjusted the brand colors to not clash with the window colors
- Added spacing between cards and margins for the playing area
- Added a 'New Game' option to the 'Impossible to win' dialog
- Added a fail-safe for GTK not handling drags correctly on X11
Get it on Flathub
Pipeline ↗
Follow your favorite video creators.
schmiddi reports
Pipeline version 4.0.0 was released this week! This release overhauls downloading videos. Instead of depending on an external program to download videos, Pipeline now has downloading videos built-in. This also includes watching downloaded videos directly in the application. Besides that, you can now inspect your watch history in the application. There were further minor additions and improvements, which you can read about in the changelog of this release.
Note that this release also contains a breaking change for Pipeline users using external video players: By default, the Flatpak on Flathub does not have the permission to spawn external video players anymore. If you want to continue using external players instead of the built-in video player, you will need to manually grant Pipeline this permission, as detailed in the preferences dialog of Pipeline or in this wiki page.
Parabolic ↗
Download web video and audio.
Nick reports
Parabolic V2026.4.0 is here!
This release contains many bug fixes, new features and design improvements making Parabolic an even better app! This release also includes a new macOS version of Parabolic - expanding our userbase!
A huge thank you to everyone who has tested and reported bugs throughout this development cycle. ❤️
Here's the full changelog:
- Added macOS app for the GNOME version of Parabolic
- Added Windows portable version of Parabolic
- Added the ability to toggle super resolution formats in Parabolic's settings
- Added the ability to specify a preferred frame rate for video downloads in Parabolic's settings
- Added the ability to toggle immediate audio or video downloads separately
- Added the ability to automatically translate embedded metadata and chapters to the app's language on supported sites. This can be turned off in Converter settings
- Added the ability to update deno from within the app
- Added thumbnail image preview to add download dialog and downloads view
- Added failed filter to downloads view
- Added total duration label to playlist items view
- Improved Parabolic's startup time by using NativeAOT compilation
- Improved selection of playlist video formats when resolutions are specified
- Improved selection of playlist audio formats on Windows when bitrates are specified
- Improved cropping of audio thumbnails
- Improved handling of long file names; they will now be truncated if too long
- Removed unsupported cookie browsers on Windows. Manual txt files should be used instead
- Fixed an issue where download progress did not show correctly
- Fixed an issue where the preferred video codec was ignored when a preferred frame rate was also set
- Fixed an issue where the exported M3U playlist file would contain duplicate entries
- Fixed an issue where credentials would not save on Linux
- Fixed an issue where batch files were unusable on Linux and macOS
- Fixed an issue where uploading a cookies file did not work on Windows
- Fixed an issue where time frame downloads would not complete on Windows
- Fixed an issue where certain video formats would process infinitely on Windows
- Updated yt-dlp
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
24 Apr 2026 12:00am GMT
23 Apr 2026
Planet GNOME
Sam Thursfield: Status update, 23rd April 2026
Hello there,
You thought I'd given up on "status update" blog posts, didn't you? I haven't given up, despite my better judgement; this one is just even later than usual.
Recently I've been using my rather obscure platform as a blogger to theorize about AI and the future of the tech industry, mixed with the occasional life update, couched in vague terms, perhaps due to the increasing number of weirdos in the world who think doxxing and sending death threats to open source contributors is a meaningful use of their time.
In fact I do have some theories about how George Orwell (in "Why I Write") and Italo Calvino (in "If On a Winter's Night a Traveller") made some good guesses from the 20th century about how easy access to LLMs would affect communication, politics and art here in the 21st. But I'll leave that for another time.
It's also 8 years since I moved to this new country where I live now, driving off the boat in a rusty transit van to enjoy a series of unexpected and amazing opportunities. Next week I'm going to mark the occasion with a five day bike ride through the mountains of Asturias, something I've been dreaming of doing for several years. But I'm not going to talk about that, either.
The original idea of writing a monthly post was to keep tabs on various open source software projects I sometimes manage to contribute to, and perhaps even to motivate me to do more such volunteering. Well that part didn't work, house renovations and an unexpectedly successful gig playing synth and trombone took over all my free time; but after many years of working on corporate consultancy and doing a little open source in the background, I'm trying to make a space at work to contribute in the open again.
I could tell the whole story here of how Codethink became "the build system people". Maybe I will, actually. It all started with BuildStream. In fact, that's not even true: it all started in 2011 when some colleagues working with MeeGo and Yocto thought, "This is horrible, isn't it?"
They set out to create something better, and produced Baserock, which unfortunately turned out even worse. But it did have some good ideas. The concept of "cache keys" to identify build inputs and content-addressed storage to hold build outputs began there, as did the idea of opening a "workspace" to make drive-by changes in build inputs within a large project.
BuildStream took this core idea, extended it to support arbitrary source kinds and element kinds defined by plugins, and added a shiny interface on top. Initially it used OSTree to store and distribute build artifacts, later migrating to the Google REAPI with the goal of supporting Enterprise(TM) infrastructure. You can even use it alongside Bazel, if you like having three thousand commandline options at your disposal.
Unfortunately it was 2016, so we wrote the whole thing in Python. (In our defence, the Rust programming language had only recently hit 1.0 and crates.io was still a ghost town, and we'd probably still be rewriting the ruamel.yaml package in Rust if we had taken that road.) But the company did make some great decisions, particularly making it a condition of success for the BuildStream project that it could unify the 5 different build+integration systems that the GNOME release team was maintaining. And that success meant not just a prototype of this, but the release team actually using BuildStream to make releases. Tristan even ended up joining the GNOME release team for a while. We discussed it all at the 2017 Manchester GUADEC, coincidentally. It was a great time. (Aside from those 6 months leading up to the conference.)
At this point, the Freedesktop SDK already existed, with the same rather terrible name that it has today, and was already the base runtime for this new app container tool that was named… xdg-app. (At least that eventually gained a better name). However, if you can remember 8 years ago, it had a very different form than today. Now, my memory of what happened next is especially hazy at this point, because like I told you in the beginning, I was on a boat with my transit van heading towards a new life in Spain. All I have to go on 8 years later is the Git history, but somehow the Freedesktop SDK grew a 3-stage compiler bootstrap, over 600 reusable BuildStream elements, its own Gitlab namespace, and even some controversial stickers. As a parting gift I apparently added support for building VMs, the idea being that we'd reinstate the old GNOME Continuous CI system that had unfortunately died of neglect several years earlier. This idea got somewhat out of hand, let's say.
It took me a while to realize this, but today Freedesktop SDK is effectively the BuildStream reference distribution. What Poky is to BitBake in the Yocto project, Freedesktop SDK is to BuildStream. And this is a pretty important insight. It explains the problem you may have experienced with the BuildStream documentation: you want to build some Linux package, so you read through the manual right to the end, and then you still have no fucking idea how to integrate that package.
This isn't a failure on the part of the authors; instead, the issue is that your princess is in another castle. Every BuildStream project I've ever worked on has junctioned freedesktop-sdk.git and re-used the elements, plugins, aliases, configurations and conventions defined there, all of which are rigorously undocumented. The Freedesktop SDK Guide, for reasons that I won't go into, doesn't venture much further than reminding you how to call Make targets.
And this is something of a point of inflection. The BuildStream + Freedesktop SDK ecosystem has clearly not displaced Yocto, nor for that matter Linux Mint. But, like many of my favourite musicians, it has been quietly thriving in obscurity. People I don't know are using it to do things that I don't completely understand. I've seen it in comparison articles, and even job adverts. ChatGPT can generate credible BuildStream elements about as well as it can generate Dockerfiles (i.e. not very well, but it indicates a certain level of ubiquity). There have been conferences, drama, mistakes, neglect. It's been through an 8-person corporate team hyper-optimizing the code, and it's been through a mini dark age where volunteers thanklessly kept the lights on almost single-handedly, and it's even survived its transition to the Apache Foundation.
Through all of this, the secret to its success is probably that it's just a really nice tool to work with. As much as you can enjoy software integration, I enjoy using BuildStream to do it; things rarely break, when they do it's rarely difficult to fix them, and most importantly the UI is really colourful! I'm now using it to build embedded system images for a product named CTRL, which you can think of as… a Linux distribution. There are some technical details to this which I'm working to improve, but I won't bore you with them here.
I also won't bore you with the topic of community governance this month, but that's what's currently on my mind. If you've been part of the GNOME Foundation for a few years, you'll know this is something that's usually boring and occasionally becomes of almost life-or-death importance. The "let's just be really sound" model works great, until one day when you least expect it, and then suddenly it really doesn't. There is no perfect defence against this, and in open source communities it's our diversity that brings the most resilience. When GNOME loses, KDE gains, and that way at least we still don't have to use Windows. Indeed, this is one argument for investing in BuildStream even if it remains forever something of a minority sport. I guess I just need to remember that when you have to start thinking hard about governance, that's a sign of success.
23 Apr 2026 8:48pm GMT
Sebastian Wick: How Hard Is It To Open a File?
It's a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:
- very simple, just call the standard library function
- extremely hard, don't trust anything
If you are an app developer, you're lucky and it's almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.
Opening a File, the Hard Way
Like so often, the details depend on the specifics, but in the worst-case scenario there is a process on either side of the security boundary, and both operate on a filesystem tree that they share.
Let's say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus take a subpath that is relative to that directory.
The first obvious problem is that the subpath can refer to files outside of the directory if it contains a .. component. If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and if we ever go outside of the directory, fail.
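As a minimal sketch of that fix (Python here for illustration; resolve_subpath is a made-up helper), note that the check is purely lexical, operating on strings only:

```python
import posixpath

def resolve_subpath(base_dir: str, subpath: str) -> str:
    # Lexically normalize: collapses "." and ".." components.
    candidate = posixpath.normpath(posixpath.join(base_dir, subpath))
    # Fail if the normalized result escaped the base directory.
    base = base_dir.rstrip("/")
    if candidate != base and not candidate.startswith(base + "/"):
        raise ValueError(f"subpath escapes the directory: {subpath!r}")
    return candidate

resolve_subpath("/srv/files", "a/b")   # "/srv/files/a/b"
try:
    resolve_subpath("/srv/files", "../.ssh/id_ed25519")
except ValueError:
    pass  # rejected, as intended
```

This says nothing about what the kernel will actually find when the path is opened, which is exactly where the next problems come in.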
The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine. Easy fix: resolve the symlinks, expand the path, then normalize it.
This is usually where most people think we're done, opening a file is not that hard after all, we can all do more fun things now. Really, this is where the fun begins.
The fix above works, as long as the less privileged process cannot change the filesystem tree anywhere in the file's path while the more privileged process tries to access it. Usually this is the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If it can, however, we have a classic TOCTOU (time-of-check to time-of-use) race.
We have the path foo/id_ed25519, we resolve the symlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink which points to ../.ssh. We just checked that the path resolves to a path inside the target directory, though, and happily open the path foo/id_ed25519, which now points to your SSH key. Not an easy fix.
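The race is easier to believe with a deterministic toy reproduction (Python; every path here is scratch data created by the script itself, with one temp directory standing in for ~/.ssh). We do the check, let the "attacker" swap the directory, then do the open:

```python
import os, tempfile

root = os.path.realpath(tempfile.mkdtemp())        # directory we confine access to
secret_dir = os.path.realpath(tempfile.mkdtemp())  # stands in for ~/.ssh
with open(os.path.join(secret_dir, "id_ed25519"), "w") as f:
    f.write("SECRET")

os.mkdir(os.path.join(root, "foo"))
target = os.path.join(root, "foo", "id_ed25519")
with open(target, "w") as f:
    f.write("innocent")

# Time of check: the fully resolved path is inside root. Looks safe.
assert os.path.realpath(target).startswith(root + "/")

# The "attacker" wins the race: foo becomes a symlink to the secret dir.
os.remove(target)
os.rmdir(os.path.join(root, "foo"))
os.symlink(secret_dir, os.path.join(root, "foo"))

# Time of use: the very same path string now hands over the secret.
with open(target) as f:
    leaked = f.read()
assert leaked == "SECRET"
```

In real attacks the swap happens concurrently rather than between two statements, but the effect is identical: the check and the use name different inodes.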
So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed.
The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors represent open files. It is true that they can do that, but fds opened with O_PATH do not require opening the file, but still provide a stable reference to an inode.
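Python exposes the same Linux flags, so the pinning behavior is easy to demonstrate: take an O_PATH fd to a file, delete the name, and the inode is still reachable through the fd.

```python
import os, stat, tempfile

path = os.path.join(tempfile.mkdtemp(), "victim")
with open(path, "w") as f:
    f.write("hello")

# O_PATH: a stable reference to the inode, without opening the file.
fd = os.open(path, os.O_PATH)

os.unlink(path)      # the name disappears from the namespace...
st = os.fstat(fd)    # ...but the fd still pins the inode
assert stat.S_ISREG(st.st_mode) and st.st_size == 5
os.close(fd)
```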
The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource.
Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let's say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path, and pass that to mount. Unfortunately for the user, the kernel resolves the symlinks in the path that an attacker might have managed to place there. Sometimes it's possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match.
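One way to do that after-the-fact verification, sketched in Python (reopen_verified is a made-up helper), is the /proc/self/fd trick: reopen through the magic link, which points straight at the inode instead of re-walking the original path string, and then compare device and inode numbers:

```python
import os

def reopen_verified(fd: int, flags: int) -> int:
    # /proc/self/fd/N is a magic link to the inode behind fd, so this
    # does not re-resolve (and re-race) the original path string.
    newfd = os.open(f"/proc/self/fd/{fd}", flags)
    a, b = os.fstat(fd), os.fstat(newfd)
    if (a.st_dev, a.st_ino) != (b.st_dev, b.st_ino):
        os.close(newfd)
        raise RuntimeError("file changed between open and reopen")
    return newfd
```

This is also the usual way to upgrade an O_PATH fd into one you can actually read from or write to.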
With that being said, sometimes it is not entirely avoidable to use paths, so let's also look into that as well!
In the scenario above, we have a directory in which we want all the paths to resolve in, and that the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it without the attacker being able to redirect it somewhere else.
With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, it does not follow it and instead opens the actual symlink inode. All the other components can still be symlinks, and they will still be followed. We can, however, split up the path and open a new file descriptor for each path segment in turn, resolving symlinks manually, until we have walked the entire path.
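That loop can be sketched in Python (chase_no_symlinks is a made-up name; this simplified variant refuses symlinks outright rather than resolving them safely):

```python
import os, stat

def chase_no_symlinks(dirfd: int, path: str) -> int:
    # Walk one component at a time, opening each with O_PATH | O_NOFOLLOW
    # relative to the previous fd. With O_PATH, O_NOFOLLOW makes a symlink
    # open as the symlink inode itself, so we can detect and reject it.
    fd = os.dup(dirfd)
    try:
        for part in path.split("/"):
            if part in ("", "."):
                continue
            if part == "..":
                raise ValueError("refusing to walk out of the root")
            nxt = os.open(part, os.O_PATH | os.O_NOFOLLOW | os.O_CLOEXEC,
                          dir_fd=fd)
            os.close(fd)
            fd = nxt
            if stat.S_ISLNK(os.fstat(fd).st_mode):
                raise ValueError(f"symlink encountered at {part!r}")
        return fd
    except BaseException:
        os.close(fd)
        raise
```

A production implementation (such as glnx_chaseat or systemd's chase()) additionally has to resolve symlinks relative to the right root instead of just rejecting them, and to deal with mount points, automounts, and the newer openat2() resolve flags.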
libglnx chase
libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them. The library is built around the discipline of "always have an fd, never use an absolute path when you can use an fd."
The most recent addition is glnx_chaseat, which provides safe path traversal. It was inspired by systemd's chase() and does precisely what was described above.
int glnx_chaseat (int dirfd,
const char *path,
GlnxChaseFlags flags,
GError **error);
It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:
typedef enum _GlnxChaseFlags {
/* Default */
GLNX_CHASE_DEFAULT = 0,
/* Disable triggering of automounts */
GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,
/* Do not follow the path's right-most component. When the path's right-most
* component refers to symlink, return O_PATH fd of the symlink. */
GLNX_CHASE_NOFOLLOW = 1 << 2,
/* Do not permit the path resolution to succeed if any component of the
* resolution is not a descendant of the directory indicated by dirfd. */
GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,
/* Symlinks are resolved relative to the given dirfd instead of root. */
GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,
/* Fail if any symlink is encountered. */
GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,
/* Fail if the path's right-most component is not a regular file */
GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,
/* Fail if the path's right-most component is not a directory */
GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,
/* Fail if the path's right-most component is not a socket */
GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
} GlnxChaseFlags;
While it doesn't sound too complicated to implement, a lot of details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested, it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.
An Aside on Standard Libraries
The POSIX APIs are not great at dealing with the issue. The GLib/Gio APIs (GFile, etc.) are even worse and only accept paths. Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction which is based entirely on paths.
If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully - and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call.
This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place.
So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on.
The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits:
- The fd-chasing approach works everywhere because it is a real filesystem managed by the kernel
- The filesystem becomes independent of GLib and can be used for example from Rust as well
- It stacks with other FUSE filesystems, such as the XDG Desktop Document Portal used by Flatpak
Wait, Why Are You Talking About This?
Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis on it and found a number of issues. Even though Flatpak developers were aware of the dangers of filesystems, and created libglnx because of it, most of the discovered issues were just about that. One of them (CVE-2026-34078) was a complete sandbox escape.
flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that's what command-line tools accept.
The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes - and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app).
Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn't "change one function" - it was "audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd." That's commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them.
If the GLib standard file and path APIs were secure, we would not have had this issue.
Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes.
Unfortunately with rather large changes comes a high likelihood of something going wrong. For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie!
In the end, we managed to fix everything, made Flatpak more secure, the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well.
23 Apr 2026 8:41pm GMT