18 Apr 2026

Planet GNOME

Matthias Klumpp: Hello old new β€œProjects” directory!

If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: "Projects".

Why?

With the recent 0.20 release of xdg-user-dirs we enabled the "Projects" directory by default. Support for it has existed in the code since 2007, but was never formally enabled. This closes a more than 11-year-old bug report asking for the feature.

The purpose of the Projects directory is to give applications a default location for project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples include software engineering projects, scientific projects, 3D printing projects, CAD designs, and even video editing projects, where the project files would end up in "Projects" while the rendered output video is more at home in "Videos".

By enabling this by default, and subsequently adding support to GLib, Flatpak, desktops, and interested applications over the coming months, we hope to give applications that operate in a "project-centric" manner with mixed media a better default storage location. As of now, such tools either default to the home directory or clutter the "Documents" folder, neither of which is ideal. It also gives users a default organizational structure, hopefully leading to less clutter overall and better storage layouts.

This sucks, I don't like it!

As usual, you are in control and can modify your system's behavior. If you do not like the "Projects" folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
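As a sketch, the relevant entry in user-dirs.dirs looks like the following. The variable name follows the pattern of the other XDG entries (XDG_DOCUMENTS_DIR, XDG_MUSIC_DIR, and so on); check the file xdg-user-dirs generated for you for the exact spelling:

```shell
# ~/.config/user-dirs.dirs (excerpt)
# Keep the dedicated folder:
XDG_PROJECTS_DIR="$HOME/Projects"
# Or point it back at your home directory, which effectively
# disables the separate folder:
# XDG_PROJECTS_DIR="$HOME"
```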

If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).

What else is new?

Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the "arbitrary code execution from unsanitized input" bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.

Thanks to everyone who contributed to this release!

18 Apr 2026 8:06am GMT

17 Apr 2026


Allan Day: GNOME Foundation Update, 2026-04-17

Welcome to another update about everything that's been happening at the GNOME Foundation. It's been four weeks since my last post, due to a vacation and public holidays, so there's lots to cover. This period included a major announcement, but there's also been a lot of other notable work behind the scenes.

Fellowship & Fundraising

The really big news from the last four weeks was the launch of our new Fellowship program. This is something that the Board has been discussing for quite some time, so we were thrilled to be able to make the program a reality. We are optimistic that it will make a significant difference to the GNOME project.

If you didn't see it already, check out the announcement for details. Also, if you want to apply to be our first Fellow, you have just three days until the application deadline on 20th April!

donate.gnome.org has been a great success for the GNOME Foundation, and it is only through the support of our existing donors that the Fellowship was possible. Despite these amazing contributions, the GNOME Foundation needs to grow our donations if we are going to be able to support future Fellowship rounds while simultaneously sustaining the organisation.

To this end, there's an effort underway to build up our marketing and fundraising capacity. This is primarily taking place in the GNOME Engagement Team, and we would love help from the community boosting our outbound comms. If you are interested, please join the Engagement space and look out for announcements.

Also, if you haven't already, and are able to do so: please donate!

Conferences

We have two major events coming up, with Linux App Summit in May and GUADEC in July, so right now is a busy time for conferences.

The schedules for both of these upcoming events are currently being worked on, and arrangements for catering, photographers, and audio visual services are all in the process of being finalized.

The Travel Committee has also been busy handling GUADEC travel requests, and has sent out the first batch of approvals. There are some budget pressures right now due to rising flight prices, but budget has been put aside for more GUADEC travel, so please apply if you want to attend and need support.

April 2026 Board Meeting

This week was the Board's regular monthly meeting for April. Highlights from the meeting included:

Infrastructure

As usual, plenty has been happening on the infrastructure side over the past month. This has included:

Admin & Finance

On the accounting side, the team has been busy catching up on regular work that got put to one side during last month's audit. This caused some significant delays to our accounting processes, but we are now almost up to date.

Reorganisation of many of our finance processes has also continued over the past four weeks. Progress has included a new structure and cadence for our internal accounting calls, continued configuration of our new payments platform, and new forms for handling reimbursement requests.

Finally, we have officially kicked off the process of migrating to our new physical mail service. Work on this is ongoing and will take some time to complete. Our new address is on the website, if anyone needs it.

That's it for this report! Thanks for reading, and feel free to use the comments if you have questions!

17 Apr 2026 3:22pm GMT

Andrea Veri: GNOME GitLab Git traffic caching


Introduction

One of the most visible signs that GNOME's infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab's webservice pods, generating redundant load for work that was essentially identical.

GNOME's infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.

This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly's CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time.

The problem

The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.

The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.

For a fresh clone the body contains only want lines - one per ref the client is requesting:

0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
0032want 93e944c9f728a4b9da506e622592e4e3688a805c
0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
...

For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:

00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
0000
0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
...

The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses - exactly the property a cache can help with.
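To make the pkt-line framing concrete, here is a small Python sketch (my own illustration, not part of any GNOME tooling) that builds and parses pkt-lines: the four hex digits encode the total line length including the prefix itself, and 0000 is the special flush-pkt section delimiter rather than an empty line.

```python
def pkt_line(payload: bytes) -> bytes:
    """Frame a payload as a pkt-line: 4-digit hex length prefix,
    where the length counts the 4 prefix bytes themselves."""
    return f"{len(payload) + 4:04x}".encode() + payload

FLUSH_PKT = b"0000"  # zero length: section boundary, not an empty payload

def parse_pkt_lines(data: bytes) -> list:
    """Split a raw body back into payloads (None marks a flush-pkt)."""
    out, pos = [], 0
    while pos < len(data):
        length = int(data[pos:pos + 4], 16)
        if length == 0:
            out.append(None)
            pos += 4
        else:
            out.append(data[pos + 4:pos + length])
            pos += length
    return out

sha = "7d20e995c3c98644eb1c58a136628b12e9f00a78"
line = pkt_line(f"want {sha}\n".encode())
# 4 (prefix) + 5 ("want ") + 40 (hex SHA) + 1 (newline) = 50 = 0x32
assert line.startswith(b"0032")
assert parse_pkt_lines(line + FLUSH_PKT) == [f"want {sha}\n".encode(), None]
```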

Architecture overview

The overall setup involves four components:

flowchart TD
client["Git client / CI runner"]
gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"]
nginx["OpenResty Nginx"]
lua["Lua: git_upload_pack.lua"]
cdn_origin["/cdn-origin internal location"]
fastly_cdn["Fastly CDN"]
origin["gitlab.gnome.org via its origin (second pass)"]
gitlab["GitLab webservice"]
valkey["Valkey denylist"]
webhook["gitlab-git-cache-webhook"]
gitlab_events["GitLab project events"]
client --> gitlab_gnome
gitlab_gnome --> nginx
nginx --> lua
lua -- "check denylist" --> valkey
lua -- "private repo: BYPASS" --> gitlab
lua -- "public/internal: internal redirect" --> cdn_origin
cdn_origin --> fastly_cdn
fastly_cdn -- "HIT" --> cdn_origin
fastly_cdn -- "MISS: origin fetch" --> origin
origin --> gitlab
gitlab_events --> webhook
webhook -- "SET/DEL git:deny:" --> valkey

The request path for a public or internal repository looks like this:

  1. The Git client runs git fetch or git clone. Git's smart HTTP protocol translates this into two HTTP requests: a GET /Namespace/Project.git/info/refs?service=git-upload-pack for ref discovery, followed by a POST /Namespace/Project.git/git-upload-pack carrying the negotiation body. It is that second request - the expensive pack-generating one - that the cache targets.
  2. It arrives at gitlab.gnome.org's Nginx server, which acts as the reverse proxy in front of GitLab's webservice.
  3. The git-upload-pack location runs a Lua script that parses the repo path, reads the request body, and SHA256-hashes it. The hash is the foundation of the cache key: because the body encodes the exact set of want and have SHAs the client is negotiating, two jobs fetching the same commit from the same repository will produce byte-for-byte identical bodies and therefore the same hash - making the cached packfile safe to reuse.
  4. Lua checks Valkey: is this repo in the denylist? If yes, the request is proxied directly to GitLab with no caching.
  5. For public/internal repos, Lua strips the Authorization header, builds a cache key, converts the POST to a GET, and does an internal redirect to /cdn-origin. The POST-to-GET conversion is necessary because Fastly does not apply consistent hashing to POST requests - each of the hundreds of nodes within a POP maintains its own independent cache storage, so the same POST request hitting different nodes will always be a miss. By converting to a GET, Fastly's consistent hashing kicks in and routes requests with the same cache key to the same node, which means the cache is actually shared across all concurrent jobs hitting that POP.
  6. The /cdn-origin location proxies to the Fastly git cache CDN with the X-Git-Cache-Key header set.
  7. Fastly's VCL sees the key and does a cache lookup. On a HIT it returns the cached pack. On a MISS it fetches from gitlab.gnome.org directly via its origin (bypassing the CDN to avoid a loop) - the same Nginx instance - and caches the response for 30 days.
  8. On that second pass (origin fetch), Nginx detects the X-Git-Cache-Internal header, decodes the original POST body from X-Git-Original-Body, restores the request method, and proxies to GitLab.
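The key construction in step 3 reduces to a few lines. This Python sketch is a simplified stand-in for the production Lua, following the key layout described in the post (version prefix, repo path, SHA256 of the raw negotiation body):

```python
import hashlib

def cache_key(repo_path: str, body: bytes, version: str = "v2") -> str:
    """Derive the CDN cache key. Because the body encodes the exact
    want/have set, identical fetches yield identical keys."""
    return f"{version}:{repo_path}:{hashlib.sha256(body).hexdigest()}"

body = b"0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78\n0000"
k1 = cache_key("GNOME/glib.git", body)
k2 = cache_key("GNOME/glib.git", body)
assert k1 == k2  # same repo + same negotiation body -> cache HIT
assert k1 != cache_key("GNOME/gtk.git", body)  # different repo -> different key
```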

The Nginx and Lua layer

The Nginx configuration exposes two relevant locations. The first is the internal one used for the CDN proxy leg:

location ^~ /cdn-origin/ {
    internal;
    rewrite ^/cdn-origin(/.*)$ $1 break;
    proxy_pass $cdn_upstream;
    proxy_ssl_server_name on;
    proxy_ssl_name <cdn-hostname>;
    proxy_set_header Host <cdn-hostname>;
    proxy_set_header Accept-Encoding "";
    proxy_http_version 1.1;
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    header_filter_by_lua_block {
        ngx.header["X-Git-Cache-Key"] = ngx.req.get_headers()["X-Git-Cache-Key"]
        ngx.header["X-Git-Body-Hash"] = ngx.req.get_headers()["X-Git-Body-Hash"]

        local xcache = ngx.header["X-Cache"] or ""
        if xcache:find("HIT") then
            ngx.header["X-Git-Cache-Status"] = "HIT"
        else
            ngx.header["X-Git-Cache-Status"] = "MISS"
        end
    }
}

The header_filter_by_lua_block here is doing something specific: it reads X-Cache from the response Fastly returns and translates it into a clean X-Git-Cache-Status header for observability. The X-Git-Cache-Key and X-Git-Body-Hash are also passed through so that callers can see what cache entry was involved.

The second location is git-upload-pack itself, which delegates all the logic to a Lua file:

location ~ /git-upload-pack$ {
    client_body_buffer_size 5m;
    client_max_body_size 5m;

    access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;

    header_filter_by_lua_block {
        local key = ngx.req.get_headers()["X-Git-Cache-Key"]
        if key then
            ngx.header["X-Git-Cache-Key"] = key
        end
    }

    proxy_pass http://gitlab-webservice;
    proxy_http_version 1.1;
    proxy_set_header Host gitlab.gnome.org;
    proxy_set_header X-Real-IP $http_fastly_client_ip;
    proxy_set_header X-Forwarded-For $http_fastly_client_ip;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header Connection "";
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}

The access_by_lua_file directive runs before the request is proxied. If the Lua script calls ngx.exec("/cdn-origin" .. uri), Nginx performs an internal redirect to the CDN location and the proxy_pass to GitLab is never reached. If the script returns normally (for private repos or non-fetch commands), the request falls through to the proxy_pass.

Building the cache key

The full Lua script that runs in access_by_lua_file handles both passes of the request. The first pass (client β†’ nginx) does the heavy lifting:

local resty_sha256 = require("resty.sha256")
local resty_str = require("resty.string")
local redis_helper = require("redis_helper")

local redis_host = os.getenv("REDIS_HOST") or "localhost"
local redis_port = os.getenv("REDIS_PORT") or "6379"

-- Second pass: request arriving from CDN origin fetch.
-- Decode the original POST body from the header and restore the method.
if ngx.req.get_headers()["X-Git-Cache-Internal"] then
    local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
    if encoded_body then
        ngx.req.read_body()
        local body = ngx.decode_base64(encoded_body)
        ngx.req.set_method(ngx.HTTP_POST)
        ngx.req.set_body_data(body)
        ngx.req.set_header("Content-Length", tostring(#body))
        ngx.req.clear_header("X-Git-Original-Body")
    end
    return
end

The second-pass guard is at the top of the script. When Fastly's origin fetch arrives, it will carry X-Git-Cache-Internal: 1. The script detects that, reconstructs the POST body from the base64-encoded header, restores the POST method, and returns - allowing Nginx to proxy the real request to GitLab.

For the first pass, the script parses the repo path from the URI, reads and buffers the full request body, and computes a SHA256 over it:

-- Only cache "fetch" commands; ls-refs responses are small, fast, and
-- become stale on every push (the body hash is constant so a long TTL
-- would serve outdated ref listings).
if not body:find("command=fetch", 1, true) then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

-- Hash the body
local sha256 = resty_sha256:new()
sha256:update(body)
local body_hash = resty_str.to_hex(sha256:final())

-- Build cache key: cache_versioning + repo path + body hash
local cache_key = "v2:" .. repo_path .. ":" .. body_hash

A few things worth noting here. The ls-refs command is explicitly excluded from caching. The reason is that ls-refs is used to list references and its request body is essentially static (just a capability advertisement). If we cached it with a 30-day TTL, a push to the repository would not invalidate the cache - the key would be the same - and clients would get stale ref listings. Fetch bodies, on the other hand, encode exactly the SHAs the client wants and already has. The same set of want/have lines always maps to the same pack, which makes them safe to cache for a long time.

The v2: prefix is a cache version string. It makes it straightforward to invalidate all existing cache entries if we ever need to change the key scheme, without touching Fastly's purge API.

The POST-to-GET conversion

This is probably the most unusual part of the design:

-- Carry the POST body as a base64 header and convert to GET so that
-- Fastly's intra-POP consistent hashing routes identical cache keys
-- to the same server (Fastly only does this for GET, not POST).
ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
ngx.req.set_method(ngx.HTTP_GET)
ngx.req.set_body_data("")

return ngx.exec("/cdn-origin" .. uri)

Fastly's shield feature routes cache misses through a designated intra-POP "shield" node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important for us because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times anyway.

The catch is that Fastly's consistent hashing and shield routing only work for GET requests. POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache - by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch - but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result. By converting the POST to a GET and encoding the body in a header, we get consistent hashing and shield-level request collapsing for free.

The VCL on the Fastly side uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.
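The body round trip across the two passes can be illustrated with a minimal Python sketch. Header names match the post; everything else is simplified (in particular, in production the internal marker is added on the CDN leg, not by the client-facing pass):

```python
import base64

def first_pass(body: bytes) -> dict:
    """Client -> nginx: stash the binary POST body in a base64 header
    and convert the method to GET so the CDN applies consistent hashing."""
    return {
        "method": "GET",
        "X-Git-Original-Body": base64.b64encode(body).decode("ascii"),
        "X-Git-Cache-Internal": "1",  # simplification: set by Fastly in reality
    }

def second_pass(request: dict) -> bytes:
    """CDN origin fetch -> nginx: restore the original POST body."""
    assert request.get("X-Git-Cache-Internal") == "1"
    return base64.b64decode(request["X-Git-Original-Body"])

body = b"00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff\n0000"
assert second_pass(first_pass(body)) == body  # lossless round trip
```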

Protecting private repositories

We cannot route private repository traffic through an external CDN - that would mean sending authenticated git content to a third-party cache. The way we prevent this is a denylist stored in Valkey. Before doing anything else, the Lua script checks whether the repository is listed there:

local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)

if err then
    ngx.log(ngx.ERR, "git-cache: Redis error for ", repo_path, ": ", err,
            " - cannot verify project visibility, bypassing CDN")
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

if denied then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    ngx.header["X-Git-Body-Hash"] = body_hash:sub(1, 12)
    return
end

-- Public/internal repo: strip credentials before routing through CDN
ngx.req.clear_header("Authorization")

If Valkey is unreachable, the script logs an error and bypasses the CDN entirely, treating the repository as if it were private. This is the safe default: the cost of a Redis failure is slightly increased load on GitLab, not the risk of routing private repository content through an external cache. In practice, Valkey runs alongside Nginx on the same node, so true availability failures are uncommon.

The denylist is maintained by gitlab-git-cache-webhook, a small FastAPI service. It listens for GitLab system hooks on project_create and project_update events:

HANDLED_EVENTS = {"project_create", "project_update"}

@router.post("/webhook")
async def webhook(request: Request, ...) -> Response:
    ...
    event = body.get("event_name", "")
    if event not in HANDLED_EVENTS:
        return Response(status_code=204)

    project = body.get("project", {})
    path = project.get("path_with_namespace", "")
    visibility_level = project.get("visibility_level")

    if visibility_level == 0:
        await deny_repo(path)
    else:
        removed = await allow_repo(path)
    return Response(status_code=204)

GitLab's visibility_level is 0 for private, 10 for internal, and 20 for public. Internal repositories are intentionally treated the same as public ones here: they are accessible to any authenticated user on the instance, so routing them through the CDN is acceptable. Only truly private repositories go into the denylist.

The key format in Valkey is git:deny:<path_with_namespace>. The Lua redis_helper module does an EXISTS check on that key. The webhook service also ships a reconciliation command (python -m app.reconcile) that does a full resync of all private repositories via the GitLab API, which is useful to run on first deployment or after any extended Valkey downtime.
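The webhook's decision logic and the Valkey key format reduce to a couple of pure functions, sketched here in Python (the helper names are mine, not the service's):

```python
# GitLab visibility_level values, as documented
PRIVATE, INTERNAL, PUBLIC = 0, 10, 20

def deny_key(path_with_namespace: str) -> str:
    """The Valkey key the Lua side checks with EXISTS."""
    return f"git:deny:{path_with_namespace}"

def should_deny(visibility_level: int) -> bool:
    """Only truly private projects are kept off the CDN; internal ones
    are visible to any authenticated user and may be cached."""
    return visibility_level == PRIVATE

assert deny_key("GNOME/secret-repo") == "git:deny:GNOME/secret-repo"
assert should_deny(PRIVATE)
assert not should_deny(INTERNAL) and not should_deny(PUBLIC)
```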

The Fastly VCL

On the Fastly side, three VCL subroutines carry the relevant logic. In vcl_recv:

if (req.url ~ "/info/refs") {
    return(pass);
}
if (req.http.X-Git-Cache-Key) {
    set req.backend = F_Host_1;
    if (req.restarts == 0) {
        set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
    }
    return(lookup);
}

/info/refs is always passed through uncached - it is the capability advertisement step and caching it would cause problems with protocol negotiation. Requests carrying X-Git-Cache-Key get an explicit lookup directive and are routed through the shield. Everything else falls through to Fastly's default behaviour.

In vcl_hash, the cache key overrides the default URL-based key:

if (req.http.X-Git-Cache-Key) {
    set req.hash += req.http.X-Git-Cache-Key;
    return(hash);
}

And in vcl_fetch, responses are marked cacheable when they come back with a 200 and a non-empty body:

if (req.http.X-Git-Cache-Key && beresp.status == 200) {
    if (beresp.http.Content-Length == "0") {
        set beresp.ttl = 0s;
        set beresp.cacheable = false;
        return(deliver);
    }
    set beresp.cacheable = true;
    set beresp.ttl = 30d;
    set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;
    unset beresp.http.Cache-Control;
    unset beresp.http.Pragma;
    unset beresp.http.Expires;
    unset beresp.http.Set-Cookie;
    return(deliver);
}

The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want/have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME's GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.

Empty responses (Content-Length: 0) are explicitly not cached. GitLab can return an empty body in edge cases and caching that would break all subsequent fetches for that key.

Conclusions

The system has been running in production for a few days now, and the cache hit rate on fetch traffic has been consistently high (over 80%). If something goes wrong with the cache layer, the worst case is that requests fall back to BYPASS and GitLab handles them directly, which is how things worked before. This also means we no longer redirect any traffic to github.com.

That should be all for today, stay tuned!

17 Apr 2026 2:00pm GMT

Jussi Pakkanen: Multi merge sort, or when optimizations aren't

In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in stdlibc++. The question then becomes: could it be made even faster? If you go through the relevant literature, one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue.

This seems like a slam dunk for performance.

Implementing multimerge was conceptually straightforward but getting all the gritty details right took a fair bit of time. Once I got it working the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster but not noticeably so.

Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that, the measurements told me very little.

The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element: one to determine which of the two lists has the smaller element at the front, and one to see whether removing that element exhausted its list. The former is basically random and the latter is always false except when the last element is processed. This amounts to 0.5 mispredicted branches per element per round.

A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children: that's three value comparisons plus two checks of whether the children actually exist. Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is so expensive that it negates the advantage of fewer rounds.
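One rough way to probe this hypothesis is to count comparisons directly. The Python sketch below (my own illustration, unrelated to the C++ code in question) merges four sorted runs either pairwise or through a heap-based multiway merge, tallying every call to __lt__:

```python
import heapq
import random

class Counted:
    """Int wrapper that counts how many times it is compared."""
    comparisons = 0
    __slots__ = ("v",)
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.comparisons += 1
        return self.v < other.v

def two_way_merge(a, b):
    """Classical two-list merge: one value comparison per output element."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if b[j] < a[i]:
            out.append(b[j]); j += 1
        else:
            out.append(a[i]); i += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

random.seed(1)
runs = [[Counted(v) for v in sorted(random.randrange(10**6) for _ in range(1024))]
        for _ in range(4)]

Counted.comparisons = 0
pairwise = two_way_merge(two_way_merge(runs[0], runs[1]),
                         two_way_merge(runs[2], runs[3]))
pairwise_cmps = Counted.comparisons

Counted.comparisons = 0
multiway = list(heapq.merge(*runs))  # heap-based 4-way merge
multiway_cmps = Counted.comparisons

assert [x.v for x in pairwise] == [x.v for x in multiway]
print("pairwise:", pairwise_cmps, "multiway heap:", multiway_cmps)
```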

Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point as it does not really have any advantage over the regular merge sort.

17 Apr 2026 10:41am GMT

This Week in GNOME: #245 Infinite Ranges

Update on what happened across the GNOME project in the week from April 10 to April 17.

GNOME Core Apps and Libraries

Libadwaita β†—

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) πŸ³οΈβ€βš§οΈπŸ³οΈβ€πŸŒˆ reports

AdwAboutDialog's Other Apps section title can now be overridden to say something other than "Other Apps by developer-name"

Alice (she/her) πŸ³οΈβ€βš§οΈπŸ³οΈβ€πŸŒˆ announces

AdwEnumListModel has been deprecated in favor of the recently added GtkEnumList. They work identically and so migrating should be as simple as find-and-replace

Maps β†—

Maps gives you quick access to maps all across the world.

mlundblad announces

Maps now shows track/stop location for boarding and disembarking stations/stops on public transit journeys (when available in upstream data)

GNOME Circle Apps and Libraries

Graphs β†—

Plot and manipulate data

Sjoerd Stendahl says

After two years without a major feature update, we are happy to announce Graphs 2.0. It's by far our biggest update yet. We are targeting a stable release next month, but in the meantime we are running an official beta testing period. We are very happy for any feedback, especially in this period!

The upcoming Graphs 2.0 features some major long-requested changes: equations now span an infinite range and can be edited and manipulated analytically, the style editor has been redesigned with a live preview, we revamped the import dialog, and imported data now supports error bars. Equations with infinite values in them such as y=tan(x) now also render properly, with values being drawn all the way to infinity and without a line going from plus to minus infinity. We've also added support for spreadsheet and SQLite database files, drag-and-drop importing, improved curve fitting with residuals and better confidence bands, and now have proper mobile support.

These are just some highlights, a more complete list of changes, including a description of how to get the beta version, can be found here: https://blogs.gnome.org/sstendahl/2026/04/14/announcing-the-upcoming-graphs-2-0/

Gaphor β†—

A simple UML and SysML modeling tool.

Arjan announces

Mareike Keil of the University of Mannheim published her article "NEST‑UX: Neurodivergent and Neurotypical Style Guide for Enhanced User Experience". The paper explores how user interfaces can be designed to be accessible for both neurotypical and neurodivergent users, including people with autism, ADHD or giftedness.

The Gaphor team worked together with Mareike to implement suggestions she found during her research, allowing us to test how well these ideas work in practice.

The article can be found at https://academic.oup.com/iwc/advance-article-abstract/doi/10.1093/iwc/iwag011/8571596.

Mareike's LinkedIn announcement can be found at https://www.linkedin.com/feed/update/urn:li:activity:7447176733759352832/.

Third Party Projects

Bilal Elmoussaoui announces

Now that most of the basic features work as expected, I would like to publicly introduce you to Goblin, a GObject Linter, for C codebases. You can read more about it at https://belmoussaoui.com/blog/23-goblin-linter/

Anton Isaiev says

RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)

Versions 0.10.15-0.10.22 bring a week of polish across the UI, security, and terminal experience.

Terminal got better. Font zoom (Ctrl+Scroll, Ctrl+Plus/Minus) and optional copy-on-select landed. The context menu now works properly - VTE's native API replaced the custom popover that was stealing focus and breaking clipboard actions. On X11 sessions (MATE, XFCE) where GTK4's NGL renderer caused blank popovers, RustConn auto-detects and falls back to Cairo.

Sidebar and navigation. Groups expand/collapse on double-click anywhere on the row. The Local Shell button moved to the header bar so it's always visible. Protocol filter bar is now optional and togglable. Tab groups show as a [GroupName] prefix in the tab title, and a new "Close All in Group" action cleans up grouped tabs at once. A tab group chooser dialog with clickable pill buttons replaces manual retyping.

RDP fixes. Multiple shared folders now map correctly in embedded IronRDP mode - previously only the first path was used. SSH Port Forwarding UI, which had silently disappeared from the connection dialog, is back.

Security hardened. Machine key encryption dropped the predictable hostname+username fallback; the /etc/machine-id path now uses HKDF-SHA256 with app-specific salt. Context menu labels and sidebar accessible labels are localized for screen readers.

Ctrl+K no longer hijacks the terminal - it was removed from the global search shortcut, so nano and other terminal apps get it back. Terminal auto-focus after connection means you can type immediately.

Export and import. Export dialog gained a group filter, and RustConn Native (.rcn) is now the default format in both import and export dialogs.

Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn

Mufeed Ali reports

Wordbook 1.0.0 was released

Wordbook is now a fully offline application with no in-app downloads. Pronunciation data is now sourced from WordNet where possible, allowing better grouping of definitions in homonyms like "bass". In general, many UI/UX improvements and bug fixes were also made. The community also helped by localizing the app for a total of 6 new languages.

Try it on Flathub.

Pods β†—

Keep track of your podman containers.

marhkb says

Pods 3.0.0 is out!

This major release introduces a brand-new container engine abstraction layer allowing for greater flexibility.

Based on this new layer, Pods now features initial Docker support, making it easier for users to manage their containers regardless of their preferred backend.

Check it out on Flathub.

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

17 Apr 2026 12:00am GMT

16 Apr 2026

Thibault Martin: TIL that Pagefind does great client-side search

I post more and more content on my website. What was visible at a glance back then is now more difficult to find. I wanted to implement search, but this is a static website: everything is built once and then published somewhere as final, immutable pages. I can't send a search request to a server and get results in return.

Or that's what I thought! Pagefind is a neat JavaScript library that does two things:

  1. It produces an index of the content right after building the static site.
  2. It provides 2 web components to insert in my pages: <pagefind-modal> that is the search modal itself, hidden by default, and <pagefind-modal-trigger> that looks like a search field and opens the modal.

The pagefind-modal component looks up the index when the user types a query. The index is a static file, so there is no need for a backend that processes queries. Of course this only works for basic queries, but it's a great tool already!
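The principle generalises beyond Pagefind: build an inverted index once at publish time, then answer queries with pure lookups against that static file. Here is a toy Python sketch of the idea; nothing in it reflects Pagefind's actual index format or API.

```python
import json
import re


def build_index(pages: dict[str, str]) -> str:
    """Build a word -> [url, ...] inverted index, serialized once at build time."""
    index: dict[str, list[str]] = {}
    for url, text in pages.items():
        for word in set(re.findall(r"[a-z0-9]+", text.lower())):
            index.setdefault(word, []).append(url)
    return json.dumps(index)


def search(serialized_index: str, query: str) -> list[str]:
    """Answer a query by lookups alone -- no server-side processing."""
    index = json.loads(serialized_index)
    words = re.findall(r"[a-z0-9]+", query.lower())
    results = [set(index.get(w, [])) for w in words]
    return sorted(set.intersection(*results)) if results else []


pages = {
    "/niri": "Niri supports user-provided GLSL shaders",
    "/pagefind": "Pagefind does great client-side search",
}
idx = build_index(pages)
```

The search function never touches the original pages, only the prebuilt index, which is why a static host is enough.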

Pagefind is also easy to customize via a list of CSS variables. Adding it to this website was very straightforward.

16 Apr 2026 10:00am GMT

15 Apr 2026

Thibault Martin: I realized that Niri can have gorgeous animation

I was a huge fan of Niri already. It's a scrolling tiling window manager. Roughly:

It means that windows always take the optimal amount of space, and they're very neatly organized. It's extremely pleasant to use and keyboard friendly.

Don't mind the apparent slowness: this was recorded on a 10 year old laptop, opening OBS is enough to make its CPU go brr. When OBS is not running, Niri is buttery smooth.

But now I've learned that Niri supports user-provided GLSL shaders for several animations. Roughly: you can animate how windows appear and disappear (and other events, but let's keep things simple).

Some people out there have created collections of shaders that work wonderfully for Niri:

My personal favorite is the glitchy one.

In a world of uniform UIs, these frivolous, unnecessary and creative ways to interact with users are a breath of fresh air! Those animations are healing my inner 14 year old.

15 Apr 2026 6:30am GMT

14 Apr 2026

Steven Deobald: End of 10 Handout

There was a silly little project I'd tried to encourage many folks to attempt last summer. Sri picked it up back in September and after many months, I decided to wrap it up and publish what's there.

The intention is a simple, 2-sided A4 that folks can print and give out at repair cafes, like the End of 10 event series. Here's the original issue, if you'd like to look at the initial thought process.

When I hear fairly technical folks talk about Linux in 2026, I still consistently hear things like "I don't want to use the command line." The fact that Spotify, Discord, Slack, Zoom, and Steam all run smoothly on Linux is far removed from these folks' conception of the Linux desktop they might have formed back in 2009. Most people won't come to Linux because it's free of ✨shlop✨ and ads - they're accustomed to choking on that stuff. They'll come to Linux because they can open a spreadsheet for free, play Slay The Spire 2, or install Slack even though they promised themselves they wouldn't use their personal computer for work.

The GNOME we all know and love is one we take for granted… and the benefits of which we assume everyone wants. But the efficiency, the privacy, the universality, the hackability, the gorgeous design, and the lack of ads? All these things are the icing on the cake. The cake, like it or not, is installing Discord so you can join the Sunday book club.

Here's the A4. And here's a snippet:

An A4 snippet including "where's the start menu?", "where are my exes?", and "how do I install programs?"

If you try this out at a local repair cafe, I'd love to know which bits work and which don't. Good luck! ❀

14 Apr 2026 9:28pm GMT

Sjoerd Stendahl: Announcing the upcoming Graphs 2.0

It's been a while since we last shared a major update of Graphs. We've had a few minor releases, but the last time we had a substantial feature update was over two years ago.

This does not mean that development has stalled; quite the contrary. We've been working hard on some major changes that took some time to get completely right. Now, after a long development cycle, we're finally getting close enough to a release to announce an official beta period. In this blog post, I'll try to summarize most of the changes in this release.

New data types

In previous versions of Graphs, all data types were treated equally. This means that an equation is actually just regular data that is generated when it is loaded. Which is fine, but it also means that the span of the equation is limited, the equation cannot be changed afterwards, and operations on the equation are not reflected in its name. In Graphs 2.0, we have three distinct data types: Datasets, Generated Datasets and Equations.

Datasets are the regular, imported data that you all know and love; nothing has really changed here. Generated Datasets are essentially the same as regular datasets, except that these datasets are generated from an equation: you can change the equation, step size and limits after creating the item. Finally, the major new addition is the concept of equations. As the name implies, equations are generated based on an equation you enter, but they span an infinite range. Furthermore, operations you perform on equations are done analytically. Meaning, if you translate the equation `y = 2x + 3` by 3 in the y-direction, it will change to `y = 2x + 6`. If you then take the derivative, the equation will change to `y = 2`, et cetera. This is a long-requested feature, and has been made possible thanks to the magic of sympy and some trickery on the canvas. Below, there's a video that demonstrates these three data types.
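Since the post credits sympy, the analytic behaviour described above can be illustrated directly with it. This is a toy sketch of the idea, not Graphs' actual code:

```python
import sympy as sp

x = sp.symbols("x")
equation = 2 * x + 3

# Translating by 3 in the y-direction is plain symbolic addition.
translated = equation + 3          # 2*x + 6

# The derivative is computed analytically, not from sampled points.
derivative = sp.diff(translated, x)
```

Because the expression itself is transformed, the result holds over the whole (infinite) range, which is exactly what distinguishes Equations from sampled Generated Datasets.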

Revamped Style Editor

We have redesigned the style editor, which now shows a live preview of the edited style. This has been a pain point in the past: when you edited styles, you could not see how they actually affected the canvas. Now the style editor immediately shows how a change will affect the canvas, making it much easier to tune the style exactly to your preferences.

We have also added the ability to import styles. Since Graphs styles are based on matplotlib styles, most features from a matplotlib style generally work. Similarly, you can now export your styles as well making it easier to share your style or simply to send it to a different machine. Finally, the style editor can be opened independently of Graphs. By opening a Graphs style from your file explorer, you can change the style without having to open Graphs.

We also added some new options, such as the ability to style the new error bars, and the option to draw tick labels (i.e. the values) on all axes that have ticks.

A screenshot of the Graphs style editor, on the left you can see the different settings as in the previous version. On the right you can see the live preview
The revamped style editor

Improved data import

We have completely reworked the way data is imported. Under the hood, the import code is now fully modular, making it possible to add new parsers without having to mess with the rest of the code. Thanks to this rework, we have added support for spreadsheets (LibreOffice .ods and Microsoft Office .xlsx) and for SQLite database files. The UI updates automatically accordingly. For example, for spreadsheets, columns are imported by the column name (alphabetical letter) instead of an index, while SQLite imports show the tables present in the database.

The new import dialog for Graphs. You can see how multiple different types of items are about to be imported, as well as new settings
The new import dialog

Furthermore, the import dialog itself has been improved. It is now possible to add multiple files at once, or to import multiple datasets from the same file. Settings can be adjusted for each dataset individually, and you can even import from just a single column. We also added the ability to import error bars on either axis, and added pop-up buttons that explain certain settings.

Error bars

I mentioned this in the previous paragraph, but as it's a feature that has been requested multiple times, I thought it'd be good to state it explicitly as well: we have added support for error bars. Error bars can easily be set in the import dialog, and turned on and off for each axis when editing the item.

Singularity handling

The next version of Graphs will also finally handle singularities properly, so equations that have infinite values in them will be rendered as they should be. In the old version, for equations whose values go to infinity and then flip sign, the line was drawn from the maximum value to the minimum value, even though there are no values in between. Furthermore, since we render a finite number of datapoints, the lines didn't go up to infinity either, giving misleading graphs.

This is neatly illustrated in the pictures below. The values go all the way up to infinity like they should, and Graphs neatly knows that the line is not continuous, so it does not try to draw a straight line going from plus to minus infinity.

The old version of Graphs trying to render tan(x). Lines don't go all the way to plus/minus infinity, and they also draw a line between the high and low values.
The upcoming version of Graphs, where equations such as tan(x) are drawn properly.
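One common way to implement this kind of gap detection is to break the polyline wherever consecutive samples jump implausibly far, for instance by inserting NaN values, which most plotting backends (matplotlib included) render as a gap rather than a connecting line. A toy numpy sketch of the technique; this is not Graphs' actual implementation:

```python
import numpy as np

# Sample tan(x) across its poles at +/- pi/2.
x = np.linspace(-1.4 * np.pi, 1.4 * np.pi, 1000)
y = np.tan(x)

# A huge jump between neighbouring samples signals a singularity:
# mark the sample just after the jump as NaN so no line is drawn
# from plus infinity down to minus infinity.
jump = np.abs(np.diff(y))
y[1:][jump > 100.0] = np.nan
```

The threshold is arbitrary here; a real implementation would scale it to the sampling step and the local slope.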

Reworked Curve fitting

The curve fitting has been completely reworked under the hood. While the changes may not be obvious to a user, the code has essentially been replaced. The most important change is that the confidence band is now calculated correctly using the delta method. Previously, a naive approach was used where the limits were calculated from the standard deviation of each parameter, which does not hold up well in most cases. Other improvements: parameter values are no longer rounded in the new equation names (e.g. 421302 used to be rounded to 421000); more useful error messages are provided when things go wrong; custom equations now have an apply button, which makes entering new equations smoother; the root mean squared error was added as a second goodness-of-fit measure; and you can now inspect the residuals of your fit. Residuals are useful for checking whether your fit is physically sensible: a good fit shows residuals scattered randomly around zero with no visible pattern, while a systematic pattern in the residuals, such as a curve or a trend, suggests that the chosen model may not be appropriate for the data.

The old version of Graphs with the naive calculation of the confidence band
The new version of Graphs with the proper calculation of the confidence band.
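For the curious, the delta method amounts to propagating the fitted parameter covariance through the model's gradient. Here is a minimal numpy sketch for a straight-line fit, where the gradient in the parameters is simply the design matrix; this illustrates the statistics only and is not Graphs' actual code:

```python
import numpy as np

# Toy data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

# Least-squares fit of y = a*x + b; the design matrix columns are the
# gradients of the model with respect to the parameters (a, b).
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Parameter covariance estimated from the residual variance.
resid = y - A @ coef
sigma2 = resid @ resid / (len(x) - 2)
cov = sigma2 * np.linalg.inv(A.T @ A)

# Delta method: var(f(x)) ~= J(x) @ cov @ J(x).T, with J the gradient
# of the model in the parameters -- here just the rows of A.
band = np.sqrt(np.einsum("ij,jk,ik->i", A, cov, A))
upper = A @ coef + 1.96 * band
lower = A @ coef - 1.96 * band
```

For nonlinear models the rows of J hold the partial derivatives of the model at each x, but the propagation step is the same, which is what makes the band correct where the per-parameter standard deviation approach is not.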

UI changes

We've tweaked the UI a bit all over the place. One particular change worth highlighting is that we have moved the item and figure settings to the sidebar. The reason is that these settings typically affect the canvas, so you don't want to lose sight of the canvas while changing them. For example, when setting the axis limits, you want to see how your graph looks with the new limits; having a window obstructing the view does not help.

Another nice addition is that you can now simply click on a part of the canvas, such as the limits, and it will immediately bring you to the figure settings with the relevant field highlighted. See video below.

Mobile screen support

With the upcoming release, we finally have full support for mobile devices. See here a quick demonstration on an old OnePlus 6:

Figure exporting

One nice addition is the improved figure export. Instead of simply exporting the canvas as you see it on screen, you can now explicitly set a resolution. This is vital if you have a lot of figures in the same work, or need to publish your figures in academic journals, and need consistency in both figure size and font size. Of course, you can still use the previous behaviour and export at the same size as in the application.

The new export figure dialog

More quality of life changes

The above are just highlights of some major feature updates, but there's a large number of other features as well. Here's a rapid-fire list of other niceties we added:

And a whole bunch of bug-fixes, under-the-hood changes, and probably some features I have forgotten about. Overall, it's our biggest update yet by far, and I am excited to finally be able to share the update soon.

As always, thanks to everyone who has been involved in this version. Graphs is not a one-person project. The bulk of the maintenance is done by me and Christoph, the other maintainer. And of course, we should thank the entire community: both within GNOME (such as help from the design team and the translation team), and outsiders who come with feedback, reports or plain suggestions.

Getting the beta

This release is still in beta while we iron out the final issues. The expected release date is somewhere in the second week of May. In the meantime, feel free to test the beta. We are very happy about any feedback, especially in this period!

You can get the beta directly from Flathub. First you need to add the Flathub beta remote:

flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo

Then, you can install the application:
flatpak install flathub-beta se.sjoerd.Graphs

To run the beta version by default, the following command can be used:

sudo flatpak make-current se.sjoerd.Graphs beta

Note that sudo is necessary here, as it sets the current branch at the system level. To install on a per-user basis, the --user flag can be used in the previous commands. To switch back to the stable version, simply run the above command replacing beta with stable.

The beta branch should get updated somewhat regularly. If you don't feel like using the flathub-beta remote, or want the latest build, you can also get the source from the GitLab page and build it in GNOME Builder.

14 Apr 2026 10:33am GMT

Jakub Steiner: 120+ Icons and Counting

Back in 2019, we undertook a radical overhaul of how GNOME app icons work. The old Tango-era style required drawing up to seven separate sizes per icon and a truckload of detail. A task so demanding that only a handful of people could do it. The "new" style is geometric, colorful, but mainly achievable. Redesigning the system was just the first step. We needed to actually get better icons into the hands of app developers, as those should be in control of their brand identity. That's where app-icon-requests came in.

As of today, the project has received over a hundred icon requests. Each one represents a collaboration between a designer and a developer, and a small but visible improvement to the Linux desktop.

How It Works

Ideally, if a project needs a quick turnaround and direct control over the result, the best approach remains doing it in-house or commissioning a designer.

But if you're not in a rush, and aim to be a well designed GNOME app in particular, you can make use of the idle time of various GNOME designers. The process is simple. If you're building an app that follows the GNOME Human Interface Guidelines, you can open an icon request. A designer from the community picks up the issue, starts sketching ideas, and works with you until the icon is ready to ship. If your app is part of GNOME Circle or is aiming to join, you're far more likely to get a designer's attention quickly.

The sketching phase is where the real creative work happens. Finding the right metaphor for what an app does, expressed in a simple geometric shape. It's the part I enjoy most, and why I've been sharing my Sketch Friday process on Mastodon for over two years now (part 2). But the project isn't about one person's sketches. It's a team effort, and the more designers join, the faster the backlog shrinks.

Highlights

Here are a few of the icons that came through the pipeline. Each started as a GitLab issue and ended up as pixels on someone's desktop.

Alpaca Bazaar Field Monitor Dev Toolbox Exhibit Plots Gradia Millisecond Orca Flatseal Junction Carburetor

Alpaca, an AI chat client, went through several rounds of sketching to find just the right llama. Bazaar, an alternative to GNOME Software, took eight months and 16 comments to go from a shopping basket concept through a price tag to the final market stall. Millisecond, a system tuning tool for low-latency audio, needed several rounds to land on the right combination of stopwatch and waveform. Field Monitor shows how multiple iterations narrow down the concept. And Exhibit, the 3D model viewer, is one of my personal favorites.

You can browse all 127 completed icons to see the full range - from core GNOME apps to niche tools on Flathub.

Papers: From Sketch to Ship

To give a sense of what the process looks like up close, here's Papers - the GNOME document viewer. The challenge was finding an icon that says "documents" without being yet another generic file icon.

Papers concept sketch with magnifying glass Papers concept sketch with stacked papers Papers concept sketch with reading glasses Papers final icon

The early sketches explored different angles - a magnifying glass over stacked pages, reading glasses resting on a document. The final icon kept the reading glasses and the stack of colorful papers, giving it personality while staying true to what the app does. The whole thing played out in the GitLab issue, with the developer and designer going back and forth until both were happy.

While the new icon style is far easier to execute than the old high-detail GNOME icons, that doesn't mean every icon is quick. The hard part was never pushing pixels - it's nailing the metaphor. The icon needs to make sense to a new user at a glance, sit well next to dozens of other icons, and still feel like this app to the person who built it. Getting that right is a conversation between the designer's aesthetic judgment and the maintainer's sense of identity and purpose, and sometimes that conversation takes a while.

Bazaar is a good example.

Bazaar early concept - shopping basket Bazaar concept - price tag Bazaar concept - market stall Bazaar final icon

The app was already shipping with the price tag icon when Tobias Bernard - who reviews apps for GNOME Circle - identified its shortcomings and restarted the process. That kind of quality gate is easy to understate, but it's a big part of why GNOME apps look as consistent as they do. Tobias is also a prolific icon designer himself, frequently contributing icons to key projects across the ecosystem. In this case, the sketches went from a shopping basket through the price tag to a market stall with an awning - a proper bazaar. Sixteen comments and eight months later, the icon shipped.

Get Involved

There are currently 20 open icon requests waiting for a designer. Recent ones like Kotoba (a Japanese dictionary), Simba (a Samba manager), and Slop Finder haven't had much activity yet and could use a designer's attention.

If you're a designer, or want to become one, this is a great place to start contributing to Free software. The GNOME icon style was specifically designed to be approachable: bold shapes, a defined color palette, clear guidelines. Tools like Icon Preview and Icon Library make the workflow smooth. Pick a request, start with a pencil sketch on paper, and iterate from there. There's also a dedicated Matrix room #appicondesign:gnome.org where icon work is discussed - it's invite-only due to spam, but feel free to poke me in #gnome-design or #gnome for an invitation. If you're new to Matrix, the GNOME Handbook explains how to get set up.

If you're an app developer, don't despair shipping with a placeholder icon. Follow the HIG, open a request, and a designer will help you out. If you're targeting GNOME Circle, a proper icon is part of the deal anyway.

A good icon is one of those small things that makes an app feel real - finished, polished, worth installing. Now that we actually have a place to browse apps, an app icon is either the fastest way to grab attention or make people skip. If you've got some design chops and a few hours to spare, pick an issue and start sketching.

Need a Fast Track?

If you need a faster turnaround or just want to work with someone who's been helping out with GNOME's visual identity for as long as I can remember - Hylke Bons offers app icon design for open source projects through his studio, Planet Peanut. Hylke has been a core contributor to GNOME's icon work for well over a decade. You'll be in great hands.

His service has a great freebie for FOSS projects - funded by community sponsors. You get three sketches to choose from, a final SVG, and a symbolic variant, all following the GNOME icon guidelines. If your project uses an OSI-approved license and is intended to be distributed through Flathub, you're eligible. Consider sponsoring his work if you can - even a small amount helps keep the pipeline going.

Previously, Previously.

14 Apr 2026 12:00am GMT

13 Apr 2026

Adrien Plazas: Monster World IV: Disassembly and Code Analysis

This winter I was bored and needed something new, so I spent lots of my free time disassembling and analysing Monster World IV for the SEGA Mega Drive. More specifically, I looked at the 2008 Virtual Console revision of the game, which adds an English translation to the original 1994 release.

My long-term goal would be to fully disassemble and analyse the game, port it to C or Rust as I do, and then port it to the Game Boy Advance. I don't have a specific reason to do that, I just think it's a charming game from a dated but charming series, and I think the Monster World series would be a perfect fit on the Game Boy Advance. For a long time, I have also wanted to experiment with disassembling or decompiling code, understanding what doing so implies, understanding how retro computing systems work, and understanding the inner workings of a game I enjoy. Also, there is no publicly available disassembly of this game as far as I know.

As spring is coming, I sense my focus shifting to other projects, but I don't want this work to be lost forever, for everyone, and especially not for future me. Hence, I decided to publish what I have here, so I can come back to it later or so it can benefit someone else.

First, here is the Ghidra project archive. It's the first time I have used Ghidra and I'm certain I did plenty of things wrong; feedback is happily welcome! While I tried to rename things as my understanding of the code grew, it is still quite a mess of clashing naming conventions, and I'm certain I got plenty of things wrong.

Then, here is the Rust-written data extractor. It documents how some systems work, both as code and actual documentation. It mainly extracts and documents graphics and their compression methods, glyphs and their compression methods, character encodings, and dialog scripts. Similarly, I'm not a Rust expert, I did my best but I'm certain there is area for improvement, and everything was constantly changing anyway.

There is more information that isn't documented and is just floating in my head, such as how the entity system works, but I have yet to refine my understanding of it. The same goes for the optimizations allowed by coding in assembly, such as using specific registers for commonly used arguments. Hopefully I will come back to this project and complete it, at least as far as disassembling and documenting the game's code goes.

13 Apr 2026 10:00pm GMT

Felipe Borges: RHEL 10 (GNOME 47) Accessibility Conformance Report

Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.

Accessibility Conformance Reports basically document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report is a good look at how our stack handles various accessibility things from screen readers to keyboard navigation.

Getting a desktop environment to meet these requirements is a huge task and it's only possible because of the work done by our community in projects like: Orca, GTK, Libadwaita, Mutter, GNOME Shell, core apps, etc…

Kudos to everyone in the GNOME project that cares about improving accessibility. We all know there's a long way to go before desktop computing is fully accessible to everyone, but we are surely working on that.

If you're curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.

13 Apr 2026 10:00am GMT

Peter Hutterer: Huion devices in the desktop stack

This post attempts to explain how Huion tablet devices currently integrate into the desktop stack. I'll touch a bit on the Huion driver and the OpenTablet driver but primarily this explains the intended integration[1]. While I have access to some Huion devices and have seen reports from others, there are likely devices that are slightly different. Huion's vendor ID is also used by other devices (UCLogic and Gaomon) so this applies to those devices as well.

This post was written without AI support, so any errors are organic, artisanal, hand-crafted ones. Enjoy.

The graphics tablet stack

First, a short overview of the ideal graphics tablet stack in current desktops. At the bottom is the physical device which contains a significant amount of firmware. That device provides something resembling the HID protocol over the wire (or bluetooth) to the kernel. The kernel typically handles this via the generic HID drivers [2] and provides us with an /dev/input/event evdev node, ideally one for the pen (and any other tool) and one for the pad (the buttons/rings/wheels/dials on the physical tablet). libinput then interprets the data from these event nodes, passes them on to the compositor which then passes them via Wayland to the client. Here's a simplified illustration of this:

Unlike the X11 API, libinput's API works on both a per-tablet and a per-tool basis. In other words, when you plug in a tablet you get a libinput device that has a tablet tool capability and (optionally) a tablet pad capability, but the tool will only show up once you bring it into proximity. Wacom tools have sufficient identifiers that we can a) know what tool it is and b) get a unique serial number for that particular device. This means you can, if you wanted to, track your physical tool as it is used on multiple devices. No-one [3] does this, but it's possible. More interestingly, because of this you can also configure the tools individually: different pressure curves, etc. This was possible with the xf86-input-wacom driver in X, but only with some extra configuration; libinput provides/requires this as the default behaviour.

The most prominent case for this is the eraser which is present on virtually all pen-like tools though some will have an eraser at the tail end and others (the numerically vast majority) will have it hardcoded on one of the buttons. Changing to eraser mode will create a new tool (the eraser) and bring it into proximity - that eraser tool is logically separate from the pen tool and can thus be configured differently. [4]

Another effect of this per-tool behaviour is that we know exactly what a tool can do. If you use two different styli with different capabilities (e.g. one with tilt and 2 buttons, one without tilt and 3 buttons), they will have the right bits set. This requires libwacom - a library that tells us, simply: any tool with id 0x1234 has N buttons and capabilities A, B and C. libwacom is just a bunch of static text files with a C library wrapped around them. Without libwacom, we cannot know what any individual tool can do - the firmware and kernel always expose the capability set of all tools that can be used on any particular tablet. For example: Wacom's devices support an airbrush tool, so any tablet plugged in will announce the capabilities for an airbrush even though >99% of users will never use an airbrush [5].

The compositor then takes the libinput events, modifies them (e.g. pressure curve handling is done by the compositor) and passes them via the Wayland protocol to the client. That protocol is a pretty close mirror of the libinput API so it works mostly the same. From then on, the rest is up to the application/toolkit.
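To make "modifies them" concrete, here is a toy sketch of what pressure-curve handling in a compositor can amount to. The function name and the curve shape are invented for illustration; no real compositor is obliged to do it this way:

```python
def apply_pressure_curve(raw: float, control: float = 0.5) -> float:
    """Map raw pressure in [0, 1] through a simple quadratic Bezier-style curve.

    `control` bends the curve: values below 0.5 make the pen feel firmer
    (more force needed for the same output), values above 0.5 softer.
    Toy example only; real compositors use their own curve models.
    """
    t = min(max(raw, 0.0), 1.0)
    # Bernstein form with endpoints fixed at (0, 0) and (1, 1),
    # evaluated directly at t = raw for simplicity.
    return 2.0 * (1.0 - t) * t * control + t * t
```

With the default control value of 0.5 the mapping is the identity, so an unconfigured tool behaves exactly as the hardware reports.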

Notably, libinput is a hardware abstraction layer and conversion of hardware events into others is generally left to the compositor. IOW if you want a button to generate a key event, that's done either in the compositor or in the application/toolkit. But the current versions of libinput and the Wayland protocol do support all hardware features we're currently aware of: the various stylus types (including Wacom's lens cursor and mouse-like "puck" devices) and buttons, rings, wheels/dials, and touchstrips on pads. We even support the rather once-off Dell Canvas Totem device.

Huion devices

Huion's devices are HID compatible which means they "work" out of the box but they come in two different modes, let's call them firmware mode and tablet mode. Each tablet device pretends to be three HID devices on the wire and depending on the mode some of those devices won't send events.

Firmware mode

This is the default mode after plugging the device in. Two of the HID devices exposed look like a tablet stylus and a keyboard. The tablet stylus is usually correct (enough) to work OOTB with the generic kernel drivers, it exports the buttons, pressure, tilt, etc. The buttons and strips/wheels/dials on the tablet are configured to send key events. For example, the Inspiroy 2S I have sends b/i/e/Ctrl+S/space/Ctrl+Alt+z for the buttons and the roller wheel sends Ctrl-/Ctrl= depending on direction. The latter are often interpreted as zoom in/out so hooray, things work OOTB. Other Huion devices have similar bindings, there is quite some overlap but not all devices have exactly the same key assignments for each button. It does of course get a lot more interesting when you want a button to do something different - you need to remap the key event (ideally without messing up your key map lest you need to type an 'e' later).

The userspace part is effectively the same, so here's a simplified illustration of what happens in kernel land:

Any vendor-specific data is discarded by the kernel (but in this mode that HID device doesn't send events anyway).

Tablet mode

If you read a special USB string descriptor from the English language ID, the device switches into tablet mode. Once in tablet mode, the HID tablet stylus and keyboard devices stop sending events; instead, all events from the device are sent via the third HID device, which consists of a single vendor-specific report descriptor (read: 11 bytes of "here be magic"). Those bits represent the various features on the device, including the stylus features and all pad features as buttons/wheels/rings/strips (and not key events!). This is the mode we want in order to handle the tablet properly. The kernel's hid-uclogic driver switches supported devices into tablet mode; in userspace you can use e.g. huion-switcher. The device cannot be switched back to firmware mode, but will return to firmware mode once unplugged.

Once we have the device in tablet mode, we can get true tablet data and pass it on through our intended desktop stack. Alas, like ogres there are layers.

hid-uclogic and udev-hid-bpf

Historically, and thanks in large part to the now-discontinued DIGImend project, the hid-uclogic kernel driver did the switching into tablet mode, followed by report descriptor mangling (inside the kernel) so that the resulting devices can be handled by the generic HID drivers. The more modern approach we are pushing for is udev-hid-bpf, which is quite a bit easier to develop for. But both do effectively the same thing: they overlay the vendor-specific data with a normal HID report descriptor so that the incoming data can be handled by the generic HID kernel drivers. It looks like this:

Notable here: the stylus and keyboard devices may still exist and get event nodes, but they never send events[6]. The uclogic/bpf-enabled device, however, provides proper stylus/pad event nodes that can be handled by libinput (and thus the rest of the stack), with raw hardware data where buttons are buttons.

Challenges

Because in true manager speak we don't have problems, just challenges. And oh boy, we collect challenges as if we were organising the Olympics.

hid-uclogic and libinput

First, and probably most embarrassing: hid-uclogic exposes event nodes differently from what libinput expects. This is largely my fault for having focused on Wacom devices and internalized their behaviour over long years. The hid-uclogic driver exports the wheels and strips on separate event nodes, which libinput doesn't handle correctly (or at all). That would be fixable, but the compositors don't really expect this either, so there's a bit more work involved. The immediate effect is that those wheels/strips will likely be ignored and not work correctly. Buttons and pens work.

udev-hid-bpf and huion-switcher

hid-uclogic being a kernel driver has access to the underlying USB device. The HID-BPF hooks in the kernel currently do not, so we cannot switch the device into tablet mode from a BPF, we need it in tablet mode already. This means a userspace tool (read: huion-switcher) triggered via udev on plug-in and before the udev-hid-bpf udev rules trigger. Not a problem but it's one more moving piece that needs to be present (but boy, does this feel like the unix way...).
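For illustration, a hypothetical sketch of that udev wiring. The real rules ship with huion-switcher and differ in detail; the binary path here is an assumption.

```
# Hypothetical sketch - the real rules ship with huion-switcher and
# differ in detail. 256c is Huion's USB vendor ID; the binary path
# is an assumption.
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="256c", \
    RUN+="/usr/bin/huion-switcher"
```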

Huion's precious product IDs

By far the most annoying part about anything Huion is that until relatively recently (I don't have a date, but maybe until two years ago) all of Huion's devices shared the same few USB product IDs. For most of these devices we worked around it by matching on device names, but there were devices with the same product ID and device name. At some point libwacom, the kernel, and huion-switcher had to implement firmware ID extraction and matching so we could differentiate between devices with the same 256c:006d USB IDs. Luckily this seems to be in the past now, with modern devices getting a new PID for each individual device. But if you have an older device, expect difficulties and, worse, things potentially breaking after firmware updates when/if the firmware identification string changes. udev-hid-bpf (and uclogic) rely on the firmware strings to identify the device correctly.

edit: and of course less than 24h after posting this I process a bug report about two completely different new devices sharing one of the product IDs

udev-hid-bpf and hid-uclogic

Because we have a changeover from the hid-uclogic kernel driver to the udev-hid-bpf files, there are rough edges around "where does this device go". The general rule is now: if it's not a shared product ID (see above), it should go into udev-hid-bpf and not the uclogic driver. Easier to maintain, much more fire-and-forget. Devices already supported by hid-uclogic will remain there; we won't implement BPFs for those (older) devices, doubly so because of the aforementioned libinput difficulties with some hid-uclogic features.

Reverse engineering required

The newer tablets are always slightly different, so we basically need to reverse-engineer each tablet to get it working. That's common enough for any device, but we do rely on volunteers to do this. Mind you, the udev-hid-bpf approach is much simpler than doing it in the kernel; much of it is now copy-paste, and I've even had quite some success getting e.g. Claude Code to spit out a 90% correct BPF on its first try. The advantage of our approach of changing the report descriptor is that once it's done, it's done forever - there is no maintenance required, because it's a static array of bytes that doesn't ever change.

Plumbing support into userspace

Because we're abstracting the hardware, userspace needs to be fully plumbed. This was a problem last year, for example, when we (slowly) got support for relative wheels into libinput, then Wayland, then the compositors, then the toolkits, to make it available to the applications (of which I think none so far use the wheels). Depending on how fast your distribution moves, this may mean that support is months or years off even when everything has been implemented. On the plus side, these new features tend to only appear once every few years. Nonetheless, it's not hard to see why the "just send Ctrl+=, that'll do" approach is preferred by many users over "probably everything will work in 2027, I'm sure".

So, what stylus is this?

A currently unsolved problem is the lack of tool IDs on all Huion tools. We cannot know whether the tool used is the two-button + eraser PW600L, the three-buttons-one-of-which-is-an-eraser-button PW600S, or the two-button PW550 (I don't know if it's really two buttons or one button + an eraser button). We always had this problem with e.g. the now quite old Wacom Bamboo devices, but those pens all had the same functionality, so it just didn't matter. It would matter less if the various pens only worked on the device they ship with, but it's apparently quite possible to use a three-button pen on a tablet that shipped with a two-button pen OOTB. This is not difficult to solve (pretend to support all possible buttons on all tools), but it's frustrating because it removes a bunch of UI niceties that we've had for years - such as the pen settings only showing buttons that actually exist. Anyway, a problem currently in the "how I wish there was time" basket.

Summary

Overall, we are in an OK state, but not as good as we are for Wacom devices. The lack of tool IDs is the only thing not fixable without Huion changing the hardware[7]. The delay between a new device release and driver support really just depends on one motivated person reverse-engineering it (our BPFs can work across kernel versions, and you can literally download them from a successful CI pipeline). The hid-uclogic split should become less painful over time as the devices with shared USB product IDs age into landfill, and even more so if libinput gains support for the separate event nodes for wheels/strips/... (there is currently no plan, and I'm somewhat questioning whether anyone really cares). Other than that, our main feature gap is really the ability for much more flexible configuration of buttons/wheels/... in all compositors - having that would likely make the requirement for OpenTabletDriver and the Huion driver disappear.

OpenTabletDriver and Huion's own driver

The final topic here: what about the existing non-kernel drivers?

Both of these are userspace HID input drivers, which use the same approach: read from a /dev/hidraw node, create a uinput device, and pass events back. On the plus side, this means you can do literally anything the input subsystem supports, at the cost of a context switch for every input event. Again, a diagram of how this looks (mostly) below userspace:

Note how the kernel's HID devices are not exercised here at all because we parse the vendor report, create our own custom (separate) uinput device(s) and then basically re-implement the HID to evdev event mapping. This allows for great flexibility (and control, hence the vendor drivers are shipped this way) because any remapping can be done before you hit uinput. I don't immediately know whether OpenTabletDriver switches to firmware mode or maps the tablet mode but architecturally it doesn't make much difference.

From a security perspective: having a userspace driver means you either need to run that driver daemon as root, or (in the case of OpenTabletDriver at least) you need to allow uaccess to /dev/uinput, usually via udev rules. Once those are installed, anything can create uinput devices - that is a risk, but how big a risk is up for interpretation.

[1] As is so often the case, even the intended state does not necessarily spark joy
[2] Again, we're talking about the intended case here...
[3] fsvo "no-one"
[4] The xf86-input-wacom driver always initialises a separate eraser tool even if you never press that button
[5] For historical reasons those are also multiplexed so getting ABS_Z on a device has different meanings depending on the tool currently in proximity
[6] In our udev-hid-bpf BPFs we hide those devices so you really only get the correct event nodes, I'm not immediately sure what hid-uclogic does
[7] At which point Pandora will once again open the box because most of the stack is not yet ready for non-Wacom tool ids

13 Apr 2026 6:47am GMT

Jakub Steiner: release.gnome.org refactor

After successfully moving this blog to Zola, doubts got suppressed and I couldn't resist porting the GNOME Release Notes too.

The Proof

The blog port worked better than expected; fighting the GitHub Actions CI was where most enthusiasm was lost. The real test though was whether Zola could handle a site way more important than my little blog - one hosting the release notes for GNOME.

What Changed

The main work was porting the templates from Liquid to Tera, the same exercise as the blog. That included a structural change to shift releases from Jekyll pages to proper Zola posts. This enabled two things that weren't possible before.

The Payoff

The site now has a working RSS feed - years of broken promises finally fulfilled. The full archive from GNOME 2.x through 50 is available. And perhaps best of all: zero dependency management - just a single binary - supporting people who "just want to write a bit of markdown".

I'd say it's another success story and if I were a Jekyll project in the websites team space, I'd start to worry.

13 Apr 2026 12:00am GMT

11 Apr 2026

feedPlanet GNOME

Bilal Elmoussaoui: goblint: A Linter for GObject C Code

Over the past week, I've been building goblint, a linter specifically designed for GObject-based C codebases.

If you know Rust's clippy or Go's go vet, think of goblint as the same thing for GObject/GLib.

Why this exists

A large part of the Linux desktop stack (GTK, Mutter, Pango, NetworkManager) is built on GObject. These projects have evolved over decades and carry a lot of patterns that predate newer GLib helpers, are easy to misuse, or encode subtle lifecycle invariants that nothing verifies.

This leads to issues like missing dispose/finalize/constructed chain-ups (memory leaks or undefined behavior), incorrect property definitions, uninitialized GError* variables, or function declarations with no implementation.

These aren't theoretical. This GTK merge request recently fixed several missing chain-ups in example code.

Despite this, the C ecosystem lacks a linter that understands GObject semantics. goblint exists to close that gap.

What goblint checks

goblint ships with 35 rules across different categories.

23 out of 35 rules are auto-fixable. You should apply fixes one rule at a time to review the changes:

goblint --fix --only use_g_strcmp0
goblint --fix --only use_clear_functions

CI/CD Integration

goblint fits into existing pipelines.

GitHub Actions

- name: Run goblint
  run: goblint --format sarif > goblint.sarif

- name: Upload SARIF results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: goblint.sarif

Results show up in the Security tab under "Code scanning" and inline on pull requests.

GitLab CI

goblint:
  image: ghcr.io/bilelmoussaoui/goblint:latest
  script:
    - goblint --format sarif > goblint.sarif
  artifacts:
    reports:
      sast: goblint.sarif

Results appear inline in merge requests.

Configuration

Rules default to warn, and can be tuned via goblint.toml:

min_glib_version = "2.40"  # Auto-disable rules for newer versions

[rules]
g_param_spec_static_name_canonical = "error"  # Make critical
use_g_strcmp0 = "warn"  # Keep as warning
use_g_autoptr_inline_cleanup = "ignore"  # Disable

# Per-rule ignore patterns
missing_implementation = { level = "error", ignore = ["src/backends/**"] }

You can adopt it gradually without fixing everything at once.

Try it

# Run via container
podman run --rm -v "$PWD:/workspace:Z" ghcr.io/bilelmoussaoui/goblint:latest

# Install locally
cargo install --git https://github.com/bilelmoussaoui/goblint goblint

# Usage
goblint              # Lint current directory
goblint --fix        # Apply automatic fixes
goblint --list-rules # Inspect available rules

The project is early, so feedback is especially valuable (false positives, missing checks, workflow issues, etc.).


Note: The project was originally named "goblin" but was renamed to "goblint" to avoid conflicts with the existing goblin crate for parsing binary formats.

11 Apr 2026 12:00am GMT

10 Apr 2026

feedPlanet GNOME

This Week in GNOME: #244 Recognizing Hieroglyphs

Update on what happened across the GNOME project in the week from April 03 to April 10.

GNOME Core Apps and Libraries

Blueprint β†—

A markup language for app developers to create GTK user interfaces.

James Westman reports

blueprint-compiler is now available on PyPI. You can install it with pip install blueprint-compiler.

GNOME Circle Apps and Libraries

Hieroglyphic β†—

Find LaTeX symbols

FineFindus reports

Hieroglyphic 2.3 is out now. Thanks to the exciting work done by Bnyro, Hieroglyphic can now also recognize Typst symbols (a modern alternative to LaTeX). Hardware acceleration is now preferred when available, reducing power consumption.

Download the latest version from FlatHub.

Amberol β†—

Plays music, and nothing else.

Emmanuele Bassi says

Amberol 2026.1 is out, using the GNOME 50 runtime! This new release fixes a few issues when it comes to loading music, and has some small quality-of-life improvements in the UI: a more consistent visibility of the playlist panel when adding songs or searching; using the shortcuts dialog from libadwaita; and being able to open the file manager in the folder containing the current song. You can get Amberol on Flathub.

Third Party Projects

Alexander Vanhee says

A new version of Bazaar is out now. It features the ability to filter search results via a new popover and reworks the add-ons dialog to include a page that shows more information about a specific entry. If you try to open an add-on via the AppStream scheme, it will now display this page, which is useful when you want to redirect users to install an add-on from within your app.

Also, please take a look at the statistics dialog - it now features a cool gradient.

Check it out on Flathub

dabrain34 reports

GstPipelineStudio 0.5.1 is out now. It's a great pleasure to announce this new version, which allows dealing with DOT files directly. Check the project web page for more information, or the following blog post for more details about the release.

Anton Isaiev announces

RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)

Versions 0.10.9-0.10.14 landed with a solid round of usability, security, and performance work.

Staying connected got easier. If an SSH session drops unexpectedly, RustConn now polls the host and reconnects on its own as soon as it's back. Wake-on-LAN works the same way: send the magic packet and RustConn connects automatically once the machine boots. You can also right-click any connection to check if the host is online, and a new "Connect All" option opens every connection in a folder at once. For RDP there's a Mouse Jiggler that keeps idle sessions alive.

Terminal Activity Monitor is a new per-session feature that watches for output activity or silence, which is handy for long-running jobs. You get notifications as tab icons, toasts, and desktop alerts when the window is in the background.

Security got a lot of attention. RDP now defaults to trust-on-first-use certificate validation instead of blindly accepting everything. Credentials for Bitwarden and 1Password are no longer visible in the process list. VNC passwords are zeroized on drop. Export files are written with owner-only permissions. Dangerous custom arguments are blocked for both VNC and FreeRDP viewers.

Hoop.dev joins as the 11th Zero Trust provider. There's also a new custom SSH agent socket setting that lets Flatpak users connect through KeePassXC, Bitwarden, or GPG-based SSH agents, something the Flatpak sandbox previously made difficult.

Smoother on HiDPI and 4K. RDP frame rendering skips a 33 MB per-frame copy when the data is already in the right format. Highlight rules, search, and log sanitization patterns are compiled once instead of on every keystroke or terminal line.

GNOME HIG polish. Success notifications now use non-blocking toasts instead of modal dialogs. Sidebar context menus are native PopoverMenus with keyboard navigation and screen reader support. Translations completed for all 15 languages.

Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn

Phosh β†—

A pure wayland shell for mobile devices.

Guido announces

Phosh 0.54 is out:

There's now a notification when an app fails to start, the status bar can be extended via plugins, and the location quick toggle has a status page to set the maximum allowed accuracy.

On the compositor side we improved X11 support, making docked mode (aka convergence) with applications like emacs or ardour more fun to use.

The on screen keyboard Stevia now supports Japanese and Chinese input via UIM, has a new us+workman layout and automatic space handling can be disabled.

There's more - see the full details here.

Documentation

Emmanuele Bassi announces

The GNOME User documentation project has been ported to use Meson for its configuration, build, and installation. The User documentation contains the desktop help and the system administration guide, and gets published on the user help website as well as being available locally through the Help browser. The switch to Meson improved build times and moved the tests and validation into the build system. There's a whole new contribution guideline as well. If you want to help write the GNOME documentation, join us in the Docs room on Matrix!

Shell Extensions

Weather O'Clock β†—

Display the current weather inside the pill next to the clock.

Cleo Menezes Jr. reports

Weather O'Clock 50 released with fluffier animations: smooth fades between loading, weather and offline states; instant temperature updates; first-fetch spinner; offline indicator; GNOME Shell 45-50 support; and various bug fixes.

Get it on GNOME Extensions

Follow development

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

10 Apr 2026 12:00am GMT