21 Apr 2026
Planet GNOME
Michael Meeks: 2026-04-21 Tuesday
- Up early, off to HCL Engage in a football stadium for Richard's keynote, Jason's flashy Domino / AI demo, product management bits, and of course Collabora Online integration announced.
- Gave talk on COOL, handed out huge numbers of beavers, quick-start guides, stickers and more. Great to talk to lots of excited people engaged with Sovereign alternatives.
- Dinner in the evening, met more interesting people.
21 Apr 2026 9:00pm GMT
Jussi Pakkanen: CapyPDF is approaching feature sufficiency
In the past I have written many blog posts on implementing various PDF features in CapyPDF. Typically they explain the feature being implemented, how confusing the documentation is, what perverse undocumented quirks one has to work around to get things working and so on. To save the effort of me writing and you reading yet another post of the same type, let me just say that you can now use CapyPDF to generate PDF forms that have widgets like text fields and radio buttons.
What makes this post special is that forms and widget annotations were pretty much the last major missing PDF feature. Does that mean that CapyPDF now supports everything? No. Of course not. There is a whole bunch of subtlety to consider. Let's start with the fact that the PDF spec is massive, close to 1000 pages. Among its pages are features that are either not used or have been replaced by other features and deprecated.
The implementation principle of CapyPDF thus far has been "implement everything that needs special tracking, but only to the minimal level needed". This seems complicated but is in fact quite simple. As an example, the PDF spec defines over 20 different kinds of annotations. Specifying them requires tracking each one and writing out appropriate entries in the document metadata structures. However, once you have implemented that for one annotation type, the same code will work for all annotation types. Thus CapyPDF has only implemented a few of the most common annotations and the rest can be added later when someone actually needs them.
Many objects have lots of configuration options which are defined by adding keys and values to existing dictionaries. Again, only the most common ones are implemented; the rest are mostly a matter of adding functions to set those keys. There is no cross-referencing code or the like that needs to be updated. If nobody ever needs to specify the color with which a trim box should be drawn in a prepress preview application, there's no point in spending effort to make it happen.
The API should be mostly done, especially for drawing operations. The API for widgets probably still needs to change, especially since form submission actions are not done. I don't know if anything actually uses those, though. That work can be done based on user feedback.
21 Apr 2026 8:09pm GMT
Thibault Martin: TIL that Minikube mounts volumes as root
When I have to play with a container image I have never met before, I like to deploy it on a test cluster to poke and prod it. I usually did that on a k3s cluster, but recently I've moved to Minikube to bring my test cluster with me when I'm on the go.
Minikube is a tiny one-node Kubernetes cluster meant to run on development machines. It's useful to test Deployments or StatefulSets with images you are not familiar with and build proper helm charts from them.
It provides volumes of the hostPath type by default. The major caveat of hostPath volumes is that they're mounted as root by default.
I usually handle mismatched ownership with a securityContext like the following to instruct the container to run with a specific UID and GID, and to make the volume owned by a specific group.
Typically in a StatefulSet it looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: myapp
# [...]
spec:
# [...]
template:
# [...]
spec:
securityContext:
runAsUser: 10001
runAsGroup: 10001
fsGroup: 10001
containers:
- name: myapp
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: data
spec:
# [...]
In this configuration:
- Processes in the Pod myapp will run with UID 10001 and GID 10001.
- The /data directory mounted from the data volume will belong to group 10001 as well.
The securityContext usually solves the problem, but that's not how hostPath works. For hostPath volumes, the securityContext.fsGroup property is silently ignored.
[!success] Init Container to the Rescue!
The solution in this specific case is to use an initContainer running as root to chown the volume mounts to the unprivileged user.
In practice it will look like this.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: myapp
# [...]
spec:
# [...]
template:
# [...]
spec:
securityContext:
runAsUser: 10001
runAsGroup: 10001
fsGroup: 10001
initContainers:
- name: fix-perms
image: busybox
command:
["sh", "-c", "chown -R 10001:10001 /data"]
securityContext:
runAsUser: 0
volumeMounts:
- name: data
mountPath: /data
containers:
- name: myapp
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: data
spec:
# [...]
It took me a little while to figure it out, because I was used to testing my StatefulSets on k3s. K3s uses a local path provisioner, which gives me local volumes, not hostPath ones like Minikube.
In production I don't need the initContainer to fix permissions since I'm deploying this on an EKS cluster.
21 Apr 2026 7:00am GMT
20 Apr 2026
Planet GNOME
Andy Wingo: on hayek's bastards
After wrapping up a four-part series on free trade and the left, I thought I was done with neoliberalism. I had come to the conclusion that neoliberals were simply not serious people: instead of placing value in literally any human concern, they value only a network of trade, and as such, cannot say anything of value. They should be ignored in public debate; we can find economists elsewhere.
I based this conclusion partly on Quinn Slobodian's Globalists (2020), which describes Friedrich Hayek's fascination with cybernetics in the latter part of his life. But Hayek himself died before the birth of the WTO, NAFTA, all the institutions "we" fought in Seattle; we fought his ghost, living on past its time.
Well, like I say, I thought I was done, but then a copy of Slobodian's Hayek's Bastards (2025) arrived in the post. The book contests the narrative that the right-wing "populism" that we have seen in the last couple decades is an exogenous reaction to elite technocratic management under high neoliberalism, and that actually it proceeds from a faction of the neoliberal project. It's easy to infer a connection when we look at, say, Javier Milei's background and cohort, but Slobodian delicately unpicks the weft to expose the tensile fibers linking the core neoliberal institutions to the alt-right. Tonight's note is a book review of sorts.
after hayek
Let's back up a bit. Slobodian's argument in Globalists was that neoliberalism is not really about laissez-faire as such: it is a project to design institutions of international law to encase the world economy, to protect it from state power (democratic or otherwise) in any given country. It is paradoxical, because such an encasement requires state power, but it is what it is.
Hayek's Bastards is also about encasement, but instead of protection from the state, the economy was to be protected from debasement by the unworthy. (Also there is a chapter on goldbugs, but that's not what I want to talk about.)
The book identifies two major crises that push a faction of neoliberals to ally themselves with a culturally reactionary political program. The first is the civil rights movement of the 1960s and 1970s, together with decolonization. To put it crudely, whereas before, neoliberal economists could see themselves as acting in everyone's best interest, having more black people in the polity made some of these white economists feel like their project was being perverted.
Faced with this "crisis", at first the reactionary neoliberals reached out to race: the infant post-colonial nations were unfit to participate in the market because their peoples lacked the cultural advancement of the West. Already Globalists traced a line through Wilhelm Röpke's full-throated defense of apartheid, but the subjects of Hayek's Bastards (Lew Rockwell, Charles Murray, Murray Rothbard, et al) were more subtle: instead of directly stating that black people were unfit to govern, Murray et al argued that intelligence was the most important quality in a country's elite. It just so happened that they also argued, clothed in the language of evolutionary psychology and genetics, that black people are less intelligent than white people, and so it is natural that they not occupy these elite roles, that they be marginalized.
Before proceeding, three parentheses:
- Some words have a taste. Miscegenation tastes like the juice at the bottom of a garbage bag left out in the sun: to racists, because of the visceral horror they feel at the touch of the other, and to the rest of us, because of the revulsion the very idea provokes.
- I harbor an enmity to Sylvia Plath because of The Bell Curve. She bears no responsibility; her book was The Bell Jar. I know this in my head but my heart will not listen.
- I do not remember the context, but I remember a professor in university telling me that the notion of "race" is a social construction without biological basis; it was an offhand remark that was new to me then, and one that I still believe now. Let's make sure the kids hear the good word now too; stories don't tell themselves.
The second crisis of neoliberalism was the fall of the Berlin Wall: some wondered if the negative program of deregulation and removal of state intervention was missing a positive putty with which to re-encase the market. It's easy to stand up on a stage with a chainsaw, but without a constructive program, neoliberal wins in one administration are fragile in the next.
The reactionary faction of neoliberalism's turn to "family values" responds to this objective need, and dovetails with the reaction to the civil rights movement: to protect the market from the unworthy, neo-reactionaries worked to re-orient the discourse, and then state policy, away from "equality" and the idea that We Should Improve Society, Somewhat. Moldbug's neofeudalism is an excessive rhetorical joust, but one that has successfully moved the window of acceptable opinions. The "populism" of the AfD or the recent Alex Karp drivel is not a reaction, then, to neoliberalism, but a reaction by a faction of neoliberals to the void left after communism. (And when you get down to it, what is the difference between Moldbug nihilistically rehashing Murray's "black people are low-IQ" and Larry Summers' "countries in Africa are vastly UNDER-polluted"?)
thots
Slobodian shows remarkable stomach: his object of study is revolting. He has truly done the work.
For all that, Hayek's Bastards left me with a feeling of indigestion: why bother with the racism? Hayek himself had a thesis of sorts, woven through his long career, that there is none of us that is smarter than the market, and that in many (most?) cases, the state should curb its hubris, step back, and let the spice flow. Prices are a signal, axons firing in an ineffable network of value, sort of thing. This is a good thesis! I'm not saying it's right, but it's interesting, and I'm happy to engage with it and its partisans.
So why do Hayek's bastards reach to racism? My first thought is that they are simply not worthy: Charles Murray et al are intellectually lazy and moreover base. My lip curls to think about them in any serious way. I can't help but recall the DARVO tactic of abusers; neo-reactionaries blame "diversity" for "debasing the West", but it is their ignorant appeals to "race science" that are without basis.
Then I wonder: to what extent is this all an overworked intellectual retro-justification for something they wanted all along? When Mises rejoiced in the violent defeat of the 1927 strike, he was certainly not against state power per se; but was he for the market, or was he just against a notion of equality?
I can only conclude that things are confusing. "Mathematical" neoliberals exist, and don't need to lean on racism to support their arguments. There are also the alt-right/neo-reactionaries, who grew out from neoliberalism, not in opposition to it: no seasteader is a partisan of autarky. They go to the same conferences. It is a baffling situation.
While that is all the more reason to ignore them both intellectually, Slobodian's book shows that politically we on the left have our work cut out for us, both in deconstructing the new racism of the alt-right, and in advocating for a positive program of equality to take its place.
20 Apr 2026 9:35pm GMT
19 Apr 2026
Planet GNOME
Juan Pablo Ugarte: Casilda 1.2.4 Released!
I am very happy to announce a new version of Casilda!
A simple Wayland compositor widget for Gtk 4.
This release comes with several new features, bug fixes and extra polish that make it start to feel like a proper compositor.
It all started with a quick 1.2 release to port it to wlroots 0.19, because 0.18 was removed from Debian. While doing this on my new laptop I was able to reproduce a texture leak crash, which led to 1.2.1 and a fix in Gtk by Benjamin to support Vulkan drivers that return dmabufs with fewer fds than planes.
At this point I was invested, so I decided to fix the rest of the issues in the backlog…
Fractional scale
Casilda only supported integer scales, not fractional ones, so you could set your display scale to 200% but not 125%.
For reference, this is how gtk4-demo looks at 100%, or scale 1, where one application/logical pixel corresponds to one device/display pixel.
*** Keep in mind its preferable to see all the following images without fractional scale itself and at full size ***

Clients would render at the next integer scale if the application was started with a fractional scale set…

Or the client would render at scale 1 and look blurry if you switched from 1 to a fractional scale.

In both cases the input did not match the rendered window, making the application really broken.
So if the client application draws a 4 logical pixel border, it will be 5 pixels in the backing texture: 1 logical pixel corresponds to 1.25 device pixels. In order for things to look sharp, CasildaCompositor needs to make sure the coordinates it uses to position the client window land on the device pixel grid.
My first attempt was to do
((int)x * scale) / scale
but that still looked blurry, and that is because I assumed window coordinate 0,0 was the same as its backing surface coordinate 0,0. That is not the case, because I forgot about the window shadow. Luckily there is API to get the offset; then all you have to do is add the logical position of the compositor widget and you get the surface origin coordinates
gtk_native_get_surface_transform (GTK_NATIVE (root), &surface_origin_x, &surface_origin_y);
/* Add widget offset */
if (gtk_widget_compute_point (self, GTK_WIDGET (root), &GRAPHENE_POINT_INIT (0, 0), &out_point))
{
surface_origin_x += out_point.x;
surface_origin_y += out_point.y;
}
Once I had that I could finally calculate the right position
/* Snap logical coordinates to device pixel grid */
if (scale > 1.0)
{
x = floorf ((x + surface_origin_x) * scale) / scale - surface_origin_x;
y = floorf ((y + surface_origin_y) * scale) / scale - surface_origin_y;
}
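As a quick numeric check of the snapping formula, here is a standalone Python model of the calculation above (the coordinates, shadow offset and scale are made-up example values, not anything from Casilda itself):

```python
import math

def snap(x, surface_origin, scale):
    # Snap a logical coordinate so that, after scaling,
    # it lands exactly on the device pixel grid
    return math.floor((x + surface_origin) * scale) / scale - surface_origin

# Hypothetical values: logical x = 10.3, a 26 px shadow offset, scale 1.25
snapped = snap(10.3, 26.0, 1.25)
print(snapped)                  # snapped logical position
print((snapped + 26.0) * 1.25)  # device position: a whole pixel
```

Note that without the surface origin correction, snapping x alone could still leave the surface-relative position on a fractional device pixel, which is exactly the blurriness described above.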
And this is how it looks now with 1.25 fractional scale.

Keyboard layouts
Another missing feature was support for different keyboard layouts, so that switching layouts would work on clients too. Not really important for Cambalache, but definitely necessary for a generic compositor.

Popups positioners
Casilda now sends clients all the necessary information for positioning popups in a place where they do not get cut off by the edge of the display area, which is a nice thing to have.

Cursor shape protocol
Current versions of Gtk 4 require the cursor shape protocol on Wayland; otherwise they fall back to 32×32 pixel cursors, which might not match the size of your system cursors and look blurry with fractional scales.
With this protocol the client sends a cursor id instead of a pixel buffer when it wants to change the cursor.
This was really easy to implement, as all I had to do was call
gtk_widget_set_cursor_from_name (compositor, wlr_cursor_shape_v1_name (event->shape));
Greetings
As usual this would not be possible without the help of the community, special thanks to emersion, Matthias and Benjamin for their help and support.
Release Notes
- Add fractional scale support
- Add viewporter support
- Add support for cursor shape
- Forward keyboard layout changes to clients.
- Improve virtual size calculation
- Fix maximized/fullscreen auto resize on compositor size allocation
- Add support for popups reposition
- Fix GdkTexture leak
Fixed Issues
- #5 "Track keymap layout changes"
- #12 "Support for wlroots-0.19"
- #13 "Wrong cursor size on client windows"
- #14 "Support for fractional scaling snap to device grid"
- #19 Add support for popups reposition
- #16 Firefox GTK backdrop/shadow not scaled correctly
Where to get it?
Source code lives on GNOME gitlab here
git clone https://gitlab.gnome.org/jpu/casilda.git
Matrix channel
Have any question? come chat with us at #cambalache:gnome.org
Mastodon
Follow me on Mastodon @xjuan to get news related to Casilda and Cambalache development.
Happy coding!
19 Apr 2026 8:07pm GMT
18 Apr 2026
Planet GNOME
Matthias Klumpp: Hello old new “Projects” directory!
If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: "Projects"
Why?
With the recent 0.20 release of xdg-user-dirs we enabled the "Projects" directory by default. Support for this has already existed since 2007, but was never formally enabled. This closes a more than 11-year-old bug report that asked for this feature.
The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong in any of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design or even things like video editing projects, where project files would end up in the "Projects" directory, with output video being more at home in "Videos".
By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that do operate in a "project-centric" manner with mixed media a better default storage location. As of now, those tools either default to the home directory, or clutter the "Documents" folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.
This sucks, I don't like it!

As usual, you are in control and can modify your system's behavior. If you do not like the "Projects" folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
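The user-dirs.dirs file uses simple shell-style assignments, so applications without dedicated library support can read it directly. As an illustration, here is a minimal Python sketch of how a program could resolve the Projects location (the parsing logic and sample content are my own assumptions, not the xdg-user-dirs implementation):

```python
import re

def parse_user_dirs(text, home):
    # Lines in user-dirs.dirs look like: XDG_PROJECTS_DIR="$HOME/Projects"
    dirs = {}
    for line in text.splitlines():
        m = re.match(r'(XDG_[A-Z]+_DIR)="(.*)"$', line.strip())
        if m:
            key, value = m.groups()
            dirs[key] = value.replace("$HOME", home)
    return dirs

sample = '''
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_PROJECTS_DIR="$HOME/Projects"
'''
print(parse_user_dirs(sample, "/home/alice")["XDG_PROJECTS_DIR"])
# → /home/alice/Projects
```

A real implementation would also handle the "deleted folder" case described above, where the entry points back at $HOME itself.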
What else is new?
Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the "arbitrary code execution from unsanitized input" bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.
Thanks to everyone who contributed to this release!
18 Apr 2026 8:06am GMT
17 Apr 2026
Planet GNOME
Allan Day: GNOME Foundation Update, 2026-04-17
Welcome to another update about everything that's been happening at the GNOME Foundation. It's been four weeks since my last post, due to a vacation and public holidays, so there's lots to cover. This period included a major announcement, but there's also been a lot of other notable work behind the scenes.
Fellowship & Fundraising
The really big news from the last four weeks was the launch of our new Fellowship program. This is something that the Board has been discussing for quite some time, so we were thrilled to be able to make the program a reality. We are optimistic that it will make a significant difference to the GNOME project.
If you didn't see it already, check out the announcement for details. Also, if you want to apply to be our first Fellow, you have just three days until the application deadline on 20th April!
donate.gnome.org has been a great success for the GNOME Foundation, and it is only through the support of our existing donors that the Fellowship was possible. Despite these amazing contributions, the GNOME Foundation needs to grow our donations if we are going to be able to support future Fellowship rounds while simultaneously sustaining the organisation.
To this end, there's an effort underway to build up our marketing and fundraising capacity. This is primarily taking place in the GNOME Engagement Team, and we would love help from the community to boost our outbound comms. If you are interested, please join the Engagement space and look out for announcements.
Also, if you haven't already, and are able to do so: please donate!
Conferences
We have two major events coming up, with Linux App Summit in May and GUADEC in July, so right now is a busy time for conferences.
The schedules for both of these upcoming events are currently being worked on, and arrangements for catering, photographers, and audio visual services are all in the process of being finalized.
The Travel Committee has also been busy handling GUADEC travel requests, and has sent out the first batch of approvals. There are some budget pressures right now due to rising flight prices, but budget has been put aside for more GUADEC travel, so please apply if you want to attend and need support.
April 2026 Board Meeting
This week was the Board's regular monthly meeting for April. Highlights from the meeting included:
- I gave a general report on the Foundation's activities, and we discussed progress on programs and initiatives, including the new Fellowship program and fundraising.
- Deepa gave a finance report for October to December 2025.
- Andrea Veri joined us to give an update on the Membership & Elections Committee, as well as the Infrastructure team. Andrea has been doing this work for a long time and has been instrumental in helping to keep the Foundation running, so this was a great opportunity to thank him for his work.
- One key takeaway from this month's discussion was the very high level of support that GNOME receives from our infrastructure partners, particularly AWS and also Fastly. We are hugely appreciative of this support, which represents a major financial contribution to GNOME, and want to make sure that these partners get positive exposure from us and feel appreciated.
- We reviewed the timeline for the upcoming 2026 board elections, which we are tweaking a little this year, in order to ensure that there is opportunity to discuss every candidacy, and to reduce some unnecessary delay in the final result.
Infrastructure
As usual, plenty has been happening on the infrastructure side over the past month. This has included:
- Ongoing work to tune our Fastly configuration and managing the resource usage of GNOME's infra.
- Deployment of a LiberaForms instance on GNOME infrastructure. This is hooked up to GNOME's SSO, so is available to anyone with an account who wants to use it - just head over to forms.gnome.org to give it a try.
- Changes to the Foundation's internal email setup, to allow easier management of the generic contact email addresses, as well as better organisation of the role-based email addresses that we have.
- New translation support for donate.gnome.org.
- Ongoing work in Flathub, around OAuth and flat-manager.
Admin & Finance
On the accounting side, the team has been busy catching up on regular work that got put to one side during last month's audit. There were some significant delays to our accounting processes as a result of this, but we are now almost up to date.
Reorganisation of many of our finance processes has also continued over the past four weeks. Progress has included a new structure and cadence for our internal accounting calls, continued configuration of our new payments platform, and new forms for handling reimbursement requests.
Finally, we have officially kicked off the process of migrating to our new physical mail service. Work on this is ongoing and will take some time to complete. Our new address is on the website, if anyone needs it.
That's it for this report! Thanks for reading, and feel free to use the comments if you have questions!
17 Apr 2026 3:22pm GMT
Andrea Veri: GNOME GitLab Git traffic caching
Table of Contents
- Table of Contents
- Introduction
- The problem
- Architecture overview
- The VCL layer
- The POST-to-GET conversion
- Protecting private repositories
- The Lua layer
- Debugging the rollout
- How we got here
- Conclusions
Introduction
One of the most visible signs that GNOME's infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab's webservice pods, generating redundant load for work that was essentially identical.
GNOME's infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.
This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly's CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time. The design went through several iterations - this post presents the final architecture first, then walks through how we got here for readers interested in the evolution.
The problem
The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.
The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.
For a fresh clone the body contains only want lines - one per ref the client is requesting:
0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
0032want 93e944c9f728a4b9da506e622592e4e3688a805c
0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
...
For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:
00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
0000
0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
...
The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses - exactly the property a cache can help with.
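The pkt-line framing above is easy to reproduce. This sketch (an illustration, not GNOME's production code) builds a tiny fetch body and shows the property the cache relies on: byte-identical negotiations hash to the same key.

```python
import hashlib

def pkt_line(payload: bytes) -> bytes:
    # pkt-line framing: 4 hex digits giving the total length
    # (including the 4-byte prefix itself), followed by the payload
    return b"%04x" % (len(payload) + 4) + payload

# A miniature fetch negotiation body: a want, a flush-pkt, a have
body = b"".join([
    pkt_line(b"want 51a117587524cbdd59e43567e6cbd5a76e6a39ff\n"),
    b"0000",  # flush-pkt separating the want section from the haves
    pkt_line(b"have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3\n"),
])

# Identical bodies always produce the same SHA256, hence the same cache key
key = "v3:" + hashlib.sha256(body).hexdigest()
print(key)
```

A "want" payload of 46 bytes (4 + 1 + 40 + newline) plus the 4-byte prefix gives 50, i.e. the 0032 prefix seen in the captures above.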
Architecture overview
The current architecture has two components:
- Fastly as the user-facing CDN for gitlab.gnome.org, with custom VCL that intercepts git-upload-pack traffic, hashes the request body, converts the POST to a GET, and caches the response at edge POPs worldwide
- OpenResty (Nginx + LuaJIT) running as the origin server, with a minimal Lua script that restores the original POST and signals cacheability back to Fastly
flowchart TD
    client["Git client / CI runner"]
    edge["Fastly Edge POP (nearest)"]
    shield["Fastly Shield POP (IAD)"]
    nginx["OpenResty Nginx (origin)"]
    lua["Lua: git_upload_pack.lua"]
    gitlab["GitLab webservice"]
    client -- "POST /git-upload-pack" --> edge
    edge -- "authenticated? → return(pass)" --> nginx
    edge -- "HIT → serve from edge" --> client
    edge -- "MISS → forward to shield" --> shield
    shield -- "HIT → return to edge (edge caches)" --> edge
    shield -- "MISS → fetch from origin" --> nginx
    nginx --> lua
    lua -- "restore POST, proxy" --> gitlab
    gitlab -- "packfile response" --> nginx
    nginx -- "X-Git-Cacheable: 1" --> shield
The request flow:
- The POST /git-upload-pack arrives at the nearest Fastly edge POP.
- VCL checks for authentication headers (Authorization, PRIVATE-TOKEN, Job-Token). If present, the request is sent directly to origin with credentials intact - private repos and CI runner clones never enter the cache path.
- VCL checks the body: if Content-Length exceeds 8 KB (the limit of what Fastly can read from req.body), or the body does not contain command=fetch, the request is passed through uncached.
- For cacheable requests, VCL hashes the body with SHA256 to build the cache key, base64-encodes the body into X-Git-Original-Body, converts the request to GET, and does return(lookup).
- On a cache hit at the edge, the packfile is served immediately.
- On a miss, the request routes to the IAD shield POP. If the shield has it cached, it returns the object and the edge caches it locally.
- On a shield miss, the request reaches Nginx at the origin. Lua detects X-Git-Original-Body, restores the POST body, and proxies to GitLab.
- The response flows back through the shield (which caches it) and the edge (which also caches it). Subsequent requests from the same region are served directly from the edge.
The VCL layer
The vcl_recv snippet runs at priority 9, before the existing enable_segmented_caching snippet at priority 10 which would otherwise return(pass) for non-asset URLs:
# Snippet git-cache-vcl-recv : 9
# Edge: convert POST to GET, hash body, encode body in header
if (req.url ~ "/git-upload-pack$" && req.request == "POST") {
# Authenticated requests bypass cache entirely (CI runners, private repos)
if (req.http.Authorization || req.http.PRIVATE-TOKEN || req.http.Job-Token) {
return(pass);
}
if (std.atoi(req.http.Content-Length) > 8192) {
return(pass);
}
if (req.body !~ "command=fetch") {
return(pass);
}
set req.http.X-Git-Cache-Key = "v3:" digest.hash_sha256(req.body);
set req.http.X-Git-Original-Body = digest.base64(req.body);
set req.request = "GET";
set req.backend = F_Host_1;
if (req.restarts == 0) {
set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
}
return(lookup);
}
# Shield: request already converted to GET by the edge
if (req.http.X-Git-Cache-Key) {
set req.backend = F_Host_1;
return(lookup);
}
The auth check at the top is the first guard. GitLab CI runners authenticate with Authorization: Basic <gitlab-ci-token:TOKEN>, API clients use PRIVATE-TOKEN or Job-Token. Any request carrying these headers is sent straight to origin with credentials intact - it never enters the cache path, never has its body encoded, and never touches the Lua script. This is how private repositories are protected (see Protecting private repositories).
The command=fetch filter means only Git protocol v2 fetch commands are cached. The ls-refs command is excluded because its request body is essentially static: every ls-refs request hashes to the same cache key, so caching it with a long TTL would serve stale ref listings after a push. Fetch bodies, by contrast, encode exactly the SHAs the client wants and already has, making them safe to cache indefinitely.
The v3: prefix is a cache version string. Bumping it invalidates all existing cache entries without touching Fastly's purge API.
The second if block handles the shield. When a cache miss at the edge forwards the request to the shield POP, the shield runs vcl_recv again. At that point the request is already a GET (the edge converted it), so the first block's req.request == "POST" check will not match. Without the second block, the request would fall through to the enable_segmented_caching snippet, which returns pass for any URL that is not an artifact or archive - effectively preventing the shield from ever caching git traffic.
The vcl_hash snippet overrides the default URL-based hash when a cache key is present:
# Snippet git-cache-vcl-hash : 10
if (req.http.X-Git-Cache-Key) {
set req.hash += req.http.X-Git-Cache-Key;
return(hash);
}
The vcl_fetch snippet caches 200 responses that carry the X-Git-Cacheable signal from Nginx:
# Snippet git-cache-vcl-fetch : 100
if (req.http.X-Git-Cache-Key) {
if (beresp.status == 200 && beresp.http.X-Git-Cacheable == "1") {
set beresp.http.Surrogate-Key = "git-cache " regsub(req.url.path, "/git-upload-pack$", "");
set beresp.cacheable = true;
set beresp.ttl = 30d;
set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;
unset beresp.http.Cache-Control;
unset beresp.http.Pragma;
unset beresp.http.Expires;
unset beresp.http.Set-Cookie;
return(deliver);
}
set beresp.ttl = 0s;
set beresp.cacheable = false;
return(deliver);
}
The Surrogate-Key line tags each cached object with both a global git-cache key and the repository path. This enables targeted purging - a single repository's cache can be flushed with fastly purge --key "/GNOME/glib", or all git cache at once with fastly purge --key "git-cache".
The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want/have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME's GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.
The X-Git-Cacheable header is intentionally not unset in vcl_fetch. This is important for the shielding architecture: when the shield caches the object, the stored headers include X-Git-Cacheable: 1. When the edge later fetches this object from the shield, the edge's own vcl_fetch sees the header and knows it is safe to cache locally. If vcl_fetch stripped the header, the edge would never cache - every request would be a local miss that has to travel back to the shield.
The cleanup happens in vcl_deliver, which runs last before the response reaches the client:
# Snippet git-cache-vcl-deliver : 100
if (req.http.X-Git-Cache-Key) {
set resp.http.X-Git-Cache-Status = if(fastly_info.state ~ "HIT(?:-|\z)", "HIT", "MISS");
unset resp.http.X-Git-Original-Body;
if (!req.http.Fastly-FF) {
unset resp.http.X-Git-Cacheable;
unset resp.http.X-Git-Cache-Key;
}
}
The Fastly-FF check distinguishes between inter-POP traffic (shield-to-edge) and the final client response. Fastly-FF is set when the request comes from another Fastly node. On the shield, where the request came from the edge, internal headers like X-Git-Cacheable and X-Git-Cache-Key are preserved - the edge's vcl_fetch needs them. On the edge, where the request came from the actual client, those headers are stripped from the final response. Only X-Git-Cache-Status is exposed to clients for observability.
The POST-to-GET conversion
This is probably the most unusual part of the design. Fastly's consistent hashing and shield routing only work for GET requests; POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache - by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch - but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result.
By converting the POST to a GET in VCL, encoding the body in a header (X-Git-Original-Body), and using a body-derived SHA256 as the cache key, we get consistent hashing and shield-level request collapsing for free. The VCL uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.
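In Python terms, the conversion is just a lossless re-encoding of the request (a purely illustrative sketch with hypothetical helper names; the real work happens in the VCL and Lua shown in this post):

```python
import base64
import hashlib

def edge_convert(body):
    """Edge side: the POST becomes a GET whose identity lives in two headers."""
    return {
        "method": "GET",  # now eligible for consistent hashing and shielding
        "X-Git-Cache-Key": "v3:" + hashlib.sha256(body).hexdigest(),
        "X-Git-Original-Body": base64.b64encode(body).decode("ascii"),
    }

def origin_restore(request):
    """Origin side: what the Lua script does before proxying to GitLab."""
    body = base64.b64decode(request["X-Git-Original-Body"])
    return "POST", body  # GitLab never sees the GET

body = b"0012command=fetch0001000dwant abc\n"
method, restored = origin_restore(edge_convert(body))
assert (method, restored) == ("POST", body)  # the round trip is lossless
```

Because the cache key is derived only from the body, the method and URL rewrite cannot change which object a request maps to.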
Fastly's shield feature routes cache misses through a designated shield node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times.
Protecting private repositories
Private repository traffic must never enter the cache - that would mean sending authenticated git content through a third-party cache. The VCL handles this with a single check at the top of vcl_recv, before any body processing:
if (req.http.Authorization || req.http.PRIVATE-TOKEN || req.http.Job-Token) {
return(pass);
}
Authenticated requests (CI runners, API clients, private repo clones) are sent directly to GitLab with credentials intact, completely bypassing the cache path. Unauthenticated requests are, by definition, accessing public repositories - the only kind that should be cached.
This approach follows the same trust model GitLab itself uses: credentials are the boundary between private and public. It requires no external state, cannot drift out of sync, and has no failure modes beyond Fastly itself.
An earlier iteration used a Valkey (Redis) denylist to track private repositories and a webhook service to keep it synchronized with GitLab - see How we got here for why that was replaced.
The Lua layer
With the VCL handling body hashing, the POST-to-GET conversion, and the auth bypass for private repos, the Lua script's role is reduced to the bare minimum. Every request that reaches Lua is guaranteed to be an unauthenticated clone of a public repository - the VCL already filtered out everything else. The script's only responsibilities are:
- Detect that the request arrived from Fastly with an encoded body (the X-Git-Original-Body header).
- Decode and restore the original POST.
- Signal back to Fastly that the response is safe to cache.
local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
if not encoded_body then
return
end
local body = ngx.decode_base64(encoded_body)
ngx.req.read_body()
ngx.req.set_method(ngx.HTTP_POST)
ngx.req.set_body_data(body)
ngx.req.set_header("Content-Length", tostring(#body))
ngx.req.clear_header("X-Git-Original-Body")
ngx.req.clear_header("Authorization")
ngx.ctx.git_cacheable = true
The ngx.ctx.git_cacheable flag is picked up by the header_filter_by_lua_block in the Nginx configuration, which translates it into the X-Git-Cacheable: 1 response header that vcl_fetch checks:
location ~ /git-upload-pack$ {
client_body_buffer_size 5m;
client_max_body_size 5m;
access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;
header_filter_by_lua_block {
if ngx.ctx.git_cacheable then
ngx.header["X-Git-Cacheable"] = "1"
end
}
proxy_pass http://gitlab-webservice;
...
}
Debugging the rollout
The rollout surfaced a few issues worth documenting for anyone building a similar setup on Fastly.
Shielding introduces a second vcl_recv execution. When the edge forwards a cache miss to the shield, the shield runs the entire VCL pipeline from scratch. The POST-to-GET conversion in vcl_recv checks for req.request == "POST", but on the shield the request is already a GET. Without the fallback if (req.http.X-Git-Cache-Key) block, the shield's vcl_recv would fall through to the segmented caching snippet and return(pass) - making the shield unable to cache anything.
Response headers must survive the shield-to-edge hop. vcl_fetch and vcl_deliver both run on each node independently. If vcl_fetch on the shield strips a header after caching the object, the stored object will not have that header. When the edge fetches from the shield, the edge's vcl_fetch will not see it. The solution is to only strip internal headers in vcl_deliver on the final client response, using Fastly-FF to distinguish inter-POP traffic from client traffic.
Fastly's req.body is limited to 8 KB. VCL can only inspect the first 8192 bytes of a request body. For the vast majority of git fetch negotiations - especially shallow clones and CI pipelines fetching recent commits - the body is well under this limit. Requests with larger bodies (deep fetches with many have lines) fall through to return(pass) and are handled directly by GitLab without caching. This is an acceptable tradeoff: those large-body requests are typically unique negotiations that would not benefit from caching anyway.
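To put the limit in perspective, a back-of-the-envelope calculation (my own rough numbers, assuming one 40-hex-character object id per want/have pkt-line):

```python
# A protocol v2 want/have pkt-line: a 4-byte hex length prefix, plus
# "want " or "have ", 40 hex characters of object id, and a newline.
pkt_line_len = 4 + len("want ") + 40 + 1  # = 50 bytes per line

LIMIT = 8192  # what Fastly's req.body can see

# Ignoring the small fixed overhead (command=fetch, capabilities,
# delimiters), roughly this many want/have lines fit under the limit:
max_lines = LIMIT // pkt_line_len
assert max_lines == 163
```

A shallow CI clone negotiates only a handful of such lines, so it sits comfortably under the limit; only deep fetches with long have lists overflow it.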
Git protocol v1 clients are not cached. The VCL filters on command=fetch, which is a Git protocol v2 construct. Protocol v1 uses a different body format (want/have lines without the command= prefix). Since protocol v2 has been the default since git 2.26 (March 2020), the vast majority of traffic benefits from caching. Protocol v1 clients still work correctly - they simply bypass the cache.
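The difference is visible in the request bodies themselves. A sketch of the two shapes (pkt_line is a hypothetical helper implementing git's pkt-line framing, and the bodies are simplified examples, not byte-exact captures):

```python
def pkt_line(data):
    """Encode one git pkt-line: 4 hex digits of total length, then payload."""
    return b"%04x" % (len(data) + 4) + data

# Protocol v2 opens with an explicit command pkt-line...
v2_body = pkt_line(b"command=fetch") + b"0001" + pkt_line(b"want " + b"a" * 40 + b"\n")
# ...while protocol v1 goes straight to want lines.
v1_body = pkt_line(b"want " + b"a" * 40 + b"\n") + b"0000" + pkt_line(b"done\n")

assert b"command=fetch" in v2_body      # matched by the VCL filter: cached
assert b"command=fetch" not in v1_body  # falls through to return(pass)
```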
Authenticated requests must bypass cache before body processing. The initial edge VCL converted all git-upload-pack POSTs to cacheable GETs, including authenticated requests from CI runners. The Lua denylist was supposed to catch private repos, but CI runners authenticate with Authorization: Basic <gitlab-ci-token:TOKEN> - a header the Lua script unconditionally stripped for any repo not on the denylist. This broke private repository CI builds with 401 errors. The fix was adding the auth header check as the very first guard in vcl_recv, before any body hashing or request conversion. This also made the entire denylist infrastructure unnecessary, since the auth boundary naturally separates private from public traffic.
How we got here
The current architecture is the result of three iterations. The sections above describe the final design; this section documents the path we took to get there.
Iteration 1: Separate CDN service with Lua-driven caching
The first version used a separate Fastly CDN service (cdn.gitlab.gnome.org) as the cache layer, with Nginx doing most of the heavy lifting in Lua:
flowchart TD
    client["Git client / CI runner"]
    gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"]
    nginx["OpenResty Nginx"]
    lua["Lua: git_upload_pack.lua"]
    cdn_origin["/cdn-origin internal location"]
    fastly_cdn["Fastly CDN"]
    origin["gitlab.gnome.org via its origin (second pass)"]
    gitlab["GitLab webservice"]
    valkey["Valkey denylist"]
    webhook["gitlab-git-cache-webhook"]
    gitlab_events["GitLab project events"]
    client --> gitlab_gnome
    gitlab_gnome --> nginx
    nginx --> lua
    lua -- "check denylist" --> valkey
    lua -- "private repo: BYPASS" --> gitlab
    lua -- "public/internal: internal redirect" --> cdn_origin
    cdn_origin --> fastly_cdn
    fastly_cdn -- "HIT" --> cdn_origin
    fastly_cdn -- "MISS: origin fetch" --> origin
    origin --> gitlab
    gitlab_events --> webhook
    webhook -- "SET/DEL git:deny:" --> valkey
In this design, the Lua script did everything: read the POST body, SHA256-hash it to build a cache key, check a Valkey denylist to exclude private repositories, convert the POST to a GET, encode the body in a header, and perform an internal redirect to a /cdn-origin location that proxied to the CDN. On a cache miss, the CDN would fetch from gitlab.gnome.org directly (the "second pass"), where Lua would detect the origin fetch, decode the body, restore the POST, and proxy to GitLab.
Private repositories were protected by a denylist stored in Valkey. A small FastAPI webhook service (gitlab-git-cache-webhook) listened for GitLab system hooks on project_create and project_update events, maintaining git:deny:<path> keys for private repositories (visibility level 0). Internal repositories (level 10) were treated the same as public (level 20) since they are accessible to any authenticated user on the instance.
The Lua script for this design was substantially more complex:
local resty_sha256 = require("resty.sha256")
local resty_str = require("resty.string")
local redis_helper = require("redis_helper")
local redis_host = os.getenv("REDIS_HOST") or "localhost"
local redis_port = os.getenv("REDIS_PORT") or "6379"
-- Second pass: request arriving from CDN origin fetch.
if ngx.req.get_headers()["X-Git-Cache-Internal"] then
local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
if encoded_body then
ngx.req.read_body()
local body = ngx.decode_base64(encoded_body)
ngx.req.set_method(ngx.HTTP_POST)
ngx.req.set_body_data(body)
ngx.req.set_header("Content-Length", tostring(#body))
ngx.req.clear_header("X-Git-Original-Body")
end
return
end
And on the first pass, it handled hashing, denylist checks, and the CDN redirect:
if not body:find("command=fetch", 1, true) then
ngx.header["X-Git-Cache-Status"] = "BYPASS"
return
end
local sha256 = resty_sha256:new()
sha256:update(body)
local body_hash = resty_str.to_hex(sha256:final())
local cache_key = "v2:" .. repo_path .. ":" .. body_hash
local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)
if denied then return end
ngx.req.clear_header("Authorization")
ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
ngx.req.set_method(ngx.HTTP_GET)
ngx.req.set_body_data("")
return ngx.exec("/cdn-origin" .. uri)
The CDN's VCL was relatively simple - it used X-Git-Cache-Key for the hash, routed through a shield, and cached 200 responses for 30 days.
This architecture worked, but it had a significant limitation.
Iteration 2: Moving caching to the edge
The problem with the separate CDN service was that Nginx runs in AWS us-east-1. From Fastly's perspective, the only client of the CDN service was that single Nginx instance in Virginia. Every request entered the CDN through the IAD (Ashburn, Virginia) POP, which meant the CDN's edge POPs around the world were never used. The shield node in IAD cached the objects, but the edge POPs never got a chance to build up their own local caches.
A CI runner in Europe would have its request travel from a European Fastly POP to IAD (the gitlab.gnome.org service), then to Nginx in AWS, then back to Fastly IAD (the CDN service), and then all the way back. Every single request for a cached object still had to cross the Atlantic twice.
The fix was to eliminate the separate CDN service entirely and move all the caching logic into the gitlab.gnome.org Fastly service itself. The key insight was that the POST-to-GET conversion and body hashing could happen in Fastly's VCL rather than in Lua - Fastly provides digest.hash_sha256() and digest.base64() functions that operate directly on req.body. By doing the conversion at the CDN edge, every POP in the network became a potential cache node for git traffic.
This iteration still used the Valkey denylist and webhook to protect private repositories, with Lua checking the denylist and signaling cacheability via X-Git-Cacheable.
Iteration 3: VCL auth bypass, denylist removed
The denylist approach had a fundamental flaw that surfaced once all git-upload-pack traffic flowed through the VCL cache path: authenticated requests from CI runners cloning private repositories were being converted to cacheable GETs. The Lua script would strip their Authorization header (if the repo was not on the denylist, or if the denylist was incomplete), and GitLab would reject the request with a 401.
The fix was adding the auth header check as the very first guard in vcl_recv - three lines of VCL that made the entire denylist infrastructure unnecessary. Authenticated requests go straight to origin. Unauthenticated requests are, by definition, public. The auth header is the correct boundary, and it requires no external state.
With this change, the Valkey instance, the redis_helper.lua module, and the gitlab-git-cache-webhook service were all decommissioned. The Lua script went from ~50 lines with Redis dependencies to 12 lines with no external dependencies.
Conclusions
The system has been running in production since April 2026. Packfiles are cached at Fastly edge POPs worldwide - a CI runner in Europe gets a cache hit served from a European POP rather than making a round trip to the US East coast. The Lua script is twelve lines. The only moving parts are Fastly's VCL and Nginx.
The cache hit rate on fetch traffic has been consistently high (over 80%). If something goes wrong with the cache layer, requests fall through to GitLab directly - the same path they took before caching existed. There is no failure mode where caching breaks git operations. This also means we don't redirect any traffic to github.com anymore.
That should be all for today, stay tuned!
17 Apr 2026 2:00pm GMT
Jussi Pakkanen: Multi merge sort, or when optimizations aren't
In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in libstdc++. The question then becomes: could it be made even faster? If you go through the relevant literature, one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue.
This seems like a slam dunk for performance.
- Doubling the number of arrays to merge at a time halves the number of total passes needed
- The priority queue has a known static maximum size, so it can be put on the stack, which is guaranteed to be in the cache all the time
- Processing an element takes only log(#lists) comparisons
Implementing multimerge was conceptually straightforward but getting all the gritty details right took a fair bit of time. Once I got it working the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster but not noticeably so.
Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that, the measurements told me very little.
The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element: one to determine which of the two lists has the smaller element at the front, and one to see whether removing the element exhausted the list. The former is basically random and the latter is always false except when the last element is processed. This amounts to 0.5 mispredicted branches per element per round.
A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children. That's three comparisons for value and two checks whether the children actually exist. Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is so expensive that it negates the advantage of fewer rounds.
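For reference, the technique itself is standard. A minimal Python sketch of a k-way merge through a binary heap (heapq standing in for the C++ priority queue - an illustration of the idea, not the benchmarked implementation):

```python
import heapq

def multi_merge(lists):
    """Merge k sorted lists via a priority queue of (value, list_index)."""
    heads = [iter(lst) for lst in lists]
    heap = []
    for i, it in enumerate(heads):
        first = next(it, None)  # assumes None never appears in the data
        if first is not None:
            heap.append((first, i))
    heapq.heapify(heap)
    out = []
    while heap:
        value, i = heap[0]          # peek at the smallest head element
        out.append(value)
        nxt = next(heads[i], None)  # refill from the same list...
        if nxt is None:
            heapq.heappop(heap)     # ...or drop the exhausted list
        else:
            heapq.heapreplace(heap, (nxt, i))
    return out

assert multi_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9], [0, 10]]) == list(range(11))
```

Every output element costs a peek plus a sift of O(log k) comparisons - exactly the bookkeeping the measurements point at.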
Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point, as it does not really have any advantage over the regular merge sort.
17 Apr 2026 10:41am GMT
This Week in GNOME: #245 Infinite Ranges
Update on what happened across the GNOME project in the week from April 10 to April 17.
GNOME Core Apps and Libraries
Libadwaita ↗
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) 🏳️⚧️🏳️🌈 reports
AdwAboutDialog's Other Apps section title can now be overridden to say something other than "Other Apps by developer-name"
Alice (she/her) 🏳️⚧️🏳️🌈 announces
AdwEnumListModel has been deprecated in favor of the recently added GtkEnumList. They work identically, so migrating should be as simple as find-and-replace
Maps ↗
Maps gives you quick access to maps all across the world.
mlundblad announces
Maps now shows track/stop location for boarding and disembarking stations/stops on public transit journeys (when available in upstream data)
GNOME Circle Apps and Libraries
Graphs ↗
Plot and manipulate data
Sjoerd Stendahl says
After two years without a major feature-update, we are happy to announce Graphs 2.0. It's by far our biggest update yet. We are targeting a stable release next month, but in the meantime we are running an official beta testing period. We are very happy for any feedback, especially in this period!
The upcoming Graphs 2.0 features some major long-requested changes: equations now span an infinite range and can be edited and manipulated analytically, the style editor has been redesigned with a live preview, we revamped the import dialog, and imported data now supports error bars. Equations with infinite values in them, such as y=tan(x), now also render properly, with values being drawn all the way to infinity and without a line going from plus to minus infinity. We've also added support for spreadsheet and SQLite database files, drag-and-drop importing, improved curve fitting with residuals and better confidence bands, and now have proper mobile support. These are just some highlights; a more complete list of changes, including a description of how to get the beta version, can be found here: https://blogs.gnome.org/sstendahl/2026/04/14/announcing-the-upcoming-graphs-2-0/
Gaphor ↗
A simple UML and SysML modeling tool.
Arjan announces
Mareike Keil of the University of Mannheim published her article "NEST‑UX: Neurodivergent and Neurotypical Style Guide for Enhanced User Experience". The paper explores how user interfaces can be designed to be accessible for both neurotypical and neurodivergent users, including people with autism, ADHD or giftedness.
The Gaphor team worked together with Mareike to implement suggestions she found during her research, allowing us to test how well these ideas work in practice.
The article can be found at https://academic.oup.com/iwc/advance-article-abstract/doi/10.1093/iwc/iwag011/8571596.
Mareike's LinkedIn announcement can be found at https://www.linkedin.com/feed/update/urn:li:activity:7447176733759352832/.
Third Party Projects
Bilal Elmoussaoui announces
Now that most of the basic features work as expected, I would like to publicly introduce you to Goblin, a GObject Linter, for C codebases. You can read more about it at https://belmoussaoui.com/blog/23-goblin-linter/
Anton Isaiev says
RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)
Versions 0.10.15-0.10.22 bring a week of polish across the UI, security, and terminal experience.
Terminal got better. Font zoom (Ctrl+Scroll, Ctrl+Plus/Minus) and optional copy-on-select landed. The context menu now works properly - VTE's native API replaced the custom popover that was stealing focus and breaking clipboard actions. On X11 sessions (MATE, XFCE) where GTK4's NGL renderer caused blank popovers, RustConn auto-detects and falls back to Cairo.
Sidebar and navigation. Groups expand/collapse on double-click anywhere on the row. The Local Shell button moved to the header bar so it's always visible. Protocol filter bar is now optional and togglable. Tab groups show as a [GroupName] prefix in the tab title, and a new "Close All in Group" action cleans up grouped tabs at once. A tab group chooser dialog with clickable pill buttons replaces manual retyping.
RDP fixes. Multiple shared folders now map correctly in embedded IronRDP mode - previously only the first path was used. SSH Port Forwarding UI, which had silently disappeared from the connection dialog, is back.
Security hardened. Machine key encryption dropped the predictable hostname+username fallback; the /etc/machine-id path now uses HKDF-SHA256 with app-specific salt. Context menu labels and sidebar accessible labels are localized for screen readers.
Ctrl+K no longer hijacks the terminal - it was removed from the global search shortcut, so nano and other terminal apps get it back. Terminal auto-focus after connection means you can type immediately.
Export and import. Export dialog gained a group filter, and RustConn Native (.rcn) is now the default format in both import and export dialogs.
Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn
Mufeed Ali reports
Wordbook 1.0.0 was released
Wordbook is now a fully offline application with no in-app downloads. Pronunciation data is now sourced from WordNet where possible, allowing better grouping of definitions in homonyms like "bass". In general, many UI/UX improvements and bug fixes were also made. The community also helped by localizing the app for a total of 6 new languages.
Try it on Flathub.
Pods ↗
Keep track of your podman containers.
marhkb says
Pods 3.0.0 is out!
This major release introduces a brand-new container engine abstraction layer allowing for greater flexibility.
Based on this new layer, Pods now features initial Docker support, making it easier for users to manage their containers regardless of their preferred backend.
Check it out on Flathub.
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
17 Apr 2026 12:00am GMT
16 Apr 2026
Planet GNOME
Thibault Martin: TIL that Pagefind does great client-side search
I post more and more content on my website. What was visible at a glance before is now more difficult to find. I wanted to implement search, but it is a static website. That means everything is built once, and then published somewhere as final, immutable pages. I can't send a request for search and get results in return.
Or that's what I thought! Pagefind is a neat JavaScript library that does two things:
- It produces an index of the content right after building the static site.
- It provides two web components to insert in my pages: <pagefind-modal>, the search modal itself (hidden by default), and <pagefind-modal-trigger>, which looks like a search field and opens the modal.
The pagefind-modal component looks up the index when the user types a query. The index is a static file, so there is no need for a backend that processes queries. Of course this only works for basic queries, but it's a great tool already!
Pagefind is also easy to customize via a list of CSS variables. Adding it to this website was very straightforward.
16 Apr 2026 10:00am GMT
14 Apr 2026
Planet GNOME
Steven Deobald: End of 10 Handout
There was a silly little project I'd tried to encourage many folks to attempt last summer. Sri picked it up back in September and after many months, I decided to wrap it up and publish what's there.
The intention is a simple, 2-sided A4 that folks can print and give out at repair cafes, like the End of 10 event series. Here's the original issue, if you'd like to look at the initial thought process.
When I hear fairly technical folks talk about Linux in 2026, I still consistently hear things like "I don't want to use the command line." The fact that Spotify, Discord, Slack, Zoom, and Steam all run smoothly on Linux is far removed from these folks' conception of the Linux desktop they might have formed back in 2009. Most people won't come to Linux because it's free of shlop and ads - they're accustomed to choking on that stuff. They'll come to Linux because they can open a spreadsheet for free, play Slay The Spire 2, or install Slack even though they promised themselves they wouldn't use their personal computer for work.
The GNOME we all know and love is one we take for granted… and the benefits of which we assume everyone wants. But the efficiency, the privacy, the universality, the hackability, the gorgeous design, and the lack of ads? All these things are the icing on the cake. The cake, like it or not, is installing Discord so you can join the Sunday book club.
Here's the A4. And here's a snippet:
If you try this out at a local repair cafe, I'd love to know which bits work and which don't. Good luck!
14 Apr 2026 9:28pm GMT
Sjoerd Stendahl: Announcing the upcoming Graphs 2.0
It's been a while since we last shared a major update of Graphs. We've had a few minor releases, but the last time we had a substantial feature update was over two years ago.
This does not mean that development has stalled - quite the contrary. We've been working hard on some major changes that took some time to get completely right. Now, after a long development cycle, we're finally getting close enough to a release to announce an official beta period. In this blog, I'll try to summarize most of the changes in this release.
New data types
In previous versions of Graphs, all data types are treated equally. This means that an equation is actually just regular data that is generated when loading. That's fine, but it also means that the span of the equation is limited, the equation cannot be changed afterward, and operations on the equation are not reflected in the equation name. In Graphs 2.0, we have three distinct data types: Datasets, Generated Datasets and Equations.
Datasets are the regular, imported data that you all know and love. Nothing has really changed here. Generated Datasets are essentially the same as regular datasets, except that they are generated from an equation; you can change the equation, step size and limits after creating the item. Finally, the major new addition is the concept of Equations. As the name implies, equations are generated from an equation you enter, but they span an infinite range. Furthermore, operations you perform on equations are done analytically: if you translate the equation `y = 2x + 3` by 3 in the y-direction, it changes to `y = 2x + 6`; if you take its derivative, it changes to `y = 2`, and so on. This is a long-requested feature, made possible thanks to the magic of sympy and some trickery on the canvas. Below, there's a video that demonstrates these three data types.
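As a toy model of what "analytically" means here - plain polynomial coefficient lists standing in for the sympy expressions Graphs actually uses, with hypothetical helper names:

```python
# A polynomial as coefficients [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
def translate_y(coeffs, dy):
    """Shift the whole curve in the y-direction: only the constant term moves."""
    return [coeffs[0] + dy] + coeffs[1:]

def derivative(coeffs):
    """d/dx of c0 + c1*x + c2*x^2 + ... = c1 + 2*c2*x + ..."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

eq = [3, 2]              # y = 2x + 3
eq = translate_y(eq, 3)  # y = 2x + 6
assert eq == [6, 2]
assert derivative(eq) == [2]  # y = 2
```

The point is that the operation rewrites the equation itself rather than a sampled array of points, so the result still spans an infinite range.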
Revamped Style Editor
We have redesigned the style editor, where we now show a live preview of the edited styles. This has been a pain point in the past, when you edit styles you cannot see how it actually affects the canvas. Now the style editor immediately tells you how it will affect a canvas, making it much easier to change the style exactly to your preferences.
We have also added the ability to import styles. Since Graphs styles are based on matplotlib styles, most features from a matplotlib style generally work. Similarly, you can now export your styles as well, making it easier to share a style or simply move it to a different machine. Finally, the style editor can be opened independently of Graphs: by opening a Graphs style from your file manager, you can change the style without having to open Graphs itself.
We also added some new options, such as the ability to style the new error bars, and the option to draw tick labels (the values) on all axes that have ticks.
Improved data import
We have completely reworked the way data is imported. Under the hood, the import code is now fully modular, making it possible to add new parsers without having to mess with the rest of the code. Thanks to this rework, we have added support for spreadsheets (LibreOffice .ods and Microsoft Office .xlsx) and for SQLite database files. The UI adapts automatically: for spreadsheets, columns are selected by column name (the alphabetical letter) instead of by index, while SQLite imports show the tables present in the database.
Furthermore, the import dialog itself has been improved. It is now possible to add multiple files at once, or to import multiple datasets from the same file, and settings can be adjusted for each dataset individually. You can even import from just a single column. We also added the ability to import error bars on either axis, and added pop-up buttons that explain certain settings.
Error bars
I mentioned this in the previous paragraph, but as it's a feature that has been requested multiple times, I thought it'd be good to state it explicitly: we have now added support for error bars. Error bars can easily be set in the import dialog, and turned on and off per axis when editing an item.
Singularity handling
The next version of Graphs will also finally handle singularities properly, so equations with infinite values in them will be rendered as they should be. In the old version, for equations with values that go to infinity and then flip sign, a line was drawn from the maximum value to the minimum value, even though there are no values in between. Furthermore, since we render a finite number of data points, the lines didn't go up to infinity either, giving misleading graphs.
This is neatly illustrated in the pictures below: the values go all the way up to infinity like they should, and Graphs knows that the line is not continuous, so it does not try to draw a straight line from plus to minus infinity.
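For the curious, one common way a renderer can achieve this (a hypothetical sketch, not necessarily what Graphs does internally) is to insert NaN wherever the sampled curve jumps between huge values of opposite sign; most plotting backends, matplotlib included, break the line at NaN:

```python
import numpy as np

def break_at_singularities(x, y, jump=1e3):
    """Replace the point just past a sign-flipping jump with NaN so a
    plotted line gets a gap instead of a vertical stroke across the
    singularity. The `jump` threshold is an arbitrary example value."""
    y = np.asarray(y, dtype=float).copy()
    sign_flip = np.sign(y[:-1]) != np.sign(y[1:])
    huge_jump = np.abs(np.diff(y)) > jump
    y[1:][sign_flip & huge_jump] = np.nan
    return x, y

# Sample y = 1 / (x - 1), which is singular at x = 1:
x = np.linspace(0.5, 1.5, 1000)  # this grid never hits x = 1 exactly
x_out, y_out = break_at_singularities(x, 1 / (x - 1))
```

Passing `x_out, y_out` to a line plot then leaves a visible gap at the singularity instead of a misleading vertical line.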
Reworked Curve fitting
The curve fitting has been reworked completely under the hood. While the changes may not be that obvious as a user, the code has basically been replaced entirely. The most important change is that the confidence band is now calculated correctly using the delta method; previously a naive approach was used where the limits were calculated from the standard deviation of each parameter, which does not hold up well in most cases. The fitted parameter values are also no longer rounded in the generated equation names (e.g. 421302 used to be rounded to 421000). More useful error messages are provided when things go wrong, custom equations now have an apply button, which makes entering new equations smoother, and the root mean squared error has been added as a second goodness-of-fit measure. Finally, you can now inspect the residuals of your fit. Residuals are useful to check whether a fit is physically sensible: a good fit shows residuals scattered randomly around zero with no visible pattern, while a systematic pattern, such as a curve or a trend, suggests that the chosen model may not be appropriate for the data.
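As an aside, the delta method itself is straightforward to sketch: propagate the parameter covariance through the model's Jacobian with respect to the parameters. Here is a generic illustration with SciPy; this is not Graphs' implementation, and the exponential model and synthetic data are made up for the example:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t

def model(x, a, b):
    return a * np.exp(b * x)

# Synthetic noisy data around a = 2, b = 1.5:
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)
y = model(x, 2.0, 1.5) + rng.normal(0.0, 0.1, x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])

# Numerical Jacobian of the model w.r.t. the parameters, at the fit:
eps = 1e-6
J = np.empty((x.size, popt.size))
for i in range(popt.size):
    step = np.zeros_like(popt)
    step[i] = eps
    J[:, i] = (model(x, *(popt + step)) - model(x, *(popt - step))) / (2 * eps)

# Delta method: var(f(x)) ~= J @ pcov @ J.T, taken row-wise,
# then scaled by the Student-t quantile for a 95% band.
var_f = np.einsum("ij,jk,ik->i", J, pcov, J)
half_width = t.ppf(0.975, x.size - popt.size) * np.sqrt(var_f)
fit = model(x, *popt)
lower, upper = fit - half_width, fit + half_width
```

Unlike the per-parameter standard-deviation approach, this accounts for correlations between parameters, which is why the resulting band is correct even when the parameters are strongly coupled.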
UI changes
We've tweaked the UI a bit all over the place, but one change worth highlighting is that we have moved the item and figure settings to the sidebar. The reason is that these settings typically affect the canvas, so you don't want to lose sight of the canvas while changing them. For example, when setting the axes limits, you want to see how your graph looks with the new limits; a window obstructing the view does not help.
Another nice addition is that you can now simply click on a part of the canvas, such as the limits, and it will immediately bring you to the figure settings with the relevant field highlighted. See video below.
Mobile screen support
With the upcoming release, we finally have full support for mobile devices. Here's a quick demonstration on an old OnePlus 6:
Figure exporting
One nice addition is the improved figure export. Instead of simply exporting the canvas as you see it on screen, you can now explicitly set a resolution. This is vital if you have many figures in the same work, or need to publish figures in academic journals, where you need consistency both in size and in font sizes. Of course, you can still use the previous behaviour and export at the same size as in the application.
More quality of life changes
The above are just highlights of some major feature updates, but there's a large number of smaller features as well. Here's a rapid-fire list of other niceties we added:
- Multiple instances of Graphs can now be open at the same time
- Data can now be imported by drag-and-drop
- The subtitle finally shows the full file path, even in the isolated Flatpak
- Custom transformations have gotten more powerful with the addition of new variables to use
- Graphs now inhibits the session when unsaved data is still open
- Added support for base-2 logarithmic scaling
- Warnings are now displayed when trying to open a project from a beta version
And a whole bunch of bug-fixes, under-the-hood changes, and probably some features I have forgotten about. Overall, it's our biggest update yet by far, and I am excited to finally be able to share the update soon.
As always, thanks to everyone who has been involved in this version. Graphs is not a one-person project. The bulk of the maintenance is done by me and Christoph, the other maintainer. And of course, we should thank the entire community, both within GNOME (such as help from the design team and the translation team) and outsiders who come with feedback, bug reports or plain suggestions.
Getting the beta
This release is still in beta while we are ironing out the final issues. The expected release date is somewhere in the second week of May. In the meantime, feel free to test the beta. We are very happy for any feedback, especially in this period!
You can get the beta directly from Flathub. First, you need to add the Flathub beta remote:
flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo
Then, you can install the application:
flatpak install flathub-beta se.sjoerd.Graphs
To run the beta version by default, the following command can be used:
sudo flatpak make-current se.sjoerd.Graphs beta
Note that sudo is necessary here, as it sets the current branch at the system level. To install on a per-user basis, the --user flag can be used with the previous commands. To switch back to the stable version, simply run the above command with beta replaced by stable.
The beta branch should get updated somewhat regularly. If you don't feel like using the flathub-beta remote, or want the latest build, you can also get the release from the GitLab page and build it in GNOME Builder.
14 Apr 2026 10:33am GMT
Jakub Steiner: 120+ Icons and Counting
Back in 2019, we undertook a radical overhaul of how GNOME app icons work. The old Tango-era style required drawing up to seven separate sizes per icon and a truckload of detail. A task so demanding that only a handful of people could do it. The "new" style is geometric, colorful, but mainly achievable. Redesigning the system was just the first step. We needed to actually get better icons into the hands of app developers, as those should be in control of their brand identity. That's where app-icon-requests came in.
As of today, the project has received over a hundred icon requests. Each one represents a collaboration between a designer and a developer, and a small but visible improvement to the Linux desktop.
How It Works
If a project needs a quick turnaround and direct control over the result, the best approach remains doing it in-house or commissioning a designer.
But if you're not in a rush, and aim to be a well-designed GNOME app in particular, you can make use of the idle time of various GNOME designers. The process is simple. If you're building an app that follows the GNOME Human Interface Guidelines, you can open an icon request. A designer from the community picks up the issue, starts sketching ideas, and works with you until the icon is ready to ship. If your app is part of GNOME Circle or is aiming to join, you're far more likely to get a designer's attention quickly.
The sketching phase is where the real creative work happens. Finding the right metaphor for what an app does, expressed in a simple geometric shape. It's the part I enjoy most, and why I've been sharing my Sketch Friday process on Mastodon for over two years now (part 2). But the project isn't about one person's sketches. It's a team effort, and the more designers join, the faster the backlog shrinks.
Highlights
Here are a few of the icons that came through the pipeline. Each started as a GitLab issue and ended up as pixels on someone's desktop.
Alpaca, an AI chat client, went through several rounds of sketching to find just the right llama. Bazaar, an alternative to GNOME Software, took eight months and 16 comments to go from a shopping basket concept through a price tag to the final market stall. Millisecond, a system tuning tool for low-latency audio, needed several rounds to land on the right combination of stopwatch and waveform. Field Monitor shows how multiple iterations narrow down the concept. And Exhibit, the 3D model viewer, is one of my personal favorites.
You can browse all 127 completed icons to see the full range - from core GNOME apps to niche tools on Flathub.
Papers: From Sketch to Ship
To give a sense of what the process looks like up close, here's Papers - the GNOME document viewer. The challenge was finding an icon that says "documents" without being yet another generic file icon.
The early sketches explored different angles - a magnifying glass over stacked pages, reading glasses resting on a document. The final icon kept the reading glasses and the stack of colorful papers, giving it personality while staying true to what the app does. The whole thing played out in the GitLab issue, with the developer and designer going back and forth until both were happy.
While the new icon style is far easier to execute than the old high-detail GNOME icons, that doesn't mean every icon is quick. The hard part was never pushing pixels - it's nailing the metaphor. The icon needs to make sense to a new user at a glance, sit well next to dozens of other icons, and still feel like this app to the person who built it. Getting that right is a conversation between the designer's aesthetic judgment and the maintainer's sense of identity and purpose, and sometimes that conversation takes a while.
Bazaar is a good example.
The app was already shipping with the price tag icon when Tobias Bernard - who reviews apps for GNOME Circle - identified its shortcomings and restarted the process. That kind of quality gate is easy to understate, but it's a big part of why GNOME apps look as consistent as they do. Tobias is also a prolific icon designer himself, frequently contributing icons to key projects across the ecosystem. In this case, the sketches went from a shopping basket through the price tag to a market stall with an awning - a proper bazaar. Sixteen comments and eight months later, the icon shipped.
Get Involved
There are currently 20 open icon requests waiting for a designer. Recent ones like Kotoba (a Japanese dictionary), Simba (a Samba manager), and Slop Finder haven't had much activity yet and could use a designer's attention.
If you're a designer, or want to become one, this is a great place to start contributing to Free software. The GNOME icon style was specifically designed to be approachable: bold shapes, a defined color palette, clear guidelines. Tools like Icon Preview and Icon Library make the workflow smooth. Pick a request, start with a pencil sketch on paper, and iterate from there. There's also a dedicated Matrix room #appicondesign:gnome.org where icon work is discussed - it's invite-only due to spam, but feel free to poke me in #gnome-design or #gnome for an invitation. If you're new to Matrix, the GNOME Handbook explains how to get set up.
If you're an app developer, don't despair if you're shipping with a placeholder icon. Follow the HIG, open a request, and a designer will help you out. If you're targeting GNOME Circle, a proper icon is part of the deal anyway.
A good icon is one of those small things that makes an app feel real - finished, polished, worth installing. Now that we actually have a place to browse apps, an app icon is either the fastest way to grab attention or make people skip. If you've got some design chops and a few hours to spare, pick an issue and start sketching.
Need a Fast Track?
If you need a faster turnaround or just want to work with someone who's been helping out with GNOME's visual identity for as long as I can remember - Hylke Bons offers app icon design for open source projects through his studio, Planet Peanut. Hylke has been a core contributor to GNOME's icon work for well over a decade. You'll be in great hands.
His service has a great freebie for FOSS projects - funded by community sponsors. You get three sketches to choose from, a final SVG, and a symbolic variant, all following the GNOME icon guidelines. If your project uses an OSI-approved license and is intended to be distributed through Flathub, you're eligible. Consider sponsoring his work if you can - even a small amount helps keep the pipeline going.
14 Apr 2026 12:00am GMT
13 Apr 2026
Planet GNOME
Adrien Plazas: Monster World IV: Disassembly and Code Analysis
This winter I was bored and needed something new, so I spent lots of my free time disassembling and analysing Monster World IV for the SEGA Mega Drive. More specifically, I looked at the 2008 Virtual Console revision of the game, which adds an English translation to the original 1994 release.
My long-term goal would be to fully disassemble and analyse the game, port it to C or Rust as I usually do, and then port it to the Game Boy Advance. I don't have a specific reason to do that; I just think it's a charming game from a dated but charming series, and I think the Monster World series would be a perfect fit on the Game Boy Advance. For a long time, I have also wanted to experiment with disassembling or decompiling code, to understand what doing so implies, how retro computing systems work, and the inner workings of a game I enjoy. Also, there is no publicly available disassembly of this game as far as I know.
As spring is coming, I sense my focus shifting to other projects, but I don't want this work to be lost forever, for everyone, and especially not for future me. Hence, I decided to publish what I have here, so I can come back to it later or so it can benefit someone else.
First, here is the Ghidra project archive. It's the first time I used Ghidra and I'm certain I did plenty of things wrong, feedback is happily welcome! While I tried to rename things as my understanding of the code grew, it is still quite a mess of clashing name conventions, and I'm certain I got plenty of things wrong.
Then, here is the Rust-written data extractor. It documents how some systems work, both as code and actual documentation. It mainly extracts and documents graphics and their compression methods, glyphs and their compression methods, character encodings, and dialog scripts. Similarly, I'm not a Rust expert, I did my best but I'm certain there is area for improvement, and everything was constantly changing anyway.
There is more information that isn't documented and is just floating in my head, such as how the entity system works, but I have yet to refine my understanding of it. The same goes for the optimizations allowed by coding in assembly, such as using specific registers for commonly used arguments. Hopefully I will come back to this project and complete it, at least when it comes to disassembling and documenting the game's code.
13 Apr 2026 10:00pm GMT
Felipe Borges: RHEL 10 (GNOME 47) Accessibility Conformance Report
Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.
Accessibility Conformance Reports document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report gives a good look at how our stack handles accessibility, from screen readers to keyboard navigation.
Getting a desktop environment to meet these requirements is a huge task and it's only possible because of the work done by our community in projects like: Orca, GTK, Libadwaita, Mutter, GNOME Shell, core apps, etc…
Kudos to everyone in the GNOME project who cares about improving accessibility. We all know there's a long way to go before desktop computing is fully accessible to everyone, but we are surely working on that.
If you're curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.
13 Apr 2026 10:00am GMT