29 Jul 2025
Planet GNOME
Christian Schaller: Artificial Intelligence and the Linux Community
I have wanted to write this blog post for quite some time, but I have been unsure about the exact angle for it. I think I have found that angle now: I will root the post in a very tangible, concrete example.
So the reason I wanted to write this is that I feel there is a palpable skepticism and negativity towards AI in the Linux community. I understand that there are societal implications that worry us all, like how deepfakes have the potential to upend a lot of things, from news dissemination to court proceedings, or how malign forces can use AI to drive narratives on social media, as if social media wasn't toxic enough as it is. But for open source developers like us in the Linux community there are also, I think, deep concerns about tooling that cuts into something so close to the heart of our community: writing code and being skilled at writing code. I hear and share all those concerns, but at the same time, having spent the last weeks using Claude.ai, I feel it is not something we can afford not to engage with. I know people have probably used a lot of different AI tools in the last year, some more cute than useful, some somewhat useful, and others interesting improvements to your Google search, for instance. I shared a lot of those impressions, but using Claude this last week has opened my eyes to what AI engines are going to be capable of going forward.
So my initial test was writing a Python application for internal use at Red Hat, basically connecting to a variety of sources, pulling data, and putting together reports; typical management fare. How simple it was impressed me, though. I think most of us who have had to pull data from a new source know how painful it can be, with issues ranging from missing to outdated to hard-to-parse API documentation. A lot of us then spend a lot of time experimenting to figure out the right API calls to make in order to pull the data we need. Well, Claude was able to give me Python scripts that pulled that data right away. I still had to spend some time with it to fine-tune the data being pulled and ensure we pulled the right data, but I did it in a fraction of the time I would have spent figuring that stuff out on my own. The one data source Claude struggled with was Fedora's Bodhi; once I pointed it to the URL with the latest documentation for that, it figured out that it would be better to use the Bodhi client library to pull data, and once it had that figured out it was clear sailing.
So coming off pretty impressed by that experience, I wanted to understand whether Claude would be able to put together something programmatically more complex, like a GTK+ application using Vulkan. [Note: I should have checked the code better, but thanks to the people who pointed this out. I told the AI to use Vulkan, which it did, but not in the way I expected: I expected it to render the globe using Vulkan, but it instead decided to ensure GTK used its Vulkan backend. An important lesson in both prompt engineering and checking the code afterwards.] So I thought about what would be a good example of such an application, and I figured it would be fun to find something really old and ask Claude to help me bring it into the current age. Then I suddenly remembered xtraceroute, an old application originally written with GTK 1 and OpenGL that shows your traceroute on a 3D globe.

Screenshot of the original Xtraceroute application
I went looking for it and found that, while it had been updated to GTK2 since I last looked at it, it had not been touched in 20 years. So I thought: this is a great test case. I grabbed the code and fed it into Claude, asking Claude to give me a modern GTK4 version of this application using Vulkan. Ok, so how did it go? Well, it ended up being an iterative effort, with a lot of back and forth between myself and Claude. One nice feature Claude has is that you can upload screenshots of your application and Claude will use them to help you debug. Thanks to that, I have a long list of screenshots showing how this application evolved over the course of the day I spent on it.

This screenshot shows Claude's first attempt at transforming the 20-year-old xtraceroute application into a modern one using GTK4 and Vulkan, also adding a Meson build system. My prompt to create this was feeding in the old code and asking Claude to come up with a GTK4 and Vulkan equivalent. As you can see, the GTK4 UI is very simple, but ok as it is. The rendered globe leaves something to be desired though. I assume the old code had some 2D fallback code, so Claude latched onto that and focused on trying to use the Cairo API to recreate this application, despite me telling it I wanted a Vulkan application. What we ended up with was a 2D circle that I could spin around like a wheel of fortune. The code did have some Vulkan stuff, but defaulted to the Cairo code.

Second attempt at updating this application. Anyway, I fed the screenshot of my first version back into Claude and said that the image was not a globe, the texture was missing, and the interaction model was more like a wheel of fortune. As you can see, the second attempt did not fare any better; in fact, we went from circle to square. This was also the point where I realized that I hadn't uploaded the textures to Claude, so I had to tell it to load earth.png from the local file repository.

Third attempt from Claude. Ok, so I fed my second screenshot back into Claude and pointed out that it was no globe; in fact, it wasn't even a circle, and the texture was still missing. With me pointing out that it needed to load the earth.png file from disk, it came back with the texture loading. Well, I really wanted it to be a globe, so I said: thank you for loading the texture, now do it on a globe.

This is the output of the fourth attempt. As you can see, it did bring back a circle, but the texture was gone again. At this point I also decided I didn't want Claude to waste any more time on the Cairo code; this was meant to be a proper 3D application. So I told Claude to drop all the Cairo code and instead focus on making a Vulkan application.

So now we finally had something that started looking like something, although it was still a circle, not a globe, and it had that weird division-of-four artifact on the globe. Anyway, I could see it was using Vulkan now and loading the texture, so I felt we were making some decent forward progress. I wrote a longer prompt describing the globe I wanted and how I wanted to interact with it, and this time Claude came back with Vulkan code that rendered this as a globe; thus I didn't end up screenshotting it, unfortunately.

So with the working globe now in place, I wanted to bring in the day/night cycle from the original application. I asked Claude to load the night texture and use it as an overlay to get that day/night effect. I also asked it to calculate the position of the sun relative to Earth at the current time, so that it could overlay the texture in the right location. As you can see, Claude did a decent job of it, although the colors were broken.

So I kept fighting with the color for a bit. Claude could see it was rendering brown, but could not initially figure out why. I could tell the code was doing things mostly right, so I also asked it to look at some other things; for instance, I realized that when I tried to spin the globe it just twisted the texture. We got that fixed, and I also got Claude to create some test scripts that helped us figure out that the color issue was an RGB vs BGR issue; as soon as we understood that, Claude was able to fix the code to render colors correctly. I also had a few iterations trying to get the scaling and mouse interaction behaving correctly.

So at this point I had probably worked on this for 4-5 hours; the globe was rendering nicely and I could interact with it using the mouse. The next step was adding the traceroute lines back. By default, Claude had just put in code to render some small dots on the hop points, not draw the lines. Also, the old method for getting the geo-coordinates no longer worked, but I asked Claude to help me find some current services, which it did, and once I picked one it gave me, on the first try, code that was able to request the geolocation of the IP addresses it got back. To polish it up, I also asked Claude to make sure we drew the lines following the globe's curvature instead of just drawing straight lines.

Final version of the updated Xtraceroute application. It mostly works now, but I did realize why I always thought this was a fun idea yet less interesting in practice: you often don't get very good traceroutes back, probably due to websites being cached or hosted globally. But I felt that I had proven that with a day's work, Claude was able to help me bring this old GTK application into the modern world.
Conclusions
So I am not going to argue that Xtraceroute is an important application that deserved to be saved. In fact, while I feel the current version works and proves my point, I also lost the motivation to polish it up due to the limitations of tracerouting, but the code is available for anyone who finds it worthwhile.
But this wasn't really about Xtraceroute. What I wanted to show here is how someone lacking C and Vulkan development skills can actually use a tool like Claude to put together a working application, even one using more advanced stuff like Vulkan, which I know many besides me find daunting. I also found Claude really good at producing documentation and architecture documents for your application. It was also able to give me a working Meson build system and create all the desktop integration files for me, like the .desktop file, the metainfo file, and so on. For the icons I ended up using Gemini, as Claude does not do image generation at this point, although it was able to take a PNG file and create an SVG version of it (although not a perfect likeness to the original PNG).
Another thing I want to say is that, the way I think about this, it does not make coding skills less valuable. AIs can do amazing things, but you need to keep a close eye on them to ensure the code they create actually does what you want, and does it in a sensible manner. For instance, in my reporting application I wanted to embed a PDF file, and Claude's initial thought was to bring in WebKit to do the rendering. That would have worked, but it would have added a very big and complex dependency to my application, so I had to tell it that it could just use libpoppler instead, something Claude agreed was a much better solution. The bigger the codebase, the harder it also becomes for the AI to deal with it, but I think in those circumstances what you can do is use the AI to give you sample code for the functionality you want, in the programming language you want, and then work on incorporating that into your big application yourself.
The other part here, of course, in terms of open source, is how contributors and projects should deal with this. I know there are projects where AI-generated CVEs or patches are drowning them, and that helps nobody. But I think if we see AI as a developer's tool, and hold the developer using the tool responsible for the code generated, then that mindset can help us navigate this. So if you used an AI tool to create a patch for your favourite project, it is your responsibility to verify that patch before sending it in; and by that I don't mean just verifying the functionality it provides, but that the code is clean and readable and follows the coding standards of said upstream project. Maintainers, on the other hand, can use AI to help them review and evaluate patches quicker, so this can be helpful on both sides of the equation. I also found Claude and other AI tools like Gemini pretty good at generating test cases for the code they make, so this is another area where open source patch contributions can improve, by improving test coverage for the code.
I also believe there are many areas where projects can greatly benefit from AI. For instance, in the GNOME project a constant challenge for extension developers has been keeping their extensions up to date; I believe a tool like Claude or Gemini should be able to update GNOME Shell extensions quite easily. So maybe having a service which tries to provide a patch each time there is a GNOME Shell update might be a great help there. At the same time, having an AI take a look at updated extensions and give a first review might help reduce the load on the people doing code reviews on extensions and help flag problematic extensions.
I know that for a lot of cases and situations, uploading your code to a web service like Claude, Gemini or Copilot is not something you want to or can do. I know privacy is a big concern for many people in the community. My team at Red Hat has been working on a code assistant tool using the IBM Granite model, called Granite.code. What makes Granite different is that it relies on having the model run locally on your own system, so you don't send your code or data off somewhere else. This of course has great advantages in terms of privacy and security, but it has challenges too. The top-end AI models out there at the moment, of which Claude is probably the best at the time of writing this blog post, are running on hardware with vast resources in terms of computing power and memory. Most of us do not have those kinds of capabilities available at home, so model size and performance will be significantly lower. So at the moment, if you are looking for a great open source tool to use with VS Code to do things like code completion, I recommend giving Granite.code a look. If, on the other hand, you want to do something like what I have described here, you need to use something like Claude, Gemini or ChatGPT. I do recommend Claude, not just because I believe them to be the best at this at the moment, but because they are also a company trying to hold themselves to high ethical standards. Over time we hope to work with IBM and others in the community to improve local models, and I am also sure local hardware will keep improving, so the experience you can get with a local model on your laptop should at least have less of a gap to the big cloud-hosted models than it does today. There is also the middle-of-the-road option that will become increasingly viable, where you have a powerful server in your home or at your workplace that can at least host a midsize model, and you connect to that over your LAN.
I know IBM is looking at that model for the next iteration of Granite models, where you can choose from a wide variety of sizes: some small enough to run on a laptop, others of a size where a strong workstation or small server can run them, and of course the biggest models for people able to invest in top-of-the-line hardware to run their AI.
Also, the AI space is moving blazingly fast; if you are reading this six months from now, I am sure the capabilities of online and local models will have changed drastically already.
So to all my friends in the Linux community: I ask you to take a look at AI and what it can do, and then let's work together on improving it, not just in terms of capabilities, but also on figuring out things like the societal challenges around it and the sustainability concerns that I know a lot of us have.
What's next for this code
As I mentioned, while I felt I got it to a point where I proved to myself it worked, I am not planning on working on it anymore. But I did make a cute little application for internal use that shows a spinning globe with all global Red Hat offices showing up as little red lights, and which pulls Red Hat news at the bottom. Not super useful either, but I was able to use Claude to refactor the globe rendering code from xtraceroute into this in just a few hours.

Red Hat Offices Globe and news.
29 Jul 2025 4:24pm GMT
Steven Deobald: 2025-07-25 Foundation Update
## Annual Report
The 2025 Annual Report is all but baked. Deepa and I would like to be completely confident in the final financial figures before publishing. The Board saw these final numbers during their all-day work session two days ago. I heard from multiple Board members that they're ecstatic with how Deepa presented the financial report. This was a massive amount of work for Deepa to contribute in her first month volunteering as our new Treasurer, and we all really appreciate the work she's put into this.
## GUADEC and Gratitude
I've organized large events before and I know in my bones how difficult and tiresome it can be. But I don't think I quite understood the scale of GUADEC. I had heard many times in the past three months "you just have to experience GUADEC to understand it" but I was quite surprised to find the day before the conference so intense and overwhelming that I was sick in bed for the entire first day of the conference - and that's as an attendee!
The conference takes the firehose of GNOME development and brings it all into one place. So many things happened here, I won't attempt to enumerate them all. Instead, I'd like to talk about the energy.
I have been pretty disoriented since the moment I landed in Italy but, even in my stupor, I was carried along by the energy of the conference. I could see that I wasn't an exception - everyone I talked to seemed to be sleeping four hours a night but still highly energized, thrilled to take part, to meet their old friends, and to build GNOME together. My experience of the conference was a constant stream of people coming up to me, introducing themselves, telling me their stories, and sharing their dreams for the project. There is a real warmth to everyone involved in GNOME and it radiates from people the moment you meet them. You all made this a very comfortable space, even for an introvert like me.
There is also incredible history here: folks who have been around for 5 years, 15 years, 25 years, 30 years. Lifelong friends like that are rare and it's special to witness, as an outsider.
But more important than anything I have to say about my experience of the conference, I want to proxy the gratitude of everyone I met. Everyone I spoke to, carried through the unbroken days on the energy of the space, kept telling me what a wonderful GUADEC it was. "The best GUADEC I've ever been to." / "It's so wonderful to meet the local community." / "Everything is so smooth and well organized."
If you were not here and couldn't experience it yourself, please know how grateful we all are for the hard work of the staff and volunteers. Kristi, for tirelessly managing the entire project and coordinating a thousand variables, from the day GUADEC 2024 ended until the moment she opened GUADEC 2025. Rosanna, for taking time away from all her regular work at the Foundation to give her full attention to the event. Pietro, for all the local coordination before the conference and his attention to detail throughout the conference. And the local/remote volunteer team - Maria, Deepesha, Ashmit, Aryan, Alessandro, and Syazwan - for openly and generously participating in every conceivable way.
Thank you everyone for making such an important event possible.
29 Jul 2025 2:48pm GMT
28 Jul 2025
Christian Hergert: Week 30 Status
My approach to engineering involves an engineer's notebook and pen at my side almost all the time. My ADHD is so bad that without writing things down I would very much not remember what I did.
Working at large companies can have a silencing effect on engineers in the community because all our communication energy is burnt on weekly status reports. You see this all the time, and it was famously expected behavior when FOSS people joined Google.
But it is not unique to Google and I certainly suffer from it myself. So I'm going to try to experiment for a while dropping my status reports here too, at least for the things that aren't extremely specific to my employer.
Open Questions
- What is the state of the art right now for "I want to provide a completely artisan file-system to a container"? For example, say I wanted to have a FUSE file-system that a build pipeline or other tooling accessed. At least when it comes to project sources; everything else should be read-only anyway. It would be nice to allow tooling some read/write access, but gate the writes so they are limited to the tool's run and not persistent when the tool returns.
Foundry
- A bit more testing of Foundry's replacement for Jsonrpc-GLib, which is a new libdex-based `FoundryJsonrpcDriver`. It knows how to talk a few different types (HTTP-style, `\0` or `\n` delimited, etc.). The LSP backend has been ported to this now, along with all the JSON node creation helpers, so try to put those through their paces.
- Add pre-load/post-load to `FoundryTextDocumentAddin` so that we can add hooks for addins early in the loading process. We actually want this more for avoiding things during buffer loading.
- Found a nasty issue where creating addins was causing long-running leaks due to the `GParameter` arrays getting saved for future addin creation. Need to be a bit more clever about initial property setup so that we don't create this reference cycle.
- New word-completion plugin for Foundry that takes a different approach from what we did in GtkSourceView. Instead, this runs on demand with a copy of the document buffer on a fiber on a thread. This allows using regex for word boundaries (`\w`) with JIT, no synchronization with GTK, and is just generally _a lot_ faster. It also allows following referenced files from `#include`-style headers in C/C++/Obj-C, which is something VIM does (at least with plugins) that I very much wanted. It is nice knowing when a symbol comes from the local file vs an included file as well (again, VIM does this), so I implemented that too for completeness. Made sure it does word de-duplication while I was at it.
- Preserve completion activation (user initiated, automatic, etc.) to propagate to the completion providers.
- Live diagnostics tracking is much easier now. You can just create a `FoundryOnTypeDiagnostics(document)` and it will manage updating things as you go. It is also smart enough to do this with `GWeakRef` so that we don't keep underlying buffers/documents/etc. alive past the point they should be unloaded (as the worker runs on a fiber). You can share a single instance of the live diagnostics using `foundry_text_document_watch_diagnostics()` to avoid extra work.
- Add a Git-specific clone API in `FoundryGitClone` which handles all the annoying things like SSH authentication etc. via the use of our prompt abstraction (TTY, app dialog, etc.). This also means there is a new `foundry clone ...` CLI command to test that infrastructure outside of the IDE. Should help for tracking down weird integration issues.
- To make the Git cloner API work well I had to remove the context requirement from `FoundryAuthPrompt`. You'll never have a loaded context when you want to clone (as there is not yet a project), so that requirement was nonsensical.
- Add new `foundry_vcs_list_commits_with_file()` API to get the commit history for a single file. This gives you a list model of `FoundryCommit`, which should make it very easy for applications to browse through file history. One call, bridge the model to a list view, and wire up some labels.
- Add `FoundryVcsTree`, `FoundryVcsDiff`, and `FoundryVcsDelta` types and Git implementations of them. Like the rest of the new Git abstractions, this all runs threaded using libdex and futures which complete when the thread returns. Still need to iterate on this a bit before the 1.0 API is finalized.
- New API to generate diffs from trees or find trees by identifiers.
- Found out that `libgit2` does not support the bitmap index of the command-line git command. That means you have to do a lot of diffing to determine which commits contain a specific file. Maybe that will change in the future, though. We could always shell out to the git command for this specific operation if it ends up too slow.
- New CTags parser that allows for read-only memory. Instead of doing the optimization of the past (insert `\0` and use strings in place), the new index keeps string offset/run pairs for a few important parts. Then the open-coded binary search to find the nearest partial match (walking backward to get the first potential match) can keep that in mind for `memcmp()`. We can also send all this work off to the thread pools easily now with libdex/futures. Some work still remains if we want to use CTags for symbol resolution, but I highly doubt we do. Anyway, having CTags is really more about having an easy test case for the completion engine than "people will actually use this".
- Also wrote a new CTags miner which can build CTags files using whatever ctags engine is installed (universal-ctags, etc.). The goal here is, again, to test the infrastructure in a super easy way rather than have people actually use this.
- A new `FoundrySymbolProvider` and `FoundrySymbol` API which allows for some nice ergonomics when bridging to tooling like LSPs. It also makes it a lot easier to implement features like pathbars, since you can call `foundry_symbol_list_to_root()` and get a future-populated `GListModel` of the symbol hierarchy. Attach that to a pathbar widget and you're done.
Foundry-GTK
- Make `FoundrySourceView` final so that we can be a lot more careful about life-cycle tracking of related documents, buffers, and addins.
- Use `FoundryTextDocumentAddin` to implement spellchecking with libspelling, as it vastly improves life-cycle tracking. We no longer rely on UB in GLib weak reference notifications to do cleanup in the right order.
- Improve the completion bridge from `FoundryCompletionProvider` to `GtkSourceCompletionProvider`. In particular, start on the `after`/`comment` fields. We still need to get the `before` fields set up for return types. Still extremely annoyed at how LSP works in this regard. I mean really, my rage that LSP is what we have knows no bounds. It's terrible in almost every way imaginable.
Builder
- Make my Builder rewrite use the new `FoundrySourceView`.
- Rewrite the search dialog to use `FoundrySearchEngine` so that we can use the much faster VCS-backed file listing + fuzzy search.
GtkSourceView
- Got a nice patch porting space drawing to GskPath; merged it.
- Make Ctrl+n/Ctrl+p work in VIM emulation mode.
Sysprof
- Add support for building introspection/docs. I don't care about the introspection too much, because I doubt anyone would even use it, but it is nice to have documentation for potential contributors to look at to see how the APIs work at a higher level.
GUADEC
- Couldn't attend GUADEC this year, so I wrote up a talk on Foundry to share with those interested in where things are going. Given the number of downloads of the PDF, I decided that maybe sharing my weekly status round-up is useful.
- Watched a number of videos streamed from GUADEC. While watching Felipe demo his new Boxes work, I fired up the repository with foundry, and things seem to work on aarch64 (Asahi Fedora here). That was the first time ever I've had an easy experience running a virtual machine on aarch64 Linux. Really pleasing!

    foundry clone https://gitlab.gnome.org/felipeborges/boxes/
    cd boxes/
    foundry init
    foundry run
LibMKS
- While testing Boxes on aarch64 I noticed it is using the Cairo framebuffer fallback paintable. That would be fine, except I'm running at 150% scaling here, and when I wrote that code we didn't even have real fractional scaling defined in the Wayland protocol. That means there are stitch marks showing up on this non-accelerated path. We probably want to choose a tile size based on the scale factor and be done with it. The accelerated path shouldn't have this problem, since it uses one DMABUF paintable and sets the damage regions for the GSK renderer to do proper damage calculation.
28 Jul 2025 8:41pm GMT
Thibault Martin: Loading credentials from Bitwarden with direnv
When working on my homelab, I regularly need to pass credentials to my tools. A naive approach is to just store the token in clear text, as in this opentofu snippet:
provider "proxmox" {
endpoint = "https://192.168.1.220:8006/"
api_token = "terraform@pve!provider=REDACTED"
}
You probably twitched at the idea of keeping credentials in plain text files, for good reason. Credentials should be encrypted, and only decrypted when needed. Ideally, they should be decrypted only at the moment they are used and discarded immediately after.
Let's see how direnv and the Bitwarden password manager's CLI can be hooked together to let me keep my infrastructure credentials safe, in a simple, sturdy setup!
Loading environment variables when I step into a directory
Environment variables 101
To create an environment variable, I just need to "export" it, and then I can use it as follows
$ export TEST=foo
$ echo $TEST
foo
Programs running on my computer can use them too, even if I don't pass them explicitly as an argument. This is only true to an extent, but we won't dive into what processes have access to which environment variables in this blog post.
Programs that require sensitive credentials often declare a list of environment variables they will monitor for credentials. In the case of my homelab, the opentofu documentation tells me that I need to export the PROXMOX_VE_API_TOKEN environment variable with the actual API token, and opentofu will be able to use it to do its work.
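To make that concrete, here is a minimal sketch of how that works: an exported variable is visible to any child process the shell starts, which is exactly how opentofu picks up the token. The token value here is obviously a placeholder.

```shell
# Export a variable in the current shell...
export PROXMOX_VE_API_TOKEN="testvalue"

# ...and a child process (here a new shell) inherits it,
# just like opentofu inherits it when launched from this shell.
sh -c 'echo "token is $PROXMOX_VE_API_TOKEN"'
```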
Dynamically loading environment variables with direnv
direnv is a tool that watches my current directory. If my current directory contains a .envrc file, direnv will "execute" it. I can install direnv on my Mac with brew:
$ brew install direnv
Since I'm using fish, I follow the shell hook instructions for it
$ echo "direnv hook fish | source" >> ~/.config/fish/config.fish
And now I can create an .envrc at the root of my infra folder that will export the PROXMOX_VE_API_TOKEN variable.
$ cd ~/Projects/infra
$ echo "export PROXMOX_VE_API_TOKEN=testvalue" > .envrc
I then need to tell direnv that I approve the latest changes in this file before it consents to using it. It's an important security measure: it ensures you have reviewed what is inside the file since it last changed.
$ direnv allow .
direnv: loading ~/Projects/infra/.envrc
direnv: export +PROXMOX_VE_API_TOKEN
I can now test that the PROXMOX_VE_API_TOKEN variable was set:
$ echo $PROXMOX_VE_API_TOKEN
testvalue
If I go to the parent directory, direnv indeed unloads the environment variable and it is no longer available.
$ cd ..
direnv: unloading
$ echo $PROXMOX_VE_API_TOKEN
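This is the skeleton the rest of the setup builds on: instead of hardcoding testvalue, the .envrc can ask a password manager for the secret each time direnv loads the directory. A hypothetical sketch, assuming the Bitwarden CLI introduced in the next section is installed, logged in, and unlocked with BW_SESSION exported:

```shell
# .envrc — hypothetical sketch, not the final setup from this post.
# Assumes `bw` is logged in and the vault is unlocked (BW_SESSION set),
# so the lookup can run non-interactively when direnv loads this file.
export PROXMOX_VE_API_TOKEN="$(bw get password PROXMOX_VE_API_TOKEN)"
```

With this, the secret only lives in the environment while I am inside the directory, and direnv unloads it the moment I step out.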
Managing credentials with the Bitwarden CLI
Installing bw
The official Bitwarden CLI is called bw. In the Download and Install section of the Bitwarden docs, there is no mention of Homebrew. I don't want to install an unofficial version of the Bitwarden client and hand it my credentials, but this official blog post from 2018 mentions Homebrew as an install method, and the bw formula for Homebrew doesn't look suspicious.
I can install bw
with homebrew
$ brew install bw
Logging in
The first thing to do is of course to log in. Since I enabled 2FA, every time I log onto a new device Bitwarden asks me for a One Time Password (OTP) from my authenticator app.
$ bw login
? Email address: myemail@ergaster.org
? Master password: [hidden]
? Two-step login code: 123456
You are logged in!
To unlock your vault, set your session key to the `BW_SESSION` environment variable. ex:
[...]
After I logged in, it spits out a BW_SESSION key that I can reuse if I want to access the vault again without logging in from scratch. Let's lock the vault and unlock it again to confirm I don't have to enter an OTP again.
$ bw lock
Your vault is locked.
$ bw unlock
? Master password: [hidden]
Your vault is now unlocked!
To unlock your vault, set your session key to the `BW_SESSION` environment variable. ex:
[...]
Adding credentials
I like CLIs and TUIs, but I think in this particular case the GUI is much more user friendly to add credentials to the right folders. I didn't install the standalone GUI and only use the WebExtension from my browser.
To keep things neat and tidy, I created a new Infra folder that will contain all the "technical" credentials for my infrastructure, like API keys or service account credentials.
Let's add a dummy credential called EXAMPLE_API_KEY with a random value by clicking New and then Login.
I name the new secret EXAMPLE_API_KEY and only use the password field to store the actual credential. In this specific case I generated a password for it.
I now have a password I can look up using the CLI!
Looking up credentials
The first thing to do before looking up credentials is to unlock my vault and export the BW_SESSION environment variable, so I don't have to append --session MYLONGLONGSESSIONTOKEN to my commands.
$ bw unlock
? Master password: [hidden]
Your vault is now unlocked!
To unlock your vault, set your session key to the `BW_SESSION` environment variable.
[...]
$ export BW_SESSION="UU3JtbpKk4FoqPEzKuaIjkijASPslJLUyj3//5E//AynYYtC/BMssNg0+qTZbGEw9ioSeEk0oIIy77DMQrBOYw=="
The Bitwarden CLI has a get command. The get command documentation tells us that it supports two modes:
- Retrieving a credential via a search term, e.g. bw get password MY_CREDENTIAL
- Retrieving a credential via its internal id, e.g. bw get password 88a52664-df43-4c8a-b33b-b32300817bb0
It would be tempting to just use bw get password MY_CREDENTIAL. Unfortunately, that command doesn't support specifying which folders to search: I can't do anything like bw get password MY_CREDENTIAL --folder Infra.
There are two major drawbacks. First, there is a risk of collision with my personal passwords. Second, it means I can't have the same credential name in two distinct folders (e.g. Staging/EXAMPLE_API_KEY and Production/EXAMPLE_API_KEY).
The workaround for that is to use the bw list command twice:
- Once to get the id of the folder I want to look things up in
- Once to get the id of the credential I searched for, filtered by folder id
Since bw list returns JSON, I will need the jq utility to parse it and retrieve values from the objects I get. I can install it on my Mac with
$ brew install jq
Using bw list folders --search Infra, I get a JSON object containing the information for that folder, including its id. Piping it into jq gives me more readable results.
$ bw list folders --search Infra | jq
[
{
"object": "folder",
"id": "3a5e2014-5f12-4379-9273-b2e300e79100",
"name": "Infra"
}
]
The result is an array that contains a single JSON object. I want to retrieve the value of the id field. Let's make the bold assumption that this search query will always return a single object for now. I can use jq to retrieve the id more specifically:
$ bw list folders --search Infra | jq -r '.[0].id'
3a5e2014-5f12-4379-9273-b2e300e79100
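If that assumption ever breaks, a search can match several folders and `.[0].id` would silently pick the wrong one. A stricter jq filter that selects on the exact folder name is a safer sketch; the sample data below is made up for illustration:

```shell
# Hypothetical output of `bw list folders --search Infra` with two matches
folders='[{"object":"folder","id":"aaa-111","name":"Infrastructure"},
          {"object":"folder","id":"bbb-222","name":"Infra"}]'

# .[0].id would return "aaa-111" (the wrong folder);
# selecting on the exact name returns the right one
echo "$folders" | jq -r '.[] | select(.name == "Infra") | .id'
# bbb-222
```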
Awesome, I have the folder id I need! Let's use it in the second command to find the id of the secret I actually want, and pipe it to jq to get a more human-friendly result:
$ bw list items --folderid 3a5e2014-5f12-4379-9273-b2e300e79100 --search EXAMPLE_API_KEY
[
{
"passwordHistory": null,
"revisionDate": "2025-07-23T07:51:26.020Z",
"creationDate": "2025-07-23T07:51:26.020Z",
"deletedDate": null,
"object": "item",
"id": "88a52664-df43-4c8a-b33b-b32300817bb0",
"organizationId": null,
"folderId": "3a5e2014-5f12-4379-9273-b2e300e79100",
"type": 1,
"reprompt": 0,
"name": "EXAMPLE_API_KEY",
"notes": null,
"favorite": false,
"login": {
"uris": [
{
"match": null,
"uri": "about:newtab"
}
],
"username": null,
"password": "ID&7Z6U03cBzpO&2%KeA@DUlxh9o",
"totp": null,
"passwordRevisionDate": null
},
"collectionIds": []
}
]
Let's once again make the bold assumption that the array will always contain a single object. I see that the password field is there, nested inside the login object. That login object is a field of the first (and here, only) item of my result array. Let's use jq to unpack all that:
$ bw list items --folderid 3a5e2014-5f12-4379-9273-b2e300e79100 --search EXAMPLE_API_KEY | jq -r '.[0].login.password'
ID&7Z6U03cBzpO&2%KeA@DUlxh9o
After I'm done, I need to invalidate the BW_SESSION token by locking the vault with
$ bw lock
Retrieving credentials from Bitwarden with direnv
Taking a step back, my happy path is the following:
- I cd into my ~/Projects/infra directory
- direnv detects I stepped into that directory. It looks up all the credentials I need, and exports them as environment variables
- I leave my ~/Projects/infra directory and direnv unloads all the environment variables
Let's write a utility I can reuse for several projects. I need to write a bw_to_env bash function that takes as parameters the folder in which to perform the lookup and a list of environment variables to get credentials for. I'm making the assumption here that the secret in Bitwarden has the exact same name as the environment variable.
A utility to look up and export variables
Let's start a bw_to_env.sh file that retrieves its arguments first.
#!/bin/bash
for var in "$@"
do
echo $var
done
Now let's make it executable, and test it
$ ./bw_to_env.sh Infra EXAMPLE_API_KEY ANOTHER_KEY EXAMPLE_TOKEN
Infra
EXAMPLE_API_KEY
ANOTHER_KEY
EXAMPLE_TOKEN
Grand! Let's add a small test to ensure we at least have a folder name and a secret name as parameters. In other words: let's make sure we always have at least two parameters, and exit with an error message if we don't.
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
for var in "$@"; do
echo $var
done
Now let's assign the folder name to a proper variable to make things more readable, and use shift to "remove" it from the positional parameters.
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
folder=$1
shift
password_names=("$@")
echo "Looking up in $folder for passwords $password_names"
for var in "$@"; do
echo $var
done
Let's test it all together
$ ./bw_to_env.sh Infra EXAMPLE_API_KEY ANOTHER_KEY EXAMPLE_TOKEN
Looking up in Infra for passwords EXAMPLE_API_KEY
EXAMPLE_API_KEY
ANOTHER_KEY
EXAMPLE_TOKEN
It's starting to take shape! Let's now add Bitwarden to the mix by unlocking the vault when the script is called, and locking it after we're done. Let's use bw unlock --raw to retrieve only the session token, instead of the verbose message we usually get.
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
BW_SESSION=$(bw unlock --raw)
folder=$1
shift
password_names=("$@")
echo "Looking up in $folder for passwords $password_names"
for var in "$@"; do
echo $var
done
bw lock
Testing it, it still works as intended
$ ./bw_to_env.sh Infra EXAMPLE_API_KEY ANOTHER_KEY EXAMPLE_TOKEN
? Master password: [hidden]
Looking up in Infra for passwords EXAMPLE_API_KEY
EXAMPLE_API_KEY
ANOTHER_KEY
EXAMPLE_TOKEN
Your vault is locked.
Now let's check that we actually managed to log in, and exit with a non-zero code if we failed to do so.
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
BW_SESSION=$(bw unlock --raw)
if [[ -z $BW_SESSION ]]; then
echo "Failed to log into bitwarden. Ensure you're logged in with bw login, and check your password" >&2
exit 1
fi
folder=$1
shift
password_names=("$@")
echo "Looking up in $folder for passwords $password_names"
for var in "$@"; do
echo $var
done
bw lock
Let's retrieve the folder id, and exit with an error if it doesn't exist
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
BW_SESSION=$(bw unlock --raw)
if [[ -z $BW_SESSION ]]; then
echo "Failed to log into bitwarden. Ensure you're logged in with bw login, and check your password" >&2
exit 1
fi
folder=$1
shift
password_names=("$@")
echo "Looking up in $folder for passwords $password_names"
# Retrieve the folder id
FOLDER_ID=$(bw list folders --search "$folder" --session "$BW_SESSION" | jq -r '.[0].id')
if [[ -z "$FOLDER_ID" || "$FOLDER_ID" = "null" ]]; then
echo "Failed to find the folder $folder. Please check if it exists and sync if needed with 'bw sync'" >&2
exit 1
fi
for var in "$@"; do
echo $var
done
bw lock
Now, let's iterate over each environment variable we're supposed to export, and look up the appropriate credential for it.
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
BW_SESSION=$(bw unlock --raw)
if [[ -z $BW_SESSION ]]; then
echo "Failed to log into bitwarden. Ensure you're logged in with bw login, and check your password" >&2
exit 1
fi
folder=$1
shift
password_names=("$@")
echo "Looking up in $folder"
# Retrieve the folder id
FOLDER_ID=$(bw list folders --search "$folder" --session "$BW_SESSION" | jq -r '.[0].id')
if [[ -z "$FOLDER_ID" || "$FOLDER_ID" = "null" ]]; then
echo "Failed to find the folder $folder. Please check if it exists and sync if needed with 'bw sync'" >&2
exit 1
fi
for environment_variable_name in "$@"; do
CREDENTIAL=$(bw list items --folderid $FOLDER_ID --search $environment_variable_name --session "$BW_SESSION" | jq -r '.[0].login.password')
if [[ -z $CREDENTIAL || $CREDENTIAL = "null" ]]; then
echo "❌️ Failed to retrieve credential for $environment_variable_name in $folder, exiting with error" >&2
exit 1
fi
export "$environment_variable_name=$CREDENTIAL"
echo "✅️ Exported $environment_variable_name"
done
bw lock
Let's test the script
$ ./bw_to_env.sh Infra EXAMPLE_API_KEY
? Master password: [hidden]
Looking up in Infra
✅️ Exported EXAMPLE_API_KEY
Your vault is locked.
$ echo $EXAMPLE_API_KEY
Oh? My environment variable wasn't exported? It's perfectly normal: environment variables don't propagate from a child process back to its parent, so I need to source this script instead of executing it. Since fish is my default shell, I need to start bash explicitly and then source the script.
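This parent/child rule is easy to demonstrate in isolation (DEMO_VAR is just a throwaway name):

```shell
# The child bash exports a variable, but exports never flow upwards
bash -c 'export DEMO_VAR=from_child'
echo "DEMO_VAR is '${DEMO_VAR:-unset}'"
# DEMO_VAR is 'unset'
```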
$ bash
bash-5.3$ source bw_to_env.sh Infra EXAMPLE_API_KEY
? Master password: [hidden]
Looking up in Infra
✅️ Exported EXAMPLE_API_KEY
Your vault is locked.
bash-5.3$ echo $EXAMPLE_API_KEY
ID&7Z6U03cBzpO&2%KeA@DUlxh9o
It works! Now let's wrap it up into a reusable function I can call from elsewhere.
#!/bin/bash
function bitwarden_password_to_env() {
if [[ "$#" -lt 2 ]]; then
echo "You must specify at least one folder and one secret name" >&2
exit 1
fi
local BW_SESSION=$(bw unlock --raw)
if [[ -z $BW_SESSION ]]; then
echo "Failed to log into bitwarden. Ensure you're logged in with bw login, and check your password" >&2
exit 1
fi
local folder=$1
shift
local password_names=("$@")
echo "Looking up in $folder"
# Retrieve the folder id
local FOLDER_ID=$(bw list folders --search "$folder" --session "$BW_SESSION" | jq -r '.[0].id')
if [[ -z "$FOLDER_ID" || "$FOLDER_ID" = "null" ]]; then
echo "Failed to find the folder $folder. Please check if it exists and sync if needed with 'bw sync'" >&2
exit 1
fi
for environment_variable_name in "$@"; do
local CREDENTIAL=$(bw list items --folderid $FOLDER_ID --search $environment_variable_name --session "$BW_SESSION" | jq -r '.[0].login.password')
if [[ -z $CREDENTIAL || $CREDENTIAL = "null" ]]; then
echo "❌️ Failed to retrieve credential for $environment_variable_name in $folder, exiting with error" >&2
exit 1
fi
export "$environment_variable_name=$CREDENTIAL"
echo "✅️ Exported $environment_variable_name"
done
bw lock
}
Hooking it into direnv
Having that helper in a reusable function allows me to keep a very minimal .envrc for my projects. I can copy bw_to_env.sh to ~/.config/direnv/lib/, where it will be sourced automatically.
Then, in my ~/Projects/infra/.envrc, I can add the following:
bitwarden_password_to_env Infra EXAMPLE_API_KEY
I need to tell direnv that I reviewed the last changes and it can execute the file
$ cd ~/Projects/infra
$ direnv allow .
direnv: loading ~/Projects/infra/.envrc
? Master password: [input is hidden] direnv: ([/opt/homebrew/Cellar/direnv/2.37.0/bin/direnv export fish]) is taking a while to execute. Use CTRL-C to give up.
? Master password: [hidden]
direnv has a very aggressive timeout to ensure that it's not blocking the user. I updated the configuration in ~/.config/direnv/direnv.toml to relax it a bit and wait 30s before it starts worrying:
[global]
warn_timeout = "30s"
Leaving and re-entering my infra directory, I can see it works like a charm!
$ cd ..
direnv: unloading
$ cd infra/
direnv: loading ~/Projects/infra/.envrc
? Master password: [hidden]
Looking up in Infra
✅️ Exported EXAMPLE_API_KEY
Your vault is locked.
direnv: export +EXAMPLE_API_KEY
And just like that, I am now dynamically loading secrets in environment variables when I need them and unloading them when I'm done.
With direnv and Bitwarden, I have a simple, inexpensive setup to keep my credentials secure. My credentials are safe even if my laptop fails or is stolen.
Props to @movabo for their script, which I drew significant inspiration from and which makes it seamless to unlock a Bitwarden vault and extract a secret from it.
28 Jul 2025 8:00am GMT
27 Jul 2025
Planet GNOME
Bastien Nocera: Digitising CDs (aka using your phone as an image scanner)
I recently found, under the rain, next to a book swap box, a pile of 90's "software magazines", which I spent my evenings cleaning, drying, and sorting in the days afterwards.
Magazine cover CDs with nary a magazine
Those magazines are a peculiar thing in France, relying on the mechanism of the "Commission paritaire des publications et des agences de presse", or "Commission paritaire" for short. This structure exists to assess whether a magazine can benefit from state subsidies for the written press (on paper at the time, and also online nowadays), which include a reduced VAT rate (2.1% instead of 20%), reduced postal rates, and tax exemptions.
In the 90s, this was used by Diamond Editions[1] (a publisher related to tech shop Pearl, which French and German computer enthusiasts probably know) to publish magazines with just enough original text to qualify for those subsidies, bundled with the really interesting part, a piece of software on CD.
If you were to visit a French newsagent nowadays, you would be able to find other examples of this: magazines bundled with music CDs, DVDs or Blu-rays, or even toys or collectibles. Some publishers (including the infamous and now shuttered Éditions Atlas) will even get you a cheap kickstart to a new collection, with the first few issues (and collectibles) available at very interesting prices of a couple of euros, before making that "magazine" subscription-only, with each issue being increasingly more expensive (article from a consumer protection association).
Other publishers have followed suit.
I guess you can only imagine how much your scale model would end up costing with that business model (50 eurocent for the first part, 4.99€ for the second), although I would expect them to have given up the idea of being categorised as "written press".
To go back to Diamond Editions, this meant the eventual birth of 3 magazines: Presqu'Offert, BestSellerGames and StratéJ. I remember me or my dad buying a few of those; an older but legit and complete version of ClarisWorks, CorelDraw, or a talkie version of a LucasArts point'n'click was certainly a more interesting proposition than a cut-down warez version full of viruses when budget was tight.
3 of the magazines I managed to rescue from the rain
You might also be interested in the UK "covertape wars".
Don't stress the technique
This brings us back to today and while the magazines are still waiting for scanning, I tried to get a wee bit organised and digitising the CDs.
Some of them have printing that covers the whole CD, but a fair few use the foil/aluminium backing of the CD as a blank surface, which will give you pretty bad results when scanning them with a flatbed scanner: the light source moves along with the sensor, so what you end up scanning is the sensor's reflection on the CD.
My workaround for this is to use a digital camera (my phone's 24MP camera), with a white foam board behind it, so the blank parts appear more light grey. Of course, this means that you need to take the picture from an angle, and that the CD will appear as an oval instead of perfectly circular.
I tried for a while to use GIMP's perspective tools, and "Multimedia" Mike Melanson's MobyCAIRO rotation and cropping tool. In the end, I settled on Darktable, which allowed me to do 4-point perspective deskewing; I just had to have those reference points.
So I came up with a simple "deskew" template, which you can print yourself, although you could probably achieve similar results with grid paper.
After opening your photo with Darktable and selecting the "darkroom" tab, go to the "rotate and perspective" tool, select the "manually defined rectangle" structure, and adjust the rectangle to match the centres of the 4 deskewing targets. Then click on "horizontal/vertical fit". This will give you a squished CD; don't worry, just select the "specific" lens model and voilà.
Tools at the ready
Targets acquired
You can now export the processed image (I usually use PNG to avoid data loss at each step), open things up in GIMP and use the ellipse selection tool to remove the background (don't forget the center hole), the rotate tool to make the writing straight, and the crop tool to crop it to size.
And we're done!
The result of this example is available on Archive.org, with the rest of my uploads being made available on Archive.org and Abandonware-Magazines for those 90s magazines and their accompanying CDs.
[1]: Full disclosure, I wrote a couple of articles for Linux Pratique and Linux Magazine France in the early 2000s, that were edited by that same company.
27 Jul 2025 8:39pm GMT
26 Jul 2025
Planet GNOME
Sam Thursfield: Thoughts during GUADEC 2025
Greetings readers of the future from my favourite open technology event of the year. I am hanging out with the people who develop the GNOME platform talking about interesting stuff.
Being realistic, I won't have time to make a readable writeup of the event. So I'm going to set myself a challenge: how much can I write up of the event so far, in 15 minutes?
Let's go!
Conversations and knowledge
Conferences involve a series of talks, usually monologues on different topics, with slides and demos. A good talk leads to multi-way conversations.
One thing I love about open source is: it encourages you to understand how things work. Big tech companies want you to understand nothing about your devices beyond how to put in your credit card details and send them money. Sharing knowledge is cool, though. If you know how things work then you can fix it yourself.
Structures
Last year, I also attended the conference and was left with a big question for the GNOME project: "What is our story?" (Inspired by an excellent keynote from Ryan Sipes about the Thunderbird email app, and how it's supported by donations).
We didn't answer that directly, but I have some new thoughts.
Open source desktops are more popular than ever. Apparently we have like 5% of the desktop market share now. Big tech firms are nowadays run as huge piles of cash, whose story is that they need to make more cash, in order to give it to shareholders, so that one day you can, allegedly, have a pension. Their main goal isn't to make computers do interesting things. The modern for-profit corporation is a super complex institution, with great power, which is often abused.
Open communities like GNOME are an antidote to that. With way fewer people, they nevertheless manage to produce better software in many cases, but in a way that's demanding, fun, chaotic, mostly leaderless and which frequently burns out volunteers who contribute.
Is the GNOME project's goal to make computers do interesting things? For me, the most interesting part of the conference so far was the focus on project structure. I think we learned some things about how independent, non-profit communities can work, and how they can fail, and how we can make things better.
In a world where political structures are being heavily tested and, in many cases, are crumbling, we would do well to talk more about structures, and to introspect a bit more on what works and what doesn't. And to highlight the amazing work that the GNOME Foundation's many volunteer directors have achieved over the last 30 years to create an institution that still functions today, and in many ways functions a lot better than organizations with significantly more resources.
Relevant talks
- Stephen Deobold's keynote
- Emmanuele's talk on teams
Teams
Emmanuele Bassi tried, in a friendly way, to set fire to long-standing structures around how the GNOME community agrees and disagrees changes to the platform. Based on ideas from other successful projects that are driven by independent, non-profit communities such as the Rust and Python programming languages.
Part of this idea is to create well-defined teams of people who collaborate on different parts of the GNOME platform.
I've been contributing to GNOME in different ways for a loooong time, partly due to my day job, where I sometimes work with the technology stack, and partly because it's a great group of people: we get to meet around the world once a year and make software that's a little more independent from the excesses and the exploitation of modern capitalism, or technofeudalism.
And I think it's going to be really helpful to organize my contributions according to a team structure with a defined form.
Search
I really hope we'll have a search team.
I don't have much news about search. GNOME's indexer (localsearch) might start indexing the whole home directory soon. Carlos Garnacho continues to heroically make it work really well.
QA / Testing / Developer Experience
I did a talk at the conference (and half of another one with Martín Abente Lahaye) about end-to-end testing using openQA.
The talks were pretty successful; they led to some interesting conversations with new people. I hope we'll continue to grow the Linux QA call, keep these conversations going, and try to share knowledge and create better structures so that paid QA engineers who are testing products built with GNOME can collaborate on testing upstream.
Freeform notes
I'm 8 minutes over time already so the rest of this will be freeform notes from my notepad.
Live-coding streams aren't something I watch or create. It's an interesting way to share knowledge with the new generation of people who have grown up with internet videos as a primary knowledge source. I don't have age stats for this blog, but I'm curious how many readers under 30 have read this far down. (Leave a comment if you read this and prove me wrong! :-)
systemd-sysexts for development are going to catch on.
There should be karaoke every year.
Fedora Silverblue isn't actively developed at the moment. bootc is something to keep an eye on.
GNOME Shell Extensions are really popular and are a good "gateway drug" to get newcomers involved. Nobody figured out a good automated testing story for these yet. I wonder if there's a QA project there? I wonder if there's a low-cost way to allow extension developers to test extensions?
Legacy code is "code without tests". I'm not sure I agree with that.
"Toolkits are transient, apps are forever". That's spot-on.
There is a spectrum between being a user and a developer. It's not a black-and-white distinction.
BuildStream is still difficult to learn and the documentation isn't a helpful getting started guide for newcomers.
We need more live demos of accessibility tools. I still don't know how you use the screen reader. I'd like to have the computer read to me.
That's it for now. It took 34 minutes to empty my brain into my blog, more than planned, but a necessary step. Hope some of it was interesting. See you soon!
26 Jul 2025 5:19pm GMT
Nick Richards: Octopus Agile Prices For Linux
I'm on the Octopus Agile electricity tariff, where the price changes every half hour based on wholesale costs. This is great for saving money and using less carbon intensive energy, provided you can shift your heavy usage to cheaper times. With a family that insists on eating at a normal hour, that mostly means scheduling the dishwasher and washing machine.
The snag was not having an easy way to see upcoming prices on my Linux laptop. To scratch that itch, I built a small GTK app: Octopus Agile Energy. You can use it yourself if you're in the UK and on this electricity tariff. Either install it directly from Flathub or download the source code and 'press play' in GNOME Builder. The app is heavily inspired by the excellent Octopus Compare for mobile, but I stripped the concept back to a single job: what's the price now and for the next 24 hours? This felt right for a simple desktop utility and was achievable with a bit of JSON parsing and some hand waving.
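The heart of that "bit of JSON parsing" is little more than sorting half-hourly rates by start time. A minimal sketch, assuming the response shape of Octopus's public unit-rates endpoint (a "results" array of entries with valid_from, valid_to and value_inc_vat fields; the sample payload below is invented):

```python
import json
from datetime import datetime

# Parse the "results" array of a unit-rates response into
# (start_time, price) pairs, sorted chronologically.
def parse_rates(payload):
    rates = []
    for r in json.loads(payload)["results"]:
        start = datetime.fromisoformat(r["valid_from"].replace("Z", "+00:00"))
        rates.append((start, r["value_inc_vat"]))
    return sorted(rates)

# A hypothetical two-slot response, deliberately out of order
sample = '''{"results": [
  {"valid_from": "2025-07-26T17:30:00Z", "valid_to": "2025-07-26T18:00:00Z", "value_inc_vat": 27.3},
  {"valid_from": "2025-07-26T17:00:00Z", "valid_to": "2025-07-26T17:30:00Z", "value_inc_vat": 24.8}
]}'''
print(parse_rates(sample)[0][1])  # 24.8
```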
I wrote a good chunk of the Python for it with the gemini-cli, which was a pleasant surprise. My workflow was running Gemini in a Toolbx container, building on my Silverblue desktop with GNOME Builder, and manually feeding back any errors. I kept myself in the loop, taking my own screenshots of visual issues rather than letting the model run completely free and using integrations like gnome-mcp-server to inspect itself.
It's genuinely fun to make apps with GTK 4, libadwaita, and Python. The modern stack has a much lower barrier to entry than the GTK-based frameworks I've worked on in the past. And while I have my reservations about cloud-hosted AI, using this kind of technology feels like a step towards giving users more control over their computing, not less. Of course, the 25 years of experience I have in software development helped bridge the gap between a semi-working prototype that only served one specific pricing configuration, didn't cache anything and was constantly re-rendering; and an actual app. The AI isn't quite there yet at all, but the potential is there and a locally hosted system by and for the free software ecosystem would be super handy.
I hope the app is useful. Whilst I may well make some tweaks or changes, this does exactly what I want, and I'd encourage anyone interested to fork the code and build something that makes them happy.
26 Jul 2025 5:01pm GMT
25 Jul 2025
Planet GNOME
Nancy Wairimu Nyambura: Outreachy Update:Understanding and Improving def-extractor.py
Introduction
Over the past couple of weeks, I have been working on understanding and improving def-extractor.py, a Python script that processes dictionary data from Wiktionary to generate word lists and definitions in structured formats. My main task has been to refactor the script to use configuration files instead of hardcoded values, making it more flexible and maintainable.
In this blog post, I'll explain:
- What the script does
- How it works under the hood
- The changes I made to improve it
- Why these changes matter
What Does the Script Do?
At a high level, this script processes huge JSONL (JSON Lines) dictionary dumps, like the ones from Kaikki.org , and filters them down into clean, usable formats.
The def-extractor.py script takes raw dictionary data (from Wiktionary) and processes it into structured formats like:
- Filtered word lists (JSONL)
- GVariant binary files (for efficient storage)
- Enum tables (for parts of speech & word tags)
It was originally designed to work with specific word lists (Wordnik, Broda, and a test list), but my goal is to make it configurable so it could support any word list with a simple config file.
How It Works (Step by Step)
1. Loading the Word List
The script starts by loading a word list (e.g., Wordnik's list of common English words). It filters out invalid words (too short, contain numbers, etc.) and stores them in a hash table for quick lookup.
2. Filtering Raw Wiktionary Data
Next, it processes a massive raw-wiktextract-data.jsonl file (the Wiktionary dump) and keeps only entries that:
- Match words from the loaded word list
- Are in the correct language (e.g., English)
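In essence, that filtering step looks something like the sketch below (function name and sample data are illustrative; the word and lang_code fields are how wiktextract entries identify the headword and language):

```python
import json

# Illustrative filtering step: keep JSONL entries whose word is in the
# word list and whose language code matches.
def filter_entries(lines, word_list, lang_code="en"):
    kept = []
    for line in lines:
        entry = json.loads(line)
        if entry.get("lang_code") == lang_code and entry.get("word") in word_list:
            kept.append(entry)
    return kept

words = {"apple", "banana"}
raw = [
    '{"word": "apple", "lang_code": "en", "pos": "noun"}',
    '{"word": "apple", "lang_code": "fr", "pos": "noun"}',
    '{"word": "zebra", "lang_code": "en", "pos": "noun"}',
]
print([e["word"] for e in filter_entries(raw, words)])  # ['apple']
```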

3. Generating Structured Outputs
After filtering, the script creates:
- Enum tables (JSON files listing parts of speech & word tags)
- GVariant files (binary files for efficient storage and fast lookup)

What Changes have I Made?
1. Added Configuration Support
Originally, the script used hardcoded paths and settings. I modified it to read from .config files, allowing users to define:
- Source word list file
- Output directory
- Word validation rules (min/max length, allowed characters)
Before (Hardcoded):
WORDNIK_LIST = "wordlist-20210729.txt"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
After (Configurable):
[Word List]
Source = my-wordlist.txt
MinLength = 2
MaxLength = 20
2. Improved File Path Handling
Instead of hardcoding paths, the script now constructs them dynamically:
output_path = os.path.join(config.word_lists_dir, f"{config.id}-filtered.jsonl")
Why Do These Changes Matter?
Flexibility - Now supports any word list via config files.
Maintainability - No more editing code to change paths or rules.
Scalability - Easier to add new word lists or languages.
Consistency - All settings live in config files.
Next Steps?
1. Better Error Handling
I am working on adding checks for:
- Missing config fields
- Invalid word list files
- Incorrectly formatted data
2. Unified Word Loading Logic
There are separate functions (load_wordnik(), load_broda()). I want to merge them into one load_words(config) that works for any word list.
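A possible shape for that unified loader, with all names hypothetical and the validation rules supplied by the config rather than hardcoded:

```python
import re

# Hypothetical merged loader: one function, rules read from the config
def load_words(config, lines):
    words = set()
    for line in lines:
        word = line.strip()
        # Keep only words of the configured length made of ASCII letters
        if (config["min_length"] <= len(word) <= config["max_length"]
                and re.fullmatch(r"[A-Za-z]+", word)):
            words.add(word)
    return words

cfg = {"min_length": 2, "max_length": 20}
print(sorted(load_words(cfg, ["apple", "x", "3rd", "banana"])))
# ['apple', 'banana']
```

Passing an iterable of lines (rather than a filename) keeps the function easy to test; the caller can hand it `open(path)` directly.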
3. Refactor legacy code for better structure
Try It Yourself
- Download the script: [wordlist-Gitlab]
- Create a .conf config file
- Run:
python3 def-extractor.py --config my-wordlist.conf filtered-list
Happy coding!
25 Jul 2025 2:05pm GMT
Nancy Nyambura: Outreachy Update:Understanding and Improving def-extractor.py
Introduction
Over the past couple of weeks, I have been working on understanding and improving def-extractor.py, a Python script that processes dictionary data from Wiktionary to generate word lists and definitions in structured formats. My main task has been to refactor the script to use configuration files instead of hardcoded values, making it more flexible and maintainable.
In this blog post, I'll explain:
- What the script does
- How it works under the hood
- The changes I made to improve it
- Why these changes matter
What Does the Script Do?
At a high level, this script processes huge JSONL (JSON Lines) dictionary dumps, like the ones from Kaikki.org , and filters them down into clean, usable formats.
The def-extractor.py script takes raw dictionary data (from Wiktionary) and processes it into structured formats like:
- Filtered word lists (JSONL)
- GVariant binary files (for efficient storage)
- Enum tables (for parts of speech & word tags)
It was originally designed to work with specific word lists (Wordnik, Broda, and a test list), but my goal is to make it configurable so it could support any word list with a simple config file.
How It Works (Step by Step)
1. Loading the Word List
The script starts by loading a word list (e.g., Wordnik's list of common English words). It filters out invalid words (too short, contain numbers, etc.) and stores them in a hash table for quick lookup.
2. Filtering Raw Wiktionary Data
Next, it processes a massive raw-wiktextract-data.jsonl file (theWiktionary dump) and keeps only entries that:
- Match words from the loaded word list
- Are in the correct language (e.g., English)

3. Generating Structured Outputs
After filtering, the script creates:
- Enum tables (JSON files listing parts of speech & word tags)
- GVariant files (binary files for efficient storage and fast lookup)

What Changes have I Made?
1. Added Configuration Support
Originally, the script uses hardcoded paths and settings. I modified it to read from .config files, allowing users to define:
- Source word list file
- Output directory
- Word validation rules (min/max length, allowed characters)
Before (Hardcoded):
WORDNIK_LIST = "wordlist-20210729.txt"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
After (Configurable):
[Word List]
Source = my-wordlist.txt
MinLength = 2
MaxLength = 20
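With Python's built-in configparser, reading such a file is straightforward. This sketch follows the section and key names from the example above; the real script's option names and return shape may differ:

```python
import configparser

def read_word_list_config(path):
    """Read a [Word List] section from a .conf file into a plain dict."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    section = cfg["Word List"]
    return {
        "source": section.get("Source"),
        "min_length": section.getint("MinLength", fallback=2),
        "max_length": section.getint("MaxLength", fallback=20),
    }
```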
2. Improved File Path Handling
Instead of hardcoding paths, the script now constructs them dynamically:
output_path = os.path.join(config.word_lists_dir, f"{config.id}-filtered.jsonl")
Why Do These Changes Matter?
- Flexibility: now supports any word list via config files.
- Maintainability: no more editing code to change paths or rules.
- Scalability: easier to add new word lists or languages.
- Consistency: all settings are in config files.
Next Steps?
1. Better Error Handling
I am working on adding checks for:
- Missing config fields
- Invalid word list files
- Incorrectly formatted data
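Those checks could be sketched as follows (the required fields and error types here are illustrative assumptions, not the script's code):

```python
import os

REQUIRED_KEYS = ("source", "min_length", "max_length")  # illustrative

def validate_config(config):
    """Raise a descriptive error up front instead of failing deep inside the pipeline."""
    for key in REQUIRED_KEYS:
        if config.get(key) is None:
            raise ValueError(f"missing config field: {key}")
    if config["min_length"] > config["max_length"]:
        raise ValueError("MinLength must not exceed MaxLength")
    if not os.path.exists(config["source"]):
        raise FileNotFoundError(f"word list file not found: {config['source']}")
```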
2. Unified Word Loading Logic
There are separate functions (load_wordnik(), load_broda()). I want to merge them into one load_words(config) that works for any word list.
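The unified loader could be as simple as the following sketch (hypothetical; the real load_wordnik()/load_broda() do more work than this):

```python
def load_words(config):
    """One config-driven loader replacing the per-list functions."""
    with open(config["source"], encoding="utf-8") as f:
        return {
            w for w in (line.strip() for line in f)
            if config["min_length"] <= len(w) <= config["max_length"]
        }
```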
3. Refactor legacy code for better structure
Try It Yourself
- Download the script: [wordlist-Gitlab]
- Create a .conf config file
- Run:
python3 def-extractor.py --config my-wordlist.conf filtered-list
Happy coding!
25 Jul 2025 2:05pm GMT
This Week in GNOME: #209 GUADEC 2025
Update on what happened across the GNOME project during the last two weeks from July 11 to July 25.
GUADEC 2025 is currently ongoing! You can find more details on events.gnome.org.
GNOME Core Apps and Libraries
GTK ↗
Cross-platform widget toolkit for creating graphical user interfaces.
sp1rit says
We have managed to find & circumvent the bug in the Adreno Android driver that was causing the GTK OpenGL renderer to break.
This means that GTK should now work with GL rendering for users with an Adreno GPU (i.e. Qualcomm SOC) on Android.
Calendar ↗
A simple calendar application.
Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️⚧️ says
After two weeks of writing, revising, and trying to make everything as digestible as possible, I finally published "GNOME Calendar: A New Era of Accessibility Achieved in 90 Days", where I explain in detail the steps we took to turn GNOME Calendar from an app that was literally unusable with a keyboard and screen reader to an app that is (finally) accessible to keyboard and screen reader users as of GNOME 49!
https://tesk.page/2025/07/25/gnome-calendar-a-new-era-of-accessibility-achieved-in-90-days/
Third Party Projects
Alexander Vanhee reports
After coming back from vacation, I was able to make quite a bit of progress this week in Gradia, featuring two new additions.
The first is the much requested cropping tool. It took me some time to figure out exactly what I wanted from such a tool in the context of Gradia, with its background layers, annotations, and all. I finally implemented something that is (hopefully) nice to use.
The second feature is source snippets. It's designed to make it easier to share a piece of code you're particularly proud of on social media. It lets you control things like line width, padding, themes, and the like, without having to temporarily adjust those settings in your code editor.
You can download Gradia on Flathub or the Snap Store, for those who prefer that.
Phosh ↗
A pure wayland shell for mobile devices.
Guido announces
In our continued effort to make typing on phones easier and faster, stevia (an on-screen keyboard for phosh) can now dynamically adjust to the output scale when in portrait mode. This ensures that the on-screen keyboard remains at the same physical size independently of the screen's actual mode and scale. It can also add an empty space below the actual keys to make typing easier on taller phones. The images show the OSK at scale 2.5 and 3.
Parabolic ↗
Download web video and audio.
Nick reports
Parabolic V2025.7.0 is here! This release contains some new features and bug fixes.
Here's the full changelog:
- Redesigned the Windows app using WinUI 3
- Added the ability to change the application's translation language
- Added the ability to remember video and audio formats individually for each file type
- Fixed an issue where pressing enter in the download dialog would not start the download
- Fixed an issue where configuration files were not stored properly for the portable Windows build
- Fixed an issue where downloads did not pause and resume on Windows
- Fixed an issue where there would sometimes be leftover separators in the downloads list on GNOME
- Fixed some elements of the GNOME UI as we get closer to joining GNOME Circle
- Updated yt-dlp
Flare ↗
Chat with your friends on Signal.
schmiddi says
Flare version 0.17.0 was released. This switches how Flare stores its data to sqlite. This change is not backwards compatible, you will therefore need to relink after updating Flare. This also fixes a bug where contacts were displayed as phone numbers or "Unknown Contact" instead of their name.
Shell Extensions
axet announces
New system monitor gnome shell extension. Simple UI. Compact view. Based on gnome system monitor. https://extensions.gnome.org/extension/8272/system-monitor/
GNOME Foundation
steven says
2025-07-12 Foundation Update
- new treasurers!
- pmOS joins the Advisory Board
- donate.gnome.org gets better
- Framework Computer & Slimbook, our new friends
- Annual Report and GUADEC talk
- "It's Not 1998"
- Office Hours
https://blogs.gnome.org/steven/2025/07/12/2025-07-12-foundation-update/
steven reports
2025-07-18 Foundation Report
- Annual Report
- 501c3 nonsense
- "Hackers"
- A rant about retaining capital in non-profits
- Private spaces for Community Health
- Preliminary board/officer assignments
- Banking: resilience & bookkeeping
- GUADEC cometh
https://blogs.gnome.org/steven/2025/07/21/2025-07-18-foundation-update/
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
25 Jul 2025 12:00am GMT
Hari Rana: GNOME Calendar: A New Era of Accessibility Achieved in 90 Days
Please consider supporting my effort in making GNOME apps accessible for everybody. Thanks!
Introduction
There is no calendaring app that I love more than GNOME Calendar. The design is slick, it works extremely well, it is touchpad friendly, and best of all, the community around it is just full of wonderful developers, designers, and contributors worth collaborating with, especially with the recent community growth and engagement over the past few years. Georges Stavracas and Jeff Fortin Tam are some of the best maintainers I have ever worked with. I cannot express how thankful I am for Jeff's underappreciated superhuman capabilities to voluntarily coordinate huge initiatives and issue trackers.
One of Jeff's many initiatives is gnome-calendar#1036: the accessibility initiative, which is a big and detailed list of issues related to accessibility. In my opinion, GNOME Calendar's biggest problem was the lack of accessibility support, which made the app completely unusable for people exclusively using a keyboard, or people relying on assistive technologies.
This article will explain in detail the fundamental issues that held back accessibility in GNOME Calendar since the very beginning of its existence (12 years at a minimum), the progress we have made with accessibility as well as our thought process in achieving it, and the present and future of accessibility in GNOME Calendar.
Calendaring Complications
On a desktop or tablet form factor, GNOME Calendar has a month view and a week view, both of which are a grid comprising cells representing a time frame. In the month view, each row is a week, and each cell is a day. In the week view, the time frame within cells varies with the zoom level.
There are mainly two reasons GNOME Calendar was inaccessible: firstly, the accessibility tree cannot represent the logically and structurally complicated workflow and design of a typical calendaring app; and secondly, reducing overhead as much as possible has significant negative implications for accessibility.
Accessibility Trees Are Insufficient for Calendaring Apps
The accessibility tree is rendered insufficient for calendaring apps, mainly because events are extremely versatile. Tailoring the entire interface and experience around that versatility pushes us to explore alternate and custom structures.
Events are highly flexible, because they are time-based. An event can last a couple of minutes, but it can as well last for hours, days, weeks, or even months. It can start in the middle of a day and end on the upcoming day; it can start by the end of a week and end at the beginning of the upcoming week. Essentially, events are limitless.
Since events can last more than a day, cell widgets cannot hold any event widget, because otherwise event widgets would not be capable of spanning across cells. As such, event widgets are overlaid on top of cell widgets and positioned based on the coordinates, width, and height of each widget. However, because cell widgets cannot hold a meaningful link with event widgets, there is no way to easily ensure there is a link between an event widget and a cell widget.
As a consequence, the visual representation of GNOME Calendar is fundamentally incompatible with accessibility trees. GNOME Calendar's month and week views are visually 2.5 dimensional: a grid layout by itself is structurally two-dimensional, but overlaying event widgets that are capable of spanning across cells adds an additional layer. Conversely, accessibility trees are fundamentally two-dimensional, so GNOME Calendar's visual representation cannot be sufficiently adapted into a two-dimensional logical tree.
In summary, accessibility trees are insufficient for calendaring apps, because the versatility and high requirements of events prevents us from linking cell widgets with event widgets, so event widgets are instead overlaid on top, consequently making the visual representation 2.5 dimensional; however, the additional layer makes it fundamentally impossible to adapt to a two-dimensional accessibility tree.
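A toy model (purely illustrative, not GNOME Calendar's actual code) makes the problem concrete: a multi-day event maps onto several grid cells, possibly across rows, so no single cell can own the event widget in a tree:

```python
from datetime import date, timedelta

def cells_spanned(event_start, event_end, month_start, columns=7):
    """Return the (row, column) grid cells a multi-day event overlays
    in a simplified month view (ignoring weekday alignment)."""
    cells = []
    day = event_start
    while day <= event_end:
        offset = (day - month_start).days
        cells.append((offset // columns, offset % columns))
        day += timedelta(days=1)
    return cells
```

An event near the end of a week spans two rows at once, which is exactly the shape a strictly hierarchical accessibility tree cannot express.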
Negative Implications of Accessibility due to Maximizing Performance
Unlike the majority of apps, GNOME Calendar's layout and widgetry consist of custom widgets and complex calculations according to several factors, such as:
- the size of the window;
- the height and width of each cell widget to figure out if one or more event widgets can perceptibly fit inside a cell;
- the position of each event widget to figure out where to position the event widget, and where to reposition all the event widgets around it if necessary;
- what went wrong in my life to work on a calendaring app written in C.
Due to these complex calculations, along with the fact that it is also possible to have tens, hundreds, or even thousands of events in a calendar app, calendar apps always rely on maximizing performance as much as possible, while being at the mercy of the framework or toolkit.
One way to minimize that problem is by creating custom widgets that are minimal and only fulfill the purpose we absolutely need. However, this comes at the cost of needing to reimplement most functionality, including most, if not all accessibility features and semantics, such as keyboard focus, which severely impacted accessibility in GNOME Calendar.
While GTK's widgets are great for general purpose use-cases and do not have any performance impact with limited instances of them, performance starts to deteriorate on weaker systems when there are hundreds, if not thousands of instances in the view, because they contain a lot of functionality that event widgets may not need.
In the case of the GtkButton widget, it has a custom multiplexer, it applies different styles for different child types, it implements the GtkActionable interface for custom actions, and more technical characteristics. Other functionality-based widgets will have more capabilities that might impact performance with hundreds of instances.
To summarize, GNOME Calendar reduces overhead by creating minimal custom widgets that fulfill a specific purpose. This unfortunately severely impacted accessibility throughout the app and made it unusable with a keyboard, as some core functionalities, accessibility features and semantics were never (re)implemented.
Improving the Existing Experience
Despite being inaccessible as an app altogether, not every aspect was inaccessible in GNOME Calendar. Most areas throughout the app worked with a keyboard and/or assistive technologies, but they needed some changes to improve the experience. For this reason, this section is reserved specifically for mentioning the aspects that underwent a lot of improvements.
Improving Focus Rings
The first major step was to improve the focus ring situation throughout GNOME Calendar. Since the majority of widgets are custom widgets, many of them require manually applying focus rings. !563 addresses that by declaring custom CSS properties to use as a base for focus rings. !399 tweaks the style of the reminders popover in the event editor dialog, with the addition of a focus ring.
We changed the behavior of the event notes box under the "Notes" section in the event editor dialog. Every time the user focuses on the event notes box, the focus ring appears and outlines the entire box until the user leaves focus. This was accomplished by subclassing AdwPreferencesRow to inherit its style, then applying the .focused class whenever the user focuses on the notes.
Improving the Calendar Grid
The calendar grid on the sidebar suffered from several issues when it came to keyboard navigation, namely:
- pressing ↹ would focus the next cell in the grid up until the last cell;
- when out of bounds, there would be no auditory feedback;
- on the last row, pressing ↓ would focus a blank element; and
- pressing → in left-to-right languages, or ← in right-to-left languages, on the last column would move focus to a completely different widget.
While the calendar grid can be interacted with using a keyboard, the keyboard experience was far from desired. !608 addresses these issues by overriding the Gtk.Widget.focus () virtual method. Pressing ↹ or Shift+↹ skips the entire grid, and the grid is wrapped to allow focusing between the first and last columns with ← and →, while notifying the user when out of bounds.
Improving the Calendar List Box
The calendar list box holds a list of available calendars, all of which can be displayed or hidden from the week view and month view. Each row is a GtkListBoxRow that holds a GtkCheckButton.
The calendar list box had several problems in regards to keyboard navigation and the information each row provided to assistive technologies.
The user was required to press ↹ a second time to get to the next row in the list. To elaborate: pressing ↹ once focused the row; pressing it another time moved focus to the check button within the row (bad); and finally pressing a third time focused the next row.
Row widgets had no actual purpose besides toggling the check button upon activation. Similarly, the only use for a check button widget inside each row was to display the "check mark" icon if the calendar was displayed. This meant that the check button widget held all the desired semantics, such as the "checkbox" role and the "checked" state; but worst of all, it was getting focus. Essentially, the check button widget was handling responsibilities that should have been handled by the row.
Both inconveniences were addressed by !588. The check button widget was replaced with a check mark icon using GtkImage, a widget that does not grab focus. The accessible role of the row widget was changed to "checkbox", and the code was adapted to handle the "checked" state.
Implementing Accessibility Functionality
Accessibility is often absolute: there is no 'in-between' state; either the user can access functionality, or they cannot, which can potentially make the app completely unusable. This section goes in depth with the widgets that were not only entirely inaccessible but also rendered GNOME Calendar completely unusable with a keyboard and assistive technology.
Making the Event Widget Accessible
GcalEventWidget, the name of the event widget within GNOME Calendar, is a colored rectangular toggle button containing the summary of an event.

Activating it displays a popover that displays additional detail for that event.
GcalEventWidget subclasses GtkWidget.
The biggest problem in GNOME Calendar, which also made it completely impossible to use the app with a keyboard, was the lack of a way to focus and activate event widgets with a keyboard. Essentially, one would be able to create events, but there would be no way to access them in GNOME Calendar.
Quite literally, this entire saga began all thanks to a dream I had, which was to make GcalEventWidget subclass GtkButton instead of GtkWidget directly. The thought process was: GtkButton already implements focus and activation with a keyboard, so inheriting it should therefore inherit focus and activation behavior.
In merge request !559, the initial implementation indeed subclassed GtkButton. However, that implementation did not go through, due to the reason outlined in § Negative Implications of Accessibility due to Maximizing Performance.
Despite that, the initial implementation significantly helped us figure out exactly what was missing from GcalEventWidget: specifically, setting the Gtk.Widget:receives-default and Gtk.Widget:focusable properties to "True". Gtk.Widget:receives-default makes it so the widget can be activated however desired, and Gtk.Widget:focusable allows it to become focusable with a keyboard. So, instead of subclassing GtkButton, we reimplemented GtkButton's functionality in order to maintain performance.
While preliminary support for keyboard navigation was added into GcalEventWidget, accessible semantics for assistive technologies like screen readers were severely lacking. This was addressed by !587, which sets the role to "toggle-button", to convey that GcalEventWidget is a toggle button. The merge request also indicates that the widget has a popup for the event popover, and has the means to update the "pressed" state of the widget.
In summary, we first made GcalEventWidget accessible with a keyboard by reimplementing some of GtkButton's functionality. Then, we later added the means to appropriately convey information to assistive technologies. This was the worst offender, and was the primary reason why GNOME Calendar was unusable with a keyboard, but we finally managed to solve it!
Making the Month and Year Spin Buttons Accessible
GcalMultiChoice is the name of the custom spin button widget used for displaying and cycling through months and/or years.
It comprises a "decrement" button at the start, a flat toggle button in the middle containing a label that displays the value, and an "increment" button at the end. Only the button in the middle can gain keyboard focus throughout GcalMultiChoice.

In some circumstances, GcalMultiChoice can display a popover for increased granularity.
GcalMultiChoice was not interactable with a keyboard, because:
- it did not react to ↑ and ↓ keys; and
- the "decrement" and "increment" buttons were not focusable.
For a spin button widget, the "decrement" and "increment" buttons should generally remain unfocusable, because ↑ and ↓ keys already accomplish that behavior. Furthermore, GtkSpinButton's "increment" (+) and "decrement" (-) buttons are not focusable either, and the Date Picker Spin Button Example by the ARIA Authoring Practices Guide (APG) avoids that functionality as well.
However, since GcalMultiChoice did not react to ↑ and ↓ keys, having the "decrement" and "increment" buttons be focusable would have been a somewhat acceptable workaround. Unfortunately, since those buttons were not focusable, and ↑ and ↓ keys were not supported, it was impossible to increment or decrement values in GcalMultiChoice with a keyboard without resorting to workarounds.
Additionally, GcalMultiChoice lacked the semantics to communicate with assistive technologies. So, for example, a screen reader would never say anything meaningful.
All of the above problems remained problems until merge request !603. For starters, it implements GtkAccessible and GtkAccessibleRange, and then implements keyboard navigation.
Implementing GtkAccessible and GtkAccessibleRange
The merge request implements the GtkAccessible interface to retrieve information from the flat toggle button.
Fundamentally, since the toggle button was the only widget capable of gaining keyboard focus throughout GcalMultiChoice, this caused two distinct problems.
The first issue was that assistive technologies only retrieved semantic information from the flat toggle button, such as the type of widget (accessible role), its label, and its description. However, the toggle button was semantically just that: a toggle button. The information it provided to assistive technologies was therefore misleading, because it presented itself as a toggle button, not a spin button!
So, the solution to this is to strip the semantics from the flat toggle button. Setting its accessible role to "none" makes assistive technologies ignore its information. Then, setting the accessible role of the top-level (GcalMultiChoice) to "spin-button" gives it semantic meaning, which allows the widget to appropriately convey this information when focused.
This led to the second issue: Assistive technologies only retrieved information from the flat toggle button, not from the top-level. Generally, assistive technologies retrieve information from the focused widget. Since the toggle button was the only widget capable of gaining focus, it was also the only widget providing information to them; however, since its semantics were stripped, it had no information to share, and thus assistive technologies would retrieve absolutely nothing.
The solution to this is to override the Gtk.Accessible.get_platform_state () virtual method, which allows us to bridge communication between the states of child widgets and the top-level widget. In this case, both GcalMultiChoice and the flat toggle button share the state: if the flat toggle button is focused, then GcalMultiChoice is considered focused; and since GcalMultiChoice is focused, assistive technologies can then retrieve its information and state.
The last issue that needed to be addressed was that GcalMultiChoice was still not providing any of the values to assistive technologies. The solution to this is straightforward: implementing the GtkAccessibleRange interface, which makes it necessary to set values for the following accessible properties: "value-max", "value-min", "value-now", and "value-text".
After all this effort, GcalMultiChoice now provides correct semantics to assistive technologies. It appropriately reports its role, the current textual value, and whether it contains a popover.
To summarize:
- The flat toggle button was the only widget conveying information to assistive technologies, as it was the only widget capable of gaining focus and providing semantic information. To solve this, its semantics were stripped away.
- The top-level, being GcalMultiChoice, was assigned the "spin-button" role to provide semantics; however, it was still incapable of providing information to assistive technologies, because it was never getting focused. To solve this, the state of the toggle button, including the focused state, carried over to the top-level to allow assistive technologies to retrieve information from the top-level.
- GcalMultiChoice still did not provide its values to assistive technologies. This is solved by implementing the GtkAccessibleRange interface.
Providing Top-Level Semantics to a Child Widget As Opposed to the Top-Level Widget Is Discouraged
As you read through the previous section, you may have asked yourself: "Why go through all of those obstacles and complications when you could have just re-assigned the flat toggle button as "spin-button" and not worry about the top-level's role and focus state?"
Semantics should be provided by the top-level, because they are represented by the top-level. What makes GcalMultiChoice a spin button is not just the flat toggle button, but it is the combination of all the child widgets/objects, event handlers (touch, key presses, and other inputs), accessibility attributes (role, states, relationships), widget properties, signals, and other characteristics. As such, we want to maintain that consistency for practically everything, including the state. The only exception to this is widgets whose sole purpose is to contain one or more elements, such as GtkBox.
This is especially important for when we want it to communicate with other widgets and APIs, such as the Gtk.Widget::state-flags-changed signal, the Gtk.Widget.is_focus () method, and other APIs where it is necessary to have the top-level represent data accurately and behave predictably. In the case of GcalMultiChoice, we set accessible labels at the top-level. If we were to re-assign the flat toggle button's role as "spin-button" and set the accessible label on the top-level, assistive technologies would only retrieve information from the toggle button while ignoring the labels defined at the top-level.
For the record, GtkSpinButton also overrides Gtk.Accessible.get_platform_state ():
static gboolean
gtk_spin_button_accessible_get_platform_state (GtkAccessible              *self,
                                               GtkAccessiblePlatformState  state)
{
  return gtk_editable_delegate_get_accessible_platform_state (GTK_EDITABLE (self), state);
}

static void
gtk_spin_button_accessible_init (GtkAccessibleInterface *iface)
{
  …
  iface->get_platform_state = gtk_spin_button_accessible_get_platform_state;
}
To be fair, assigning the "spin-button" role to the flat toggle button is unlikely to cause major issues, especially for an app. Re-assigning the flat toggle button was my first instinct, and the initial implementation did just that as well. I was completely unaware of the Gtk.Accessible.get_platform_state () virtual method before finalizing the merge request, so I initially thought that was the correct way to do it. Even if the toggle button had the "spin-button" role instead of the top-level, it would not have stopped us from implementing workarounds, such as a getter method that retrieves the flat toggle button so we can then manipulate it.
In summary, we want to provide semantics at the top-level, because they are structurally part of it. This comes with the benefit of making the widget easier to work with, because APIs can directly communicate with it, instead of resorting to workarounds.
The Now and Future of Accessibility in GNOME Calendar
All these accessibility improvements will be available in GNOME 49, but you can download and install the pre-release from the "Nightly GNOME Apps" DLC Flatpak remote at nightly.gnome.org.
In the foreseeable future, I want to continue working on !564, to make the month view itself accessible with a keyboard, as seen in the following:
A screen recording demoing keyboard navigation within the month view. Focus rings appear and disappear as the user moves focus between cells. Going out of bounds in the vertical axis scrolls the view to the direction, and going out of bounds in the horizontal axis moves focus to the logical sibling.
However, it is already adding 640 lines of code, and I can only see it increasing over time. We also want to make cells in the week view accessible, but this will be a monstrous merge request as well, just like the above one.
Most importantly, we want (and need) to collaborate and connect with people who rely on assistive technologies to use their computer, especially since nobody currently working on GNOME Calendar relies on assistive technologies themselves.
Conclusion
I am overwhelmingly satisfied with the progress we have made with accessibility in GNOME Calendar in six months. Just a year ago, if I had been asked what needs to be done to incorporate accessibility features in GNOME Calendar, I would have shamefully said "dude, I don't know where to even begin"; but as of today, we somehow managed to turn GNOME Calendar into an actual, usable calendaring app for people who rely on assistive technologies and/or a keyboard.
Since this is still Disability Pride Month, and GNOME 49 is not out yet, I encourage you to get the alpha release of GNOME Calendar on the "Nightly GNOME Apps" Flatpak remote at nightly.gnome.org. The alpha release is in a state where the gays with disabilities can organize and do crimes using GNOME Calendar 😎 /j
25 Jul 2025 12:00am GMT
24 Jul 2025
Planet GNOME
Philip Withnall: A brief parental controls update
Over the past few weeks, Ignacy and I have made good progress on the next phase of features for parental controls in GNOME: a refresh of the parental controls UI, support for screen time limits for child accounts, and basic web filtering support are all in progress. I've been working on the backend stuff, while Ignacy has been speedily implementing everything needed in the frontend.
Ignacy is at GUADEC, so please say hi to him! The next phase of parental controls work will involve changes to gnome-control-center and gnome-shell, so he'll be popping up all over the stack.
I'll try and blog more soon about the upcoming features and how they're implemented, because there are necessarily quite a few moving parts to them.
24 Jul 2025 10:13pm GMT
Sjoerd Stendahl: A Brief History of Graphs; My Journey Into Application Development
It's been a while since I originally created this page. I've been planning for a while (over a year) to write an article like this, but have been putting it off for one reason or another. With GUADEC going on while writing this, listening to some interesting talks on YouTube, I thought this was as good a time as any to actually submit my first post on this page. In this article, I'll simply lay out the history of Graphs, how it came to be, how it evolved into an actually useful program for some, and what is on the horizon. Be aware that any opinions expressed are my own, and do not necessarily reflect those of my employer, any contributors or the GNOME Foundation itself.
I would also like to acknowledge that while I founded Graphs, the application I'm mainly talking about, I'm not the only one working on the project. We've got a lot of input from the community. And I maintain the project with Christoph, the co-maintainer of the project. Any credit towards this particular program, is shared credit.
Motivations
As with many open source projects, I originally developed Graphs because I had a personal itch to scratch. At the time I was working on my PhD, and I regularly had to plot data to prepare for presentations, as well as to do some simple manipulations. Things like cutting away the first few degrees from an X-ray reflectivity measurement, normalizing data, or shifting data to show multiple measurements on the same graph.
At the time, I had a license for OriginLabs, which was an issue for multiple reasons. Pragmatically, it only works on Windows, and even if it had a Linux client, we had a single license coupled to my work PC in my office, which I didn't tend to use a lot. Furthermore, the software itself is an interface nightmare, and doing simple operations like cutting data or normalization is not exactly intuitive.
My final issue was more philosophical, which is that I have fundamental problems with using proprietary software in scientific work. It is bluntly absurd how we have rigorous and harsh rules about showing your work and making your research replicable in scientific articles (which is fair), but as soon as software is involved it's suddenly good enough when a private entity tells us "just trust me bro". Let it be clear that I have no doubt that a proprietary application actually implements the algorithms that it says it does according to their manual. But you are still replacing a good chunk of your article with a black box, which in my view is fundamentally unscientific. There could be bugs, and subtleties could be missed. Let alone the fact that replicability is just completely thrown out of the window if you delegate all your data processing to a magic black box. This is an issue where a lot of people I talk to tend to agree with me on principle, yet very few people actually care enough to move away from proprietary solutions. Whenever people use open-source software for their research, I found it was typically a coincidence based on the merit that it was free, rather than a philosophical or ideological choice.
Either way, philosophically, I wanted to do my data reduction with complete transparency, and pragmatically I simply needed something that just plots my data and allows me to do basic transformations. For years I had asked myself questions like "why can't I just visually select part of the data, and then press a 'cut' button?" and "why do all these applications insist on over-complicating this?" Whilst I still haven't found an answer to the second question, I had picked up programming as a hobby at that stage, so I decided to answer my first question with a "fine, I'll do it myself". But first, let's start with what drove me to start working on applications like these.
Getting into application development
Whilst I had developed a lot in MATLAB during my master's (as well as TI-Basic in high school), my endeavor in application development started mostly during my PhD, beginning with some very simple applications, like a calculator tool for growth rates in magnetron sputtering based on calibration measurements. Another application that I wrote during that time was a tool that simply plotted the logs we got from our magnetron sputtering machine. A fun fact here is that my logging software also kept track of how the software running our magnetron sputtering chambers slowed down over time. Basically, our machine was steered using LabVIEW, and after about 1000 instructions or so it started to slow down a bit. So if we told it to do something for 24 seconds, it would actually take 24.1 seconds, for instance. At one point a reviewer commented that they didn't believe we could get such a delay with modern computers, so it was nice to have the receipts to back this up. Still, the conclusion here should be that LabVIEW is not exactly great for steering hardware directly, but it's not me that's calling the shots.
My first "bigger" project was something in between a database (all stored in a CSV file) and a plotting program. Basically, for every sample I created in the lab, I added an item storing all relevant information, including links to the measurements I did on the sample (like sane people would do in an Excel sheet). Then, using a simple list of all samples, I could quickly plot my data for the measurements I wanted. I also had some functionality like cutting away the start of the data, normalizing the data, or calculating sample thickness based on the measurement. In a sense, this was Graphs 0.1. The code is still online, if someone wants to laugh at a physicist's code written without any real developer experience.
The second "big" tool that I created during that time was GIScan. This laid the foundation for my very favourite article that I wrote during my PhD. Essentially, we got 24 hours to measure as many samples as we could at a synchrotron facility. So that's exactly what we did, almost blindly. Then we came home with thousands of measurements on a few hundred samples, and it was time to analyze. At the very first stage, I did some basic analysis using Python. All filenames were tagged somewhat strategically, so I could use regex to isolate the measurement series and quite quickly find the needle in the haystack. Basically, I found which 20 measurements or so were interesting for us and where to look further. The only problem: the work we were doing was extremely niche, and the available data reduction software, which would do things like background subtraction and coordinate conversion for us (from pixels to actual physical coordinates), was barely functional and not made for our type of measurements. So I wrote my own data reduction software, GIScan. Explaining what makes it actually incredibly useful would require an article series about the physics behind this, but my entire analysis hinged on it. GIScan is also available as GPLv3-licensed software, but here too I will use my right to remain silent on any questions about the crimes committed in the code quality itself. Fun fact: all graphs in the mentioned article were made using Graphs and Inkscape, and most of the figures themselves are available under a CC-BY license as part of my PhD. I asked about using a CC-BY-SA license, but the university strongly recommended against it, as they felt it could make the figures more difficult for others to use in publications if I cared about sharing my work. Basically, journals are the biggest parasites in academia.
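As a rough illustration of that filename-based triage, here is a small Python sketch. The naming scheme and the pattern below are hypothetical, invented purely for the example, not the actual scheme used at the beamline:

```python
import re

# Hypothetical naming scheme: sample id, measurement type and run index
# encoded in the filename, e.g. "S042_GISAXS_run03.dat".
FILENAME_PATTERN = re.compile(r"^(?P<sample>S\d+)_(?P<kind>[A-Z]+)_run(?P<run>\d+)\.dat$")

def group_measurements(filenames):
    """Group filenames by sample id, skipping files outside the scheme."""
    groups = {}
    for name in filenames:
        match = FILENAME_PATTERN.match(name)
        if match is None:
            continue  # e.g. stray notes or log files
        groups.setdefault(match.group("sample"), []).append(name)
    return groups

files = ["S042_GISAXS_run01.dat", "S042_GISAXS_run02.dat",
         "S007_XRR_run01.dat", "notes.txt"]
print(group_measurements(files))
```

With a few thousand files, one pass like this narrows the haystack down to a handful of measurement series worth inspecting by hand.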
Then we get to Graphs. This was the last, and biggest, program that I wrote during that time in my career. At the very beginning, I actually started in Qt. Not because I preferred it as a toolkit (I didn't, and still don't), but because it's easier to port to Windows and spread to my peers. Quite quickly I got to the state where I could easily import two-column data and do simple manipulations on it, like normalizing the data. It was barebones, but really useful for my workflow. However, as this quickly turned into a passion project, I decided to do the selfish thing and rewrite the entire thing in the toolkit I personally preferred: GTK with libadwaita. It looked beautiful (well, in the same way a newborn baby is beautiful to their parents), and it integrated very nicely into my own desktop. In fact, I was so pleased with it that I felt like sharing it online; the original Reddit post can still be found here. This marked the very first release of Graphs 1.0, which can be seen in its full glory below, and this is essentially where the fun began.

The power of community
When I originally developed this for my personal use, I simply called it "Data Manipulator", which I had shortened to DatMan. Quite early in the process, even before I shared the project on Reddit, Hari Rana (aka TheEvilSkeleton) filed an issue asking me to consider naming the project in accordance with the GNOME HIG. After some small discussions there, we settled on Graphs. This was my first experience with feedback or contributions from the community, and something I am still grateful for; it's a much nicer name that fits in with GNOME applications. They also helped me with a few other design patterns, like modal windows and capitalization. Shoutout to Skelly here, for the early help to a brand new project. It pushed me to look more into the HIG, and thus helped a lot in getting that ball rolling. I don't take any donations, but feel free to help them out with their work on several projects that are significantly more high-stress than Graphs; there's a donation page on their website.
After sharing the initial release on Reddit, I continued development and slowly started tweaking things and polishing existing features. I added support for multiple axes, added some more transformations, and added basic options like import settings. It was also around this time that Tobias Bernard from the GNOME Design team dropped by to help, at first with the generous offer to design a logo for the project, which is still the logo of Graphs today. The old logo, followed by the newly designed logo, can be found here:


Yet again, I was very pleasantly surprised by complete strangers just dropping by and offering help. Of course, they're not just helping me personally, but rather helping out the community and ecosystem as a whole. But this collaborative feeling, that we're all working on a system we collectively own, is something that really attracted me to GNOME and FOSS in general.
It was also around these early days that Christoph, who now maintains Graphs with me, came by with some pull requests. This went on to the point that he quite naturally ended up in the role of maintainer. I can confidently say that him joining this endeavor is the best thing that ever happened to Graphs, both in terms of a general elevation of the code quality and in terms of motivation and decision making. Not only did the introduction of a second maintainer mean that new code actually got reviewed, but someone else contributing is a really strong motivator and super contagious for me. In these somewhat early days things were moving fast, and we saw strong improvement both in the quality of the code and in the general user experience of the app.
In terms of UX, I'd really like to thank Tobias again. Even before we had GNOME Circle on our radar as a goal, he helped us a lot with general feedback about the UX, highlighting papercuts and coming up with design patterns that made more sense. Here we really saw a lot of improvements, and I learned a lot at the time about having design at the core of the development process. The way the application works is not just a means to get something done; the design is the program. Not to say that I'd classify myself as an expert UI designer these days, but a lot of lessons have been learned thanks to the involvement of people with more expertise than me. The GNOME Design team in general has been very helpful with suggestions and feedback during development. Whenever I got in touch with the GNOME developer community, it's been nothing but helpfulness and honest advice. The internet stereotype about GNOME developers being difficult to work with simply does not hold up, not in my experience. It's been a fantastic journey. Note that nobody is obliged to fix your problems for you, but you will find that if you ask nicely and listen to feedback from others, people are more than willing to help you out!
I couldn't talk about the history of Graphs without at least mentioning the process of getting into GNOME Circle. It's there that, for me, Graphs really went from a neat hobbyist tool to a properly useful application. When we initially applied, I was pretty happy with the state we were at, but we've actually undergone quite a transformation since Graphs got accepted. If someone feels inclined to follow the process, it is still available in full on the GitLab page. I won't go into too much detail about joining GNOME Circle here; there's a nice talk scheduled at GUADEC 2025 from the developer of Drum Machine about that. But here's what Graphs looked like before and after the GNOME Circle application:


Two particular changes that stuck with me were the introduction of touchpad gesture support and the change in the way we handle settings. Starting with touchpad gestures: I had always considered this to be out of our control. We use Matplotlib to render the plots themselves, which by itself doesn't support touch gestures. It's mostly thanks to Tobias naming it as part of the GNOME Circle review that I actually went ahead and tried to implement it myself. After a week or so of digging into documentation and testing different calculations for the different axes, I actually got this working. It's a moment that stuck with me, partly because of the dopamine hit when things finally worked, but also because it again showed the value of starting with the intended user experience first and then working backwards to fit the technology to it, rather than starting with the technology and then creating a user experience from that.
The change in settings is something I wanted to highlight because it is such a common theme in discussions about GNOME in general. Over time, I've been leaning more and more towards the idea that preferences are, in many cases, simply an excuse to avoid making difficult choices. Before submitting, we had settings for basically everything. We had a setting for the default plotting style in dark mode and in light mode, we had a setting for the clipboard size, we had a setting for the default equation when creating a new equation, and I could go on for a bit. Most of these settings could simply be replaced by making the last choice persistent between sessions. The default equation now is simply the last-used equation; same for import settings, where we just added a button to reset these settings to their defaults. For the styling, we don't have separate dark and light styles that can be set; instead you set one style in total, and one of the options is just "System", which essentially resembles Adwaita and Adwaita-dark in light and dark mode respectively. This really, really streamlined the entire user experience. Things got much easier to use, and options got much easier to find. I would strongly recommend anyone who develops applications (within the GNOME ecosystem or elsewhere) to read the "Choosing our preferences" article; it's a real eye-opener.
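The "persist the last choice instead of offering a preference" pattern is simple enough to sketch in a few lines of Python. This is only an illustration of the idea; Graphs itself stores its state differently (a JSON file in the temp directory stands in here):

```python
import json
import tempfile
from pathlib import Path

# A JSON file stands in for real settings storage in this sketch.
STATE_FILE = Path(tempfile.gettempdir()) / "graphs_sketch_state.json"
DEFAULT_EQUATION = "x"  # illustrative fallback for a truly fresh start

def load_last_equation():
    """Return the equation used last session, falling back to the default."""
    try:
        return json.loads(STATE_FILE.read_text())["equation"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return DEFAULT_EQUATION

def remember_equation(equation):
    """Persist the equation so the next session starts from it."""
    STATE_FILE.write_text(json.dumps({"equation": equation}))

remember_equation("x**2 + 1")
print(load_last_equation())  # the next session picks up the last-used equation
```

No "default equation" preference is needed: whatever the user did last simply becomes the starting point next time.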
Where we are now, and where we're going
These days Graphs is relatively mature and works pretty well. Since being accepted into GNOME Circle, we haven't had an overhaul as major as the one presented here. We've had some performance upgrades under the hood, fixed quite a few bugs, and made some improvements to the layout system. We've since also added full support for touchscreen devices (thanks to me getting a Steam Deck, which allows me to test on touch), improved the rubberband on the canvas, and improved equation parsing a bit.
Despite the somewhat slower pace, there is a major release brewing, with some exciting features already in the main branch. Some of the features you can expect in the next stable release:
Full equation support on an infinite canvas
At the moment, you cannot really add an "equation" to Graphs; instead, you generate data based on an equation. In the next release, we actually support equations. These span the entire canvas and can still be changed after being added, and operations you perform on the equation (such as taking a derivative) affect the equation accordingly.
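The basic idea behind an equation spanning an infinite canvas is that it gets re-sampled over whatever x-range is currently visible. A toy Python sketch of that idea, using a restricted eval as a stand-in for a real equation parser (this is not how Graphs actually implements it):

```python
import math

def sample_equation(equation, x_min, x_max, points=5):
    """Evaluate an equation string over the currently visible x-range."""
    names = {"sin": math.sin, "cos": math.cos, "pi": math.pi, "x": 0.0}
    xs = [x_min + i * (x_max - x_min) / (points - 1) for i in range(points)]
    ys = []
    for x in xs:
        names["x"] = x
        # restricted eval: no builtins, only the whitelisted names above
        ys.append(eval(equation, {"__builtins__": {}}, names))
    return xs, ys

# When the user pans or zooms, the same equation is simply re-sampled
# over the new x-range, so it effectively spans an infinite canvas.
xs, ys = sample_equation("x * x", 0.0, 4.0)
print(ys)  # [0.0, 1.0, 4.0, 9.0, 16.0]
```

Because the stored object is the equation itself rather than a fixed set of points, an operation like differentiation can rewrite the equation and the plot follows automatically.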

Generated data can now be changed afterwards
You can still generate data from an equation like you could previously (so it doesn't have to be an infinite equation), but generated data can now also be changed afterwards by editing the input equation.

A fully revamped style editor
In the upcoming release, you can actually open .mplstyle files using Graphs, which opens the style editor itself instead of the main application. Furthermore, you can now import styles from the GUI, and open Graphs styles in another application (like your text editor) to make advanced changes to the style that are not supported by our GUI. Likewise, you can now export your Graphs style file so you can share it with others. (Maybe even with us, as a merge request, if it's really nice.)
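For context, an .mplstyle file is just a plain-text list of Matplotlib rcParams, one key-value pair per line, which is what makes it easy to tweak in a text editor. The values below are purely illustrative:

```ini
# example.mplstyle -- illustrative values only
axes.facecolor: white
axes.grid: True
lines.linewidth: 1.5
font.family: sans-serif
figure.facecolor: white
```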
Another really nice touch is that you now get a live preview of the actual style you're working on, so you don't need to go back and forth every time when you make incremental changes.

Drag and drop support
You can now import data by simply dragging and dropping it into the main application.

Multiple sessions
You can now finally have multiple sessions of Graphs open at the same time, allowing you to view and work on data side by side.

Support for SQLite databases
We've now added support for SQLite databases, so you can import data from your .db files.
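A minimal sketch of what reading two-column data out of an SQLite database looks like with Python's standard sqlite3 module; the table and column names here are made up for the example and say nothing about what Graphs actually expects:

```python
import sqlite3

# Build a small database in memory, standing in for a user's .db file;
# the table and column names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (x REAL, y REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [(0, 1.0), (1, 2.5), (2, 4.1)])

def import_xy(connection, table):
    """Pull two-column (x, y) data out of an SQLite table, ordered by x."""
    rows = connection.execute(f"SELECT x, y FROM {table} ORDER BY x").fetchall()
    xs, ys = zip(*rows)
    return list(xs), list(ys)

xs, ys = import_xy(conn, "measurements")
print(xs, ys)  # [0.0, 1.0, 2.0] [1.0, 2.5, 4.1]
```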

And more
As usual, there are more features that I probably forgot, but the next release is bound to be a banger. I won't dare to pin a release date here, but all the mentioned changes are already working (SQLite support is still in an MR) and can be tested from the main branch. There's still work to do, though, on a planned rework of the way we import data, and on the way we access the style editor, which is currently a bit buried in the stable release.
Conclusion
This post got a bit longer than I anticipated, but I hope it gives people some insight into what it's like for a newcomer to get into application development. I really encourage people to test the waters; you really can get involved, even if it means learning along the way. These days I no longer work in academia, and I am willing to bet that I probably wouldn't have my current position working with software if it weren't for these adventures.
Again, I would really like to thank the GNOME community as a whole. The adventure so far has been great, and I promise that it's far from over.
24 Jul 2025 12:27pm GMT
23 Jul 2025
Planet GNOME
Michael Meeks: 2025-07-23 Wednesday
- Mail chew, research.
- Published the next strip around investing in the future and making a splash:
- Somehow this afternoon & evening is a highly contended zone for meetings: the Collabora monthly management call, GNOME Advisory Board, a customer call and the TDF Advisory Board all competing for the same slot.
- Band practice in the evening.
23 Jul 2025 9:00pm GMT
Christian Hergert: The Foundry of Builder
I won't be traveling this summer for GUADEC, so here is a quick rundown of what I would talk about if I were there.
Personally, I feel like Foundry has the potential to be far more useful than Builder alone. This is probably a good time to write about how it got here and where I intend to take it. Hopefully with your help!

23 Jul 2025 7:23pm GMT
Jussi Pakkanen: Comparing a red-black tree to a B-tree
In an earlier blog post we found that optimizing the memory layout of a red-black tree does not seem to work. A different way of implementing an ordered container is to use a B-tree. It was originally designed to be used for on-disk data. The design principle was that memory access is "instant" while disk access is slow. Nowadays this applies to memory access as well, as cache hits are "instant" and uncached memory is slow.
I implemented a B-tree in Pystd. Here is how all the various containers compare. For test data we used numbers from zero to one million in a random order.
As we can see, an unordered map is massively faster than any ordered container. If your data does not need to be ordered, that is the one you should use. For ordered data, the B-tree is clearly faster than either red-black tree implementation.
Tuning the B-tree
B-trees have one main tunable parameter, namely the spread factor of the nodes. In the test above it was five, but for on-disk purposes the recommended value is "in the thousands". Here's how altering the value affects performance.
The sweet spot seems to be in the 256-512 range, where the operations are 60% faster than the standard set. As the spread factor grows towards infinity, the B-tree reduces to storing all its data in a single sorted array. Inserting N elements into that takes O(N^2) time in total, as can be seen here.
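The degenerate sorted-array case is easy to demonstrate: even when a binary search finds the insertion point quickly, each insert still has to shift, on average, half the array. The measurements above are for the C++ Pystd containers, but the same quadratic scaling can be shown in a few lines of Python (timings are machine-dependent and shown only to illustrate the growth):

```python
import bisect
import random
import time

def timed_sorted_insert(n):
    """Insert n distinct keys in random order into a sorted list and time it."""
    keys = random.sample(range(n), n)
    data = []
    start = time.perf_counter()
    for key in keys:
        # bisect finds the spot in O(log N), but list.insert shifts O(N) elements
        bisect.insort(data, key)
    return time.perf_counter() - start

# Doubling n should roughly quadruple the time: the signature of O(N^2) total work.
for n in (20_000, 40_000):
    print(n, timed_sorted_insert(n))
```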
Getting weird
The B-tree implementation has many assert calls to verify the internal structures. We can compile the code with -DNDEBUG to make all those asserts disappear. Removing redundant code should make things faster, so let's try it.
There are 13 measurements in total, and disabling asserts (i.e. enabling NDEBUG) makes the code run slower in 8 of those cases. Let's state that again, since it is very unexpected: in this particular measurement, code with assertions enabled runs faster than the same code without them. This should not be happening. What could be causing it?
I don't know for sure, so here is some speculation instead.
First of all, a result of 8/13 is probably not statistically significant enough to say that enabling assertions makes things faster. OTOH, it does mean that enabling them does not make the code run noticeably slower. So I guess we can say that both ways of building the code are approximately equally fast.
As to why that is, things get trickier. Maybe GCC's optimizer is just really good at removing unnecessary checks. It might even be that the assertions give the compiler more information so it can skip generating code for things that can never happen. I'm not a compiler engineer, so I'll refrain from speculating further, it would probably be wrong in any case.
23 Jul 2025 2:05pm GMT