19 Jun 2025
Planet GNOME
Michael Meeks: 2025-06-19 Thursday
- Up early, tech planning call in the morning, mail catch-up, admin and TORF pieces.
- Really excited to see the team get the first COOL 25.04 release shipped, coming to a browser near you:
Seems our videos are getting more polished over time too, which is good.
- Mail, admin, compiled some code too; a bit of patch review here & there.
19 Jun 2025 4:50pm GMT
Peter Hutterer: libinput and tablet tool eraser buttons
This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.
In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.
Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs but I wanted something that's a) more generic and b) configurable by the user. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users that use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.
To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This of course works great when you e.g. get a button toggle; it doesn't work quite as great when your state change was one or two event frames ago (because prox-out of one tool, prox-in of another tool are at least 2 events). Extracting that older state change was like swapping the type of meatballs from an ikea meal after it's been served - doable in theory, but very messy.
Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline, the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser, it either sees proximity events when those are valid or it sees a button event (depending on configuration).
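The hold-back-and-replay behaviour can be sketched roughly like this (an illustrative Python toy with made-up event names, not libinput's actual C internals):

```python
# Toy sketch of a filter pipeline that can hold back, replay, or drop
# events before they reach the backend. Names are hypothetical.
class Plugin:
    def __init__(self, next_plugin=None):
        self.next = next_plugin

    def feed(self, event):
        # Default behaviour: pass events straight through.
        if self.next:
            self.next.feed(event)

class EraserButtonPlugin(Plugin):
    """Holds back a pen proximity-out; if an eraser proximity-in follows,
    both are swallowed and a button event is emitted instead. Otherwise
    the held event is replayed unchanged."""
    def __init__(self, next_plugin=None):
        super().__init__(next_plugin)
        self.held = None

    def feed(self, event):
        if event == "pen-prox-out":
            self.held = event              # hold back, wait and see
        elif event == "eraser-prox-in" and self.held:
            self.held = None
            super().feed("button-press")   # filter both, emit a button
        else:
            if self.held:
                super().feed(self.held)    # replay the held event
                self.held = None
            super().feed(event)

class Backend(Plugin):
    """Stand-in for the tablet backend: just records what it sees."""
    def __init__(self):
        super().__init__()
        self.seen = []

    def feed(self, event):
        self.seen.append(event)

backend = Backend()
pipeline = EraserButtonPlugin(backend)
pipeline.feed("pen-prox-in")
pipeline.feed("pen-prox-out")
pipeline.feed("eraser-prox-in")
print(backend.seen)  # ['pen-prox-in', 'button-press']
```

The backend never sees the firmware's fake proximity dance, only the synthesized button event, which is the point of the whole exercise.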
This architecture approach is so successful that I have now switched a bunch of other internal features over to use that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the groundwork for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased the available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.
[1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
[2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.
19 Jun 2025 11:44am GMT
18 Jun 2025
Marcus Lundblad: Midsommer Maps
As tradition has it, it's about time for the (Northern Hemisphere) summer update on the happenings around Maps!
[Image: About dialog for GNOME Maps 49.alpha development]
Bug Fixes
Since the GNOME 48 release in March, there have been some bug fixes, such as correctly handling daylight saving time in public transit itineraries retrieved from Transitous. Also James Westman fixed a regression where the search result popover wasn't showing on small screen devices (phones) because of sizing issues.
More Clickable Stuff
[Image: Showing place information for the AVUS motorway in Berlin]
And related to traffic and driving, exit numbers are now shown for highway junctions (exits) when available.
[Image: Showing information for a highway exit in a driving-on-the-right locality]
[Image: Showing information for a highway exit in a driving-on-the-left locality]
Note how the direction the arrow is pointing depends on the side of the road vehicle traffic drives on in the country/territory of the place…
Furigana Names in Japanese
Configurable Measurement Systems
[Image: Hamburger menu showing measurement unit selection]
Station Symbols
[Image: U-Bahn station in Hamburg]
[Image: Metro stations in Copenhagen]
[Image: Subway stations in Boston]
[Image: S-Bahn station in Berlin]
This requires the stations to be tagged consistently to work. I did some mass tagging of metro stations in Stockholm, Oslo, and Copenhagen. Other than that, I mainly chose places where there's at least partial coverage already.
[out:xml][timeout:90][bbox:{{bbox}}];
(
nwr["network"="Washington Metro"]["railway"="station"];
);
(._;>;);
out meta;
[Image: JOSM Overpass download query editor]
Select the region to download from.
[Image: Select region in JOSM]
Select to show only the data layer (hiding the background map) to make it easier to see the raw data.
[Image: Toggle data layers in JOSM]
Select the nodes.
[Image: Show raw data points in JOSM]
Edit the field in the tag edit panel to update the value for all selected objects.
[Image: Showing tags for selected objects]
Note that this sample assumes the relevant station nodes were already tagged with network names (the network tag). Other queries to limit the selection might be needed.
It could also be a good idea to reach out to local OSM communities before making bulk edits like this (e.g. if there is no such tagging at all in a specific region) to make sure it would be aligned with expectations and such.
It will then also potentially take a while before the changes get included in our monthly vector tile update.
When this has been done, given a suitable icon is available (e.g. public domain or freely licensed on Wikimedia Commons), it can be bundled in data/icons/stations and a definition added to the data mapping in src/mapStyle/stations.js.
And More…
One feature that has been long wanted is the ability to download maps for offline usage. Lately, this is precisely what James Westman has been working on.
It's still an early draft, so we'll see when it is ready, but it already looks pretty promising.
[Image: Showing the new Preferences option]
[Image: Preferences dialog with downloads]
[Image: Selecting region to download]
[Image: Entering a name for a downloaded region]
[Image: Dialog showing downloaded areas]
And that's it for now!
18 Jun 2025 10:53pm GMT
Michael Meeks: 2025-06-18 Wednesday
- Up too early, out for a run with J. Sync with Dave. Plugged away at calls, admin, partner call, sales call, catch up with Moritz, Philippe and Italo.
- Birthday presents at lunch - new (identical) trousers, and a variable DC power supply for some electronics.
- Published the next strip around the excitement of setting up your own non-profit structure:
- Fine steak dinner with the family in the evening.
18 Jun 2025 9:00pm GMT
Alley Chaggar: Demystifying The Codegen Phase Part 1
Intro
I want to start off by saying I'm really glad that my last blog was helpful to many wanting to understand Vala's compiler. I hope this blog will be just as informative and helpful. I want to talk a little about the basics of the compiler again, but this time catering to the codegen phase: the phase that I'm actually working on, but which has the least information in the Vala Docs.
Last blog, I briefly mentioned the directories codegen and ccode being part of the codegen phase. This blog will go more into depth about it. The codegen phase takes the AST and outputs a C code tree (ccode* objects), which is then written out as C code and compiled, usually by GCC or another C compiler you have installed. When dealing with this phase, it's really beneficial to know and understand at least a little bit of C.
ccode Directory
- Many of the files in the ccode directory are derived from the class CCodeNode, valaccodenode.vala.
- The files in this directory represent C constructs. For example, the valaccodefunction.vala file represents a C function. Regular C functions have function names, parameters, return types, and bodies that add logic. Essentially, what this class specifically does is provide the building blocks for building a function in C.
//...
writer.write_string (return_type);

if (is_declaration) {
    writer.write_string (" ");
} else {
    writer.write_newline ();
}

writer.write_string (name);
writer.write_string (" (");

int param_pos_begin = (is_declaration ? return_type.char_count () + 1 : 0 ) + name.char_count () + 2;

bool has_args = (CCodeModifiers.PRINTF in modifiers || CCodeModifiers.SCANF in modifiers);
//...
This code snippet is part of the ccodefunction file; it overrides the 'write' function that is originally from ccodenode, and actually writes out the C function.
codegen Directory
- The files in this directory are higher-level components responsible for taking the compiler's internal representation, such as the AST, and transforming it into the C code model (ccode objects).
- Going back to the example of the ccodefunction, codegen will take a function node from the abstract syntax tree (AST) and create a new ccodefunction object. It then fills this object with information like the return type, function name, parameters, and body, which are all derived from the AST. Then CCodeFunction.write() (the code above) will generate and write out the C function.
//...
private void add_get_property_function (Class cl) {
    var get_prop = new CCodeFunction ("_vala_%s_get_property".printf (get_ccode_lower_case_name (cl, null)), "void");
    get_prop.modifiers = CCodeModifiers.STATIC;
    get_prop.add_parameter (new CCodeParameter ("object", "GObject *"));
    get_prop.add_parameter (new CCodeParameter ("property_id", "guint"));
    get_prop.add_parameter (new CCodeParameter ("value", "GValue *"));
    get_prop.add_parameter (new CCodeParameter ("pspec", "GParamSpec *"));
    push_function (get_prop);
//...
This code snippet is from valagobjectmodule.vala and it's calling CCodeFunction (again from the valaccodefunction.vala) and adding the parameters, which is calling valaccodeparameter.vala. What this would output is something that looks like this in C:
void _vala_get_property (GObject *object, guint property_id, GValue *value, GParamSpec *pspec) {
//...
}
Why do all this?
Now you might ask why? Why separate codegen and ccode?
- We split things into codegen and ccode to keep the compiler organized, readable, and maintainable. It saves us from having to write C code representations from scratch all the time.
- It also reinforces the idea of polymorphism and the ability that objects can behave differently depending on their subclass.
- And it lets us do hidden generation by adding new helper functions, temporary variables, or inlined optimizations after the AST and before the C code output.
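As a toy illustration of that split (Python with hypothetical names, not Vala's actual classes): the codegen layer walks an AST-like structure and builds ccode objects, and each ccode object knows how to write itself out, mirroring how CCodeNode subclasses override write():

```python
# Sketch of the two-layer design: "codegen" lowers an AST node into a
# ccode object; the ccode object handles its own C output polymorphically.
class CCodeNode:
    def write(self) -> str:
        raise NotImplementedError  # each subclass writes itself out

class CCodeFunction(CCodeNode):
    def __init__(self, name: str, return_type: str):
        self.name = name
        self.return_type = return_type
        self.parameters: list[str] = []

    def add_parameter(self, decl: str) -> None:
        self.parameters.append(decl)

    def write(self) -> str:
        params = ", ".join(self.parameters) or "void"
        return f"{self.return_type} {self.name} ({params});"

def lower_function(ast_node: dict) -> CCodeFunction:
    """The 'codegen' step: build a ccode object from AST-derived info."""
    func = CCodeFunction(ast_node["name"], ast_node["return_type"])
    for p in ast_node["params"]:
        func.add_parameter(p)
    return func

ast = {"name": "_vala_get_property", "return_type": "void",
       "params": ["GObject *object", "guint property_id"]}
print(lower_function(ast).write())
# void _vala_get_property (GObject *object, guint property_id);
```

Adding a new C construct then only means adding one more CCodeNode subclass, while the codegen walkers stay untouched.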
Jsonmodule
I'm happy to say that I am making a lot of progress with the JSON module I mentioned last blog. The JSON module follows very closely other modules in the codegen, specifically like the gtk module and the gobject module. It will be calling ccode functions to make ccode objects and creating helper methods so that the user doesn't need to manually override certain JSON methods.
18 Jun 2025 4:30pm GMT
Jamie Gravendeel: UI-First Search With List Models
You can find the repository with the code here.
When managing large amounts of data, manual widget creation finds its limits, not only because managing both data and UI separately is tedious, but also because performance becomes a real concern.
Luckily, there are two solutions for this in GTK:
1. Gtk.ListView using a factory: more performant, since it reuses widgets when the list gets long
2. Gtk.ListBox's bind_model(): less performant, but can use boxed list styling
This blog post provides an example of a Gtk.ListView containing my pets, which is sorted, can be searched, and is primarily made in Blueprint.
The app starts with a plain window:
from gi.repository import Adw, Gtk


@Gtk.Template.from_resource("/app/example/Pets/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"
using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Pets");
  default-width: 450;
  default-height: 450;

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}
  };
}
Data Object
The Gtk.ListView needs a data object to work with, which in this example is a pet with a name and species.
This requires a GObject.Object called Pet with those properties, and a GObject.GEnum called Species:
from gi.repository import Adw, GObject, Gtk


class Species(GObject.GEnum):
    """The species of an animal."""

    NONE = 0
    CAT = 1
    DOG = 2

[…]

class Pet(GObject.Object):
    """Data for a pet."""

    __gtype_name__ = "Pet"

    name = GObject.Property(type=str)
    species = GObject.Property(type=Species, default=Species.NONE)
List View
Now that there's a data object to work with, the app needs a Gtk.ListView with a factory and model.
To start with, there's a Gtk.ListView wrapped in a Gtk.ScrolledWindow to make it scrollable, using the .navigation-sidebar style class for padding:
content: Adw.ToolbarView {
  […]

  content: ScrolledWindow {
    child: ListView {
      styles [
        "navigation-sidebar",
      ]
    };
  };
};
Factory
The factory builds a Gtk.ListItem for each object in the model, and utilizes bindings to show the data in the Gtk.ListItem:
content: ListView {
  […]

  factory: BuilderListItemFactory {
    template ListItem {
      child: Label {
        halign: start;
        label: bind template.item as <$Pet>.name;
      };
    }
  };
};
Model
Models can be modified through nesting. The data itself can be in any Gio.ListModel; in this case a Gio.ListStore works well.
The Gtk.ListView expects a Gtk.SelectionModel because that's how it manages its selection, so the Gio.ListStore is wrapped in a Gtk.NoSelection:
using Gtk 4.0;
using Adw 1;
using Gio 2.0;

[…]

content: ListView {
  […]

  model: NoSelection {
    model: Gio.ListStore {
      item-type: typeof<$Pet>;

      $Pet { name: "Herman"; species: cat; }
      $Pet { name: "Saartje"; species: dog; }
      $Pet { name: "Sofie"; species: dog; }
      $Pet { name: "Rex"; species: dog; }
      $Pet { name: "Lady"; species: dog; }
      $Pet { name: "Lieke"; species: dog; }
      $Pet { name: "Grumpy"; species: cat; }
    };
  };
};
Sorting
To easily parse the list, the pets should be sorted by both name and species.
To implement this, the Gio.ListStore has to be wrapped in a Gtk.SortListModel which has a Gtk.MultiSorter with two sorters, a Gtk.NumericSorter and a Gtk.StringSorter.
Both of these need an expression: the property that needs to be compared.
The Gtk.NumericSorter expects an integer, not a Species, so the app needs a helper method to convert it:
class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _species_to_int(self, _obj: Any, species: Species) -> int:
        return int(species)
model: NoSelection {
  model: SortListModel {
    sorter: MultiSorter {
      NumericSorter {
        expression: expr $_species_to_int(item as <$Pet>.species) as <int>;
      }

      StringSorter {
        expression: expr item as <$Pet>.name;
      }
    };

    model: Gio.ListStore {
      […]
    };
  };
};
To learn more about closures, such as the one used in the Gtk.NumericSorter, consider reading my previous blog post.
Search
To look up pets even faster, the user should be able to search for them by both their name and species.
Filtering
First, the Gtk.ListView's model needs the logic to filter the list by name or species.
This can be done with a Gtk.FilterListModel which has a Gtk.AnyFilter with two Gtk.StringFilters.
One of the Gtk.StringFilters expects a string, not a Species, so the app needs another helper method to convert it:
class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _species_to_string(self, _obj: Any, species: Species) -> str:
        return species.value_nick
model: NoSelection {
  model: FilterListModel {
    filter: AnyFilter {
      StringFilter {
        expression: expr item as <$Pet>.name;
      }

      StringFilter {
        expression: expr $_species_to_string(item as <$Pet>.species) as <string>;
      }
    };

    model: SortListModel {
      […]
    };
  };
};
Entry
To actually search with the filters, the app needs a Gtk.SearchBar with a Gtk.SearchEntry.
The Gtk.SearchEntry's text property needs to be bound to the Gtk.StringFilters' search properties to filter the list on demand.
To be able to start searching by typing from anywhere in the window, the Gtk.SearchEntry's key-capture-widget has to be set to the window, in this case the template itself:
content: Adw.ToolbarView {
  […]

  [top]
  SearchBar {
    key-capture-widget: template;

    child: SearchEntry search_entry {
      hexpand: true;
      placeholder-text: _("Search pets");
    };
  }

  content: ScrolledWindow {
    child: ListView {
      […]

      model: NoSelection {
        model: FilterListModel {
          filter: AnyFilter {
            StringFilter {
              search: bind search_entry.text;
              […]
            }

            StringFilter {
              search: bind search_entry.text;
              […]
            }
          };

          model: SortListModel {
            […]
          };
        };
      };
    };
  };
};
Toggle Button
The Gtk.SearchBar should also be toggleable with a Gtk.ToggleButton.
To do so, the Gtk.SearchBar's search-mode-enabled property should be bidirectionally bound to the Gtk.ToggleButton's active property:
content: Adw.ToolbarView {
  [top]
  Adw.HeaderBar {
    [start]
    ToggleButton search_button {
      icon-name: "edit-find-symbolic";
      tooltip-text: _("Search");
    }
  }

  [top]
  SearchBar {
    search-mode-enabled: bind search_button.active bidirectional;
    […]
  }

  […]
};
The search_button should also be toggleable with a shortcut, which can be added with a Gtk.ShortcutController:
[start]
ToggleButton search_button {
  […]

  ShortcutController {
    scope: managed;

    Shortcut {
      trigger: "<Control>f";
      action: "activate";
    }
  }
}
Empty State
Last but not least, the view should fall back to an Adw.StatusPage if there are no search results.
This can be done with a closure for the visible-child-name property in an Adw.ViewStack or Gtk.Stack. I generally prefer an Adw.ViewStack due to its animation curve.
The closure takes the number of items in the Gtk.NoSelection as input, and returns the correct Adw.ViewStackPage name:
class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, items: int) -> str:
        return "content" if items else "empty"
content: Adw.ToolbarView {
  […]

  content: Adw.ViewStack {
    visible-child-name: bind $_get_visible_child_name(selection_model.n-items) as <string>;
    enable-transitions: true;

    Adw.ViewStackPage {
      name: "content";

      child: ScrolledWindow {
        child: ListView {
          […]

          model: NoSelection selection_model {
            […]
          };
        };
      };
    }

    Adw.ViewStackPage {
      name: "empty";

      child: Adw.StatusPage {
        icon-name: "edit-find-symbolic";
        title: _("No Results Found");
        description: _("Try a different search");
      };
    }
  };
};
End Result
from typing import Any

from gi.repository import Adw, GObject, Gtk


class Species(GObject.GEnum):
    """The species of an animal."""

    NONE = 0
    CAT = 1
    DOG = 2


@Gtk.Template.from_resource("/org/example/Pets/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, items: int) -> str:
        return "content" if items else "empty"

    @Gtk.Template.Callback()
    def _species_to_string(self, _obj: Any, species: Species) -> str:
        return species.value_nick

    @Gtk.Template.Callback()
    def _species_to_int(self, _obj: Any, species: Species) -> int:
        return int(species)


class Pet(GObject.Object):
    """Data about a pet."""

    __gtype_name__ = "Pet"

    name = GObject.Property(type=str)
    species = GObject.Property(type=Species, default=Species.NONE)
using Gtk 4.0;
using Adw 1;
using Gio 2.0;

template $Window: Adw.ApplicationWindow {
  title: _("Pets");
  default-width: 450;
  default-height: 450;

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {
      [start]
      ToggleButton search_button {
        icon-name: "edit-find-symbolic";
        tooltip-text: _("Search");

        ShortcutController {
          scope: managed;

          Shortcut {
            trigger: "<Control>f";
            action: "activate";
          }
        }
      }
    }

    [top]
    SearchBar {
      key-capture-widget: template;
      search-mode-enabled: bind search_button.active bidirectional;

      child: SearchEntry search_entry {
        hexpand: true;
        placeholder-text: _("Search pets");
      };
    }

    content: Adw.ViewStack {
      visible-child-name: bind $_get_visible_child_name(selection_model.n-items) as <string>;
      enable-transitions: true;

      Adw.ViewStackPage {
        name: "content";

        child: ScrolledWindow {
          child: ListView {
            styles [
              "navigation-sidebar",
            ]

            factory: BuilderListItemFactory {
              template ListItem {
                child: Label {
                  halign: start;
                  label: bind template.item as <$Pet>.name;
                };
              }
            };

            model: NoSelection selection_model {
              model: FilterListModel {
                filter: AnyFilter {
                  StringFilter {
                    expression: expr item as <$Pet>.name;
                    search: bind search_entry.text;
                  }

                  StringFilter {
                    expression: expr $_species_to_string(item as <$Pet>.species) as <string>;
                    search: bind search_entry.text;
                  }
                };

                model: SortListModel {
                  sorter: MultiSorter {
                    NumericSorter {
                      expression: expr $_species_to_int(item as <$Pet>.species) as <int>;
                    }

                    StringSorter {
                      expression: expr item as <$Pet>.name;
                    }
                  };

                  model: Gio.ListStore {
                    item-type: typeof<$Pet>;

                    $Pet { name: "Herman"; species: cat; }
                    $Pet { name: "Saartje"; species: dog; }
                    $Pet { name: "Sofie"; species: dog; }
                    $Pet { name: "Rex"; species: dog; }
                    $Pet { name: "Lady"; species: dog; }
                    $Pet { name: "Lieke"; species: dog; }
                    $Pet { name: "Grumpy"; species: cat; }
                  };
                };
              };
            };
          };
        };
      }

      Adw.ViewStackPage {
        name: "empty";

        child: Adw.StatusPage {
          icon-name: "edit-find-symbolic";
          title: _("No Results Found");
          description: _("Try a different search");
        };
      }
    };
  };
}
List models are pretty complicated, but I hope that this example provides a good idea of what's possible from Blueprint, and is a good stepping stone to learn more.
Thanks for reading!
PS: a shout out to Markus for guessing what I'd write about next ;)
18 Jun 2025 4:01pm GMT
Hari Rana: It’s True, “We” Don’t Care About Accessibility on Linux
Introduction
What do virtue-signalers and privileged people without disabilities sharing content about accessibility on Linux being trash have in common? They don't actually really care about the group they're defending; they just exploit these victims' unfortunate situation to fuel hate against groups and projects actually trying to make the world a better place.
I never thought I'd be this upset, to the point of writing an article about something this sensitive with a clickbait-y title. It's simultaneously demotivating, unproductive, and infuriating. I'm here writing this post fully knowing that I could have been working on accessibility in GNOME, but really, I'm so tired of having my mood ruined because of privileged people spending at most 5 minutes to write erroneous posts and then pretending to be oblivious when confronted, while it takes us 5 months of unpaid work to get a quarter of the recognition, let alone acknowledgment, without accounting for the time "wasted" addressing these accusations.
I'm Not Angry
I'm not mad. I'm absolutely furious and disappointed in the Linux Desktop community for staying quiet when it comes to celebrating advances in accessibility, while proceeding to share content and cheer for random privileged people from big-name websites or social media who have literally put a negative amount of effort into advancing accessibility on Linux. I'm explicitly stating a negative amount because they actually make it significantly more stressful for us.
None of this is fair. If you're the kind of person who stays quiet when we celebrate huge accessibility milestones, yet shares (or even writes) content that trash-talks the people directly or indirectly writing the fucking software you use for free, you are the reason why accessibility on Linux is shit.
No one in their right mind wants to volunteer in a toxic environment where their efforts are hardly recognized by the public and they are blamed for "not doing enough", especially when they are expected to take in all kinds of harassment, nonconstructive criticism, and slander for a salary of 0$.
There's only one thing I am shamefully confident about: I am not okay in the head. I shouldn't be working on accessibility anymore. The recognition-to-smearing ratio is unbearably low and arguably unhealthy, but leaving people in unfortunate situations behind is also not in accordance with my values.
I've been putting so much effort, quite literally hundreds of hours, into:
- thinking of ways to come up with inclusive designs and experiences;
- imagining how I'd use something if I had a certain disability or condition;
- asking for advice and feedback from people with disabilities;
- not getting paid from any company or organization; and
- making sure that all the accessibility-related work is in the public, and stays in the public.
Number 5 is especially important to me. I personally go as far as to refuse to contribute to projects under a permissive license, and/or that utilize a contributor license agreement, and/or that utilize anything riskily similar to these two, because I am of the opinion that no amount of code for accessibility should either be put under a paywall or be obscured and proprietary.
Permissive licenses make it painlessly easy for abusers to fork, build an ecosystem on top of it which may include accessibility-related improvements, slap a price tag alongside it, all without publishing any of these additions/changes. Corporations have been doing that for decades, and they'll keep doing it until there's heavy push back. The only time I would contribute to a project under a permissive license is when the tool is the accessibility infrastructure itself. Contributor license agreements are significantly worse in that regard, so I prefer to avoid them completely.
The Truth Nobody Is Telling You
KDE hired a legally blind contractor to work on accessibility throughout the KDE ecosystem, including complying with the EU Directive to allow selling hardware with Plasma.
GNOME's new executive director, Steven Deobald, is partially blind.
The GNOME Foundation has been investing a lot of money to improve accessibility on Linux, for example funding Newton, a Wayland accessibility project, and AccessKit integration into GNOME technologies. Around 250,000€ (1/4) of the STF budget was spent solely on accessibility. And get this: literally everybody managing these contracts and communication with funders is a volunteer; they're ensuring people with disabilities earn a living, but aren't receiving anything in return. These are the real heroes who deserve endless praise.
The Culprits
Do you want to know who we should be blaming? Those who are benefiting from the community's effort while investing very little to nothing into accessibility.
This includes a significant portion of the companies sponsoring GNOME and even companies that employ developers to work on GNOME. These companies are the ones making hundreds of millions, if not billions, in net profit indirectly from GNOME, and investing little to nothing into accessibility. However, the worst offenders are the companies actively using GNOME without ever donating anything to fund the project.
Some companies actually do put in an effort, like Red Hat and Igalia. Red Hat employs people with disabilities to work on accessibility in GNOME, one of whom I actually rely on when making accessibility-related contributions in GNOME. Igalia funds Orca, the screen reader that is part of GNOME, which is something the Linux community should be thankful for.
The privileged people who keep sharing and making content around accessibility on Linux being bad are, in my opinion, significantly worse than the companies profiting off of GNOME. Companies stay quiet, but the privileged people add an additional burden to contributors by either trash-talking in their content or sharing trash-talkers. Once again, no volunteer deserves to be in the position of being shamed and ridiculed for "not doing enough", since no one is entitled to their free time but themselves.
My Work Is Free but the Worth Is Not
Earlier in this article, I mentioned, and I quote: "I've been putting so much effort, quite literally hundreds of hours […]". Let's put an emphasis on "hundreds". Here's a list of most accessibility-related merge requests that have been incorporated into GNOME:
- GNOME Calculator: !180 and !186
- GNOME Calendar: !331, !332, !333, !335, !336, !337, !344, !348, !358, !360, !362, !387, !388, !390, !421, !435, !489, !559, !563, !564, !569, !576, !587, and !588
- GNOME Contacts: !230
- GNOME Settings: !3017, !3018, and !3027
- GNOME Software: !1519 and !1570
- Papers: !119, !122, and !527
- libadwaita: !1243 (superseded by !1327) and !1245
GNOME Calendar's !559 addresses an issue where event widgets were unable to be focused and activated by the keyboard. That had been present since the very beginning of GNOME Calendar's existence, to be specific: for more than a decade. This alone was a two-week effort. Despite it being less than 100 lines of code, nobody truly knew what to do to have them working properly before. This was followed up by !576, which made the event buttons usable in the month view with a keyboard, and then !587, which properly conveys the states of the widgets. Both combined are another two-week effort.
Then, at the time of writing this article, !564 adds 640 lines of code, which is something I've been volunteering on for more than a month, excluding the time before I opened the merge request.
Let's do a little bit of math together with 'only' !559, !576, and !587. Just as a reminder: these three merge requests are a four-week effort in total, which I volunteered full time: 8 hours a day, or 160 hours a month. I compiled a small table that illustrates its worth:
Country | Average Wage for Professionals Working on Digital Accessibility (WebAIM) | Total in Local Currency (160 hours) | Exchange Rate | Total (CAD)
---|---|---|---|---
Canada | 58.71$ CAD/hour | 9,393.60$ CAD | N/A | 9,393.60$
United Kingdom | 48.20£ GBP/hour | 7,712£ GBP | 1.8502 | 14,268.74$
United States of America | 73.08$ USD/hour | 11,692.80$ USD | 1.3603 | 15,905.72$
To summarize the table: those three merge requests that I worked on for free were worth 9,393.60$ CAD (6,921.36$ USD) in total at a minimum.
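For anyone who wants to check the table's arithmetic, the totals follow directly from hourly wage × 160 hours × the quoted exchange rate (a small illustrative calculation, nothing more):

```python
# Recompute the table's totals: hourly wage * 160 hours, converted to
# CAD at the exchange rates quoted in the table above.
rows = {
    "Canada": (58.71, 1.0),                  # already in CAD
    "United Kingdom": (48.20, 1.8502),       # GBP -> CAD
    "United States of America": (73.08, 1.3603),  # USD -> CAD
}
for country, (hourly, rate) in rows.items():
    local_total = hourly * 160                 # 160 volunteered hours
    cad_total = round(local_total * rate, 2)   # convert to CAD
    print(f"{country}: {local_total:,.2f} local, {cad_total:,.2f} CAD")
```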
Just a reminder:
- these merge requests exclude the time spent to review the submitted code;
- these merge requests exclude the time I spent testing the code;
- these merge requests exclude the time we spent coordinating these milestones;
- these calculations exclude the 30+ merge requests submitted to GNOME; and
- these calculations exclude the merge requests I submitted to third-party GNOME-adjacent apps.
Now just imagine how I feel when I'm told I'm "not doing enough", either directly or indirectly. Whenever anybody says we're "not doing enough", I feel very much included, and I will absolutely take it personally.
It All Trickles Down to "GNOME Bad"
I fully expect everything I say in this article to be dismissed or be taken out of context on the basis of ad hominem, simply by the mere fact I'm a GNOME Foundation member / regular GNOME contributor. Either that, or be subject to whataboutism because another GNOME contributor made a comment that had nothing to do with mine but 'is somewhat related to this topic and therefore should be pointed out just because it was maybe-probably-possibly-perhaps ableist'. I can't speak for other regular contributors, but I presume that they don't feel comfortable talking about this because they dared be a GNOME contributor. At least, that's how I felt for the longest time.
Any content related to accessibility that doesn't dunk on GNOME doesn't see as much engagement, activity, and reaction as content that actively attacks GNOME, regardless of whether the criticism is fair. Regular GNOME contributors like myself don't always feel comfortable defending ourselves because dismissing GNOME developers just for being GNOME developers is apparently a trend…
Final Word
Dear people with disabilities,
I won't insist that we're either your allies or your enemies; I have no right to claim that whatsoever.
I wasn't looking for recognition. I wasn't looking for acknowledgment from the very beginning either. I thought I would be perfectly capable of quietly improving accessibility in GNOME, but because of the overall community's persistence in smearing developers' efforts without actually tackling the underlying issues within the stack, I think I'm justified in at least demanding acknowledgment from the wider community.
I highly doubt it will happen anyway, because the Linux community feeds off of drama and trash talk instead of being productive, without realizing that this demotivates active contributors while pushing away potential ones. And worst of all: people with disabilities are the ones affected the most, because they are misled into thinking that we don't care.
It's so unfair and infuriating that all the work I do and share online gains very little activity compared to random posts and articles from privileged people without disabilities who rant about the Linux desktop's accessibility being trash. It doesn't help that I become severely anxious when sharing accessibility-related work, for fear of virtue-signaling. The last thing I want is to (unintentionally) give the impression of pretending to care about accessibility.
I beg you, please keep writing banger posts like fireborn's I Want to Love Linux. It Doesn't Love Me Back series and their interlude post. We need more people with disabilities to keep reminding developers that you exist, and that your conditions and disabilities are a spectrum, not absolute.
We simultaneously need more interest from people with disabilities to contribute to FOSS, and the wider community to be significantly more intolerant of bullies who profit from smearing and demotivating people who are actively trying. We could also take inspiration from "Accessibility on Linux sucks, but GNOME and KDE are making progress" by OSNews, as they acknowledge that accessibility on Linux is suboptimal while recognizing the efforts of GNOME and KDE.
18 Jun 2025 12:00am GMT
17 Jun 2025
Planet GNOME
Matthew Garrett: Locally hosting an internet-connected server
I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.
What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world-routable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice-to-have rather than a requirement.
By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:
[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32
[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = VPS/0
And on your VPS, something like:
[Interface]
Address = vpswgaddr/32
SaveConfig = true
ListenPort = 51820
PrivateKey = privkeyhere
[Peer]
PublicKey = pubkeyhere
AllowedIPs = localaddr/32
The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.
Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:
iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005
Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.
What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:
PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0
That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.
But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:
1 wireguard
where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard
and now your local system is effectively on the internet.
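Putting the pieces together, the local machine's final config ends up looking something like this (same placeholder keys and addresses as above, and the table name matching the rt_tables entry):

```ini
[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32
# Use the custom routing table instead of the global one
Table = wireguard
# Reach the remote VPN endpoint over the tunnel
PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0
# Push replies sourced from our Wireguard address into that table
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
# Accept packets from any source over the link
AllowedIPs = 0.0.0.0/0
```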
You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.
17 Jun 2025 5:17am GMT
16 Jun 2025
Planet GNOME
Jamie Gravendeel: Data Driven UI With Closures
It's highly recommended to read my previous blog post first to understand some of the topics discussed here.
UI can be hard to keep track of when changed imperatively; preferably, it just follows the code's state. Closures provide an intuitive way to do so, by taking data as input and producing the desired value as output. They couple data with UI, but decouple the specific piece of UI being changed, making closures very modular. The example in this post uses Python and Blueprint.
Technicalities
First, it's good to be familiar with the technical details behind closures. To quote from Blueprint's documentation:
Expressions are only reevaluated when their inputs change. Because Blueprint doesn't manage a closure's application code, it can't tell what changes might affect the result. Therefore, closures must be pure, or deterministic. They may only calculate the result based on their immediate inputs, not properties of their inputs or outside variables.
To elaborate, expressions know when their inputs have changed due to the inputs being GObject properties, which emit the "notify" signal when modified.
Another thing to note is where casting is necessary. To again quote Blueprint's documentation:
Blueprint doesn't know the closure's return type, so closure expressions must be cast to the correct return type using a cast expression.
Just like Blueprint doesn't know about the return type, it also doesn't know the type of ambiguous properties. To provide an example:
Button simple_button {
  label: _("Click");
}

Button complex_button {
  child: Adw.ButtonContent {
    label: _("Click");
  };
}
Getting the label of simple_button in a lookup does not require a cast, since label is a known property of Gtk.Button with a known type:

simple_button.label

While getting the label of complex_button does require a cast, since child is of type Gtk.Widget, which does not have the label property:

complex_button.child as <Adw.ButtonContent>.label
Example
To set the stage, there's a window with a Gtk.Stack which has two Gtk.StackPages, one for the content and one for the loading view:
from gi.repository import Adw, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"
using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack {
      StackPage {
        name: "content";
        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";
        child: Adw.Spinner {};
      }
    };
  };
}
Switching Views Conventionally
One way to manage the views would be to rely on signals to communicate when another view should be shown:
from typing import Any

from gi.repository import Adw, GObject, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    stack: Gtk.Stack = Gtk.Template.Child()

    loading_finished = GObject.Signal()

    @Gtk.Template.Callback()
    def _show_content(self, *_args: Any) -> None:
        self.stack.set_visible_child_name("content")
A reference to the stack has been added, as well as a signal to communicate when loading has finished, and a callback to run when that signal is emitted.
using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  loading-finished => $_show_content();

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack stack {
      StackPage {
        name: "content";
        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";
        child: Adw.Spinner {};
      }
    };
  };
}
A signal handler has been added, as well as a name for the Gtk.Stack.
Only a couple of changes had to be made to switch the view when loading has finished, but all of them are sub-optimal:
- A reference in the code to the stack would be nice to avoid
- Imperatively changing the view makes following state harder
- This approach doesn't scale well when the data can be reloaded; it would require another signal to be added
Switching Views With a Closure
To use a closure, the class needs data as input and a method to return the desired value:
from typing import Any

from gi.repository import Adw, GObject, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    loading = GObject.Property(type=bool, default=True)

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, loading: bool) -> str:
        return "loading" if loading else "content"
The signal has been replaced with the loading property, and the template callback has been replaced by a method that returns a view name depending on the value of that property. _obj here is the template class, which is unused.
using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack {
      visible-child-name: bind $_get_visible_child_name(template.loading) as <string>;

      StackPage {
        name: "content";
        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";
        child: Adw.Spinner {};
      }
    };
  };
}
In Blueprint, the signal handler has been removed, as well as the unnecessary name for the Gtk.Stack. The visible-child-name property is now bound to a closure, which takes in the loading property referenced with template.loading.
This fixed the issues mentioned before:
- No reference in code is required
- State is bound to a single property
- If the data reloads, the view will also adapt
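The pattern also scales to more inputs, since the closure stays a pure function from state to UI value. As a hypothetical sketch (plain Python here, with GTK omitted so it stands alone; in the real widget this would be a @Gtk.Template.Callback() method with an extra bound property), an error state could be folded in like this:

```python
def get_visible_child_name(loading: bool, error: bool = False) -> str:
    """Pure mapping from state to view name.

    Hypothetical extension of the post's closure with an extra
    `error` input; not part of the original example.
    """
    if error:
        return "error"
    return "loading" if loading else "content"


print(get_visible_child_name(True))         # loading
print(get_visible_child_name(False))        # content
print(get_visible_child_name(False, True))  # error
```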
Closing Thoughts
Views are just one UI element that can be managed with closures, but there are plenty of other elements that should adapt to data: think of icons, tooltips, visibility, etc. Whenever you're writing a widget with moving parts and data, think about how the two can be linked; your future self will thank you!
16 Jun 2025 1:54pm GMT
Jussi Pakkanen: A custom C++ standard library part 4: using it for real
Writing your own standard library is all fun and games until someone (which is to say yourself) asks the important question: could this be actually used for real? Theories and opinions can be thrown about the issue pretty much forever, but the only way to actually know for sure is to do it.
Thus I converted CapyPDF, a fairly compact 15k LoC codebase, from the C++ standard library to Pystd, which is about 4k lines. All functionality is still the same, which is to say that the test suite passes; there are most likely new bugs that the tests do not catch. For those wanting to replicate the results themselves, clone the CapyPDF repo, switch to the pystdport branch and start building. Meson will automatically download and set up Pystd as a subproject. The code is fairly bleeding edge and only works on Linux with GCC 15.1.
Build times
One of the original reasons for starting Pystd was being annoyed at STL compile times. Let's see if we succeeded in improving on them. Build times when using only one core in debug look like this.
When optimizations are enabled the results look like this:
In both cases the Pystd version compiles in about a quarter of the time.
Binary size
C++ gets a lot of valid criticism for creating bloated code. How much of that is due to the language as opposed to the library?
That's quite unexpected. The debug info for STL types seems to take an extra 20 megabytes. But how about the executable code itself?
STL is still 200 kB bigger. Based on observations most of this seems to come from libstdc++'s implementation of variant. Note that if you try this yourself the Pystd version is probably 100 kB bigger, because by default the build setup links against libsupc++, which adds 100+ kB to binary sizes whereas linking against the main C++ runtime library does not.
Performance
Ok, fine, so we can implement basic code to build faster and take less space. Fine. But what about performance? That is the main thing that matters after all, right? CapyPDF ships with a simple benchmark program. Let's look at its memory usage first.
Apologies for the Y-axis not starting at zero. I tried my best to make it happen, but LibreOffice Calc said no. In any case the outcome itself is expected. Pystd has not seen any performance optimization work, so requiring 10% more memory is tolerable. But what about the actual runtime itself?
This is unexpected to say the least. A reasonable result would have been to be only 2x slower than the standard library, but the code ended up being almost 25% faster. This is even stranger considering that Pystd's containers do bounds checks on all accesses, the UTF-8 parsing code sometimes validates its input twice, the hashing algorithm is a simple multiply-and-xor and so on. Pystd should be slower, and yet, in this case at least, it is not.
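For reference, a multiply-and-xor hash of the kind mentioned above is only a few lines. This is an illustrative sketch in Python, not Pystd's actual implementation (which is C++ and may use different constants):

```python
def mul_xor_hash(data: bytes, seed: int = 0x9E3779B1) -> int:
    """Illustrative multiply-and-xor hash.

    The seed and multiplier are arbitrary odd constants chosen for
    this sketch, not Pystd's actual values.
    """
    h = seed
    for b in data:
        # XOR in a byte, multiply, and truncate to 32 bits
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h


print(hex(mul_xor_hash(b"hello world")))
```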
I have no explanation for this. It is expected that Pystd will start performing (much) worse as the data set size grows but that has not been tested.
16 Jun 2025 12:25am GMT
Victor Ma: A strange bug
In the last two weeks, I've been trying to fix a strange bug that causes the word suggestions list to have the wrong order sometimes.
For example, suppose you have an empty 3x3 grid. Now suppose that you move your cursor to each of the cells of the 1-Across slot (labelled α, β, and γ).
+---+---+---+
| α | β | γ |
+---+---+---+
| | | |
+---+---+---+
| | | |
+---+---+---+
You should expect the word suggestions list for 1-Across to stay the same, regardless of which cell your cursor is on. After all, all three cells have the same information: that the 1-Across slot is empty, and the intersecting vertical slot of whatever cell we're on (1-Down, 2-Down, or 3-Down) is also empty.
There are no restrictions whatsoever, so all three cells should show the same word suggestion list: one that includes every three-letter word.
But that's not what actually happens. In reality, the word suggestions list changes quite dramatically. The order of the list definitely changes. And it looks like there may even be words in one list that don't appear in another. What's going on here?
Understanding the code
My first step was to understand how the code for the word suggestions list works. I took notes along the way, in order to solidify my understanding. I especially found it useful to create diagrams for the word list resource (a pre-compiled resource that the code uses):

By the end of the first week, I had a good idea of how the word-suggestions-list code works. The next step was to figure out the cause of the bug and how to fix it.
Investigating the bug
After doing some testing, I realized that the seemingly random orderings of the lists are not so random after all! The lists are actually all in alphabetical order, but based on the letter that corresponds to the cell, not necessarily the first letter.
What I mean is this:
- The word suggestions list for cell α is sorted alphabetically by the first letter of the words. (This is normal alphabetical order.) For example: ALE, AXE, BAY, BOA, CAB
- The word suggestions list for cell β is sorted alphabetically by the second letter of the words. For example: CAB, BAY, ALE, BOA, AXE
- The word suggestions list for cell γ is sorted alphabetically by the third letter of the words. For example: BOA, CAB, ALE, AXE, BAY
Fixing the bug
The cause of the bug is quite simple: The function that generates the word suggestions list does not sort the list before it returns it. So the order of the list is whatever order the function added the words in. And because of how our implementation works, that order happens to be alphabetical, based on the letter that corresponds to the cell.
The fix for the bug is also quite simple, at least theoretically. All we need to do is sort the list before we return it. But in reality, this fix runs into some other problems that need to be addressed. Those problems are what I'm going to work on this week.
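In pure-Python terms, the fix amounts to something like the sketch below. (The real Crosswords code differs; the function and helper names here are hypothetical.)

```python
def fits(word: str, pattern: list) -> bool:
    """True if `word` matches `pattern`, where None means any letter."""
    return len(word) == len(pattern) and all(
        p is None or p == c for p, c in zip(pattern, word)
    )


def word_suggestions(words: list, pattern: list) -> list:
    # Before the fix, results came back in whatever order the lookup
    # produced them: alphabetical by the keyed letter. Sorting before
    # returning makes every cell of a slot show the same list.
    return sorted(w for w in words if fits(w, pattern))


words = ["CAB", "BAY", "ALE", "BOA", "AXE"]
print(word_suggestions(words, [None, None, None]))
# ['ALE', 'AXE', 'BAY', 'BOA', 'CAB']
```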
16 Jun 2025 12:00am GMT
15 Jun 2025
Planet GNOME
Sam Thursfield: Status update, 15/06/2025
This month I created a personal data map where I tried to list all my important digital identities.

(It's actually now a spreadsheet, which I'll show you later. I didn't want to start the blog post with something as dry as a screenshot of a spreadsheet.)
Anyway, I made my personal data map for several reasons.
The first reason was to stay safe from cybercrime. In a world of increasing global unfairness and inequality, of course crime and scams are increasing too. Schools don't teach how digital tech actually works, so it's a great time to be a cyber criminal. Imagine being a house burglar in a town where nobody knows how doors work.

Lucky for me, I'm a professional door guy. So I don't worry too much beyond having a really really good email password (it has numbers and letters). But it's useful to double-check whether I have my credit card details on a site where the password is still "sam2003".
The second reason is to help me migrate to services based in Europe. Democracy over here is what it is, there are good days and bad days, but unlike the USA we have at least more options than a repressive death cult and a fundraising business. (Shout to @angusm@mastodon.social for that one). You can't completely own your digital identity and your data, but you can at least try to keep it close to home.
The third reason was to see who has the power to influence my online behaviour.
This was an insight from reading the book Technofeudalism. I've always been uneasy about websites tracking everything I do. Most of us are, to the point that we have made myths like "your phone microphone is always listening so Instagram can target adverts". (As McSweeney's Internet Tendency confirms, it's not! It's just tracking everything you type, every app you use, every website you visit, and everywhere you go in the physical world).
I used to struggle to explain why all that tracking feels bad. Technofeudalism frames a concept of cloud capital, saying this is now more powerful than other kinds of capital because cloud capitalists can do something Henry Ford, Walt Disney and The Monopoly Guy can only dream of: mine their data stockpile to produce precisely targeted recommendations, search bubbles and adverts which can influence your behaviour before you've even noticed.
This might sound paranoid when you first hear it, but consider how social media platforms reward you for expressing anger and outrage. Remember the first time you saw a post on Twitter from a stranger that you disagreed with? And your witty takedown attracted likes and praise? This stuff can be habit-forming.
In the 20th century, ad agencies changed people's buying patterns and political views using billboards, TV channel and newspapers. But all that is like a primitive blunderbuss compared to recommendation algorithms, feedback loops and targeted ads on social media and video apps.
I lived through the days when web search for "Who won the last election" would just return you 10 pages that included the word "election". (If you're nostalgic for those days… you'll be happy to know that GNOME's desktop search engine still works like that today! :-) I can spot when apps try to 'nudge' me with dark patterns. But kids aren't born with that skill, and they aren't necessarily going to understand the nature of Tech Billionaire power unless we help them to see it. We need a framework to think critically and discuss the power that Meta, Amazon and Microsoft have over everyone's lives. Schools don't teach how digital tech actually works, but maybe a "personal data map" can be a useful teaching tool?
By the way, here's what my cobbled-together "Personal data map" looks like, taking into account security, what data is stored and who controls it. (With some fake data… I don't want this blog post to be a "How to steal my identity" guide.)
| Name | Risks | Sensitivity rating | Ethical rating | Location | Controller | First factor | Second factor | Credentials cached? | Data stored |
|---|---|---|---|---|---|---|---|---|---|
| Bank account | Financial loss | 10 | 2 | Europe | Bank | Fingerprint | None | On phone | Money, transactions |
| | Identity theft | 5 | -10 | USA | Meta | Password | | On phone | Posts, likes, replies, friends, views, time spent, locations, searches. |
| Google Mail (sam@gmail.com) | Reset passwords | 9 | -5 | USA | | Password | None | Yes - cookies | Conversations, secrets |
| Github | Impersonation | 3 | 3 | USA | Microsoft | Password | OTP | Yes - cookies | Credit card, projects, searches. |
How is it going migrating off USA based cloud services?
"The internet was always a project of US power", says Paris Marx in a keynote at the PublicSpaces conference, which I'd never heard of before.
Closing my Amazon account took an unnecessary amount of steps, and it was sad to say goodbye to the list of 12 different addresses I called home at various times since 2006, but I don't miss it; I've been avoiding Amazon for years anyway. When I need English-language books, I get them from an Irish online bookstore named Kenny's. (Ireland, cleverly, did not leave the EU, so they can still ship books to Spain without incurring import taxes).
Dropbox took a while because I had years of important stuff in there. I actually don't think they're too bad of a company, and it was certainly quick to delete my account. (And my data… right? You guys did delete all my data?).
I was using Dropbox to sync notes with the Joplin notes app, and switched to the paid Joplin Cloud option, which seems a nice way to support a useful open source project.
I still needed a way to store sensitive data, and realized I have access to Protondrive. I can't recommend that as a service because the parent company Proton AG don't seem so serious about Linux support, but I got it to work thanks to some heroes who added a protondrive backend to rclone.
Instead of using Google cloud services to share photos, and to avoid anything so primitive as an actual cable, I learned that KDE Connect can transfer files from my Android phone over my laptop really neatly. KDE Connect is really good. On the desktop I use GSConnect which integrates with GNOME Shell really well. I think I've not been so impressed by a volunteer-driven open source project in years. Thanks to everyone who worked on these great apps!
I also migrated my VPS from a US-based host, Tornado VPS, to one in Europe. Tornado VPS (formerly prgmr.com) are a great company, but storing data in the USA doesn't seem like the way forwards.
That's about it so far. Feels a bit better.
What's next?
I'm not sure what's next!
I can't leave Github and Gitlab.com, but my days of "Write some interesting new code and push it straight to Github" are long gone. I didn't sign up to train somebody else's LLM for free, and neither should you. (I'm still interested in sharing interesting code with nice people, of course, but let's not make it so easy for Corporate America to take our stuff without credit or compensation. Bring back the "sneakernet"!)
Leaving Meta platforms and dropping YouTube doesn't feel directly useful. It's like individually renouncing debit cards, or air travel: a lot of inconvenience for you, but the business owners don't even notice. The important thing is to use the alternatives more. Hence why I still write a blog in 2025 and mostly read RSS feeds and the Fediverse. Gigs where I live are mostly only promoted on Instagram, but I'm sure that's temporary.
In the first quarter of 2025, rich people put more money into AI startups than everything else put together (see: Pivot to AI). Investors love a good bubble, but there's also an element of power here.
If programmers only know how to write code using Copilot, then whoever controls Microsoft has the power to decide what code we can and can't write. (Currently this seems limited to not using the word 'gender'. But I can imagine a future where it catches you reverse-engineering proprietary software, or jailbreaking locked-down devices, or trying to write a new Bittorrent client).
If everyone gets their facts from ChatGPT, then whoever controls OpenAI has the power to tweak everyone's facts, an ability that is currently limited only to presidents of major world superpowers. If we let ourselves avoid critical thinking and rely on ChatGPT to generate answers to hard questions instead, which teachers say is very much exactly what's happening in schools now… then what?
15 Jun 2025 8:10pm GMT
14 Jun 2025
Planet GNOME
Toluwaleke Ogundipe: Hello GNOME and GSoC!
I am delighted to announce that I am contributing to GNOME Crosswords as part of the Google Summer of Code 2025 program. My project primarily aims to add printing support to Crosswords, with some additional stretch goals. I am being mentored by Jonathan Blandford, Federico Mena Quintero, and Tanmay Patil.
The Days Ahead
During my internship, I will be refactoring the puzzle rendering code to support existing and printable use cases, adding clues to rendered puzzles, and integrating a print dialog into the game and editor with crossword-specific options. Additionally, I should implement an ipuz2pdf utility to render puzzles in the IPUZ format to PDF documents.
Beyond the internship, I am glad to be a member of the GNOME community and look forward to so much more. In the coming weeks, I will be sharing updates about my GSoC project and other contributions to GNOME. If you are interested in my journey with GNOME and/or how I got into GSoC, I implore you to watch out for a much longer post coming soon.
Appreciation
Many thanks to Hans Petter Jansson, Federico Mena Quintero and Jonathan Blandford, who have all played major roles in my journey with GNOME and GSoC.
14 Jun 2025 8:48pm GMT
Steven Deobald: 2025-06-14 Foundation Report
These weeks are going by fast and I'm still releasing these reports after the TWIG goes out. Weaker humans than I might be tempted to automate - but don't worry! These will always be artisanal, hand-crafted, single-origin, uncut, and whole bean. Felix encouraged me to add these to the following week's TWIG, at least, so I'll start doing that.
## Opaque Stuff
- a few policy decisions are in-flight with the Board - productive conversations happening on all fronts, and it feels really good to see them moving forward
## Elections
Voting closes in 5 days (June 19th). If you haven't voted yet, get your votes in!
## GUADEC
Planning for GUADEC is chugging along. Sponsored visas, flights, and hotels are getting sorted out.
If you have a BoF or workshop proposal, get it in before tomorrow!
## Operations
Our yearly CPA review is finalized. Tax filings and 990 prep are in flight.
## Infrastructure
You may have seen our infrastructure announcement on social media earlier this week. This closes a long chapter of transitioning to AWS for GNOME's essential services. A number of people have asked me if our setup is now highly AWS-specific. It isn't. The vast majority of GNOME's infrastructure runs on vanilla Linux and OpenShift. AWS helps our infrastructure engineers scale our services. They're also generously donating the cloud infrastructure to the Foundation to support the GNOME project.
## Fundraising
Over the weekend, I booted up a couple of volunteer developers to help with a sneaky little project we kicked off last week. As Julian, Pablo, Adrian, and Tobias have told me: No Promises… so I'm not making any. You'll see it when you see it. Hopefully in a few days. This has been the biggest focus of the Foundation over the past week-and-a-half.
Many thanks to the other folks who've been helping with this little initiative. The Foundation could really use some financial help soon, and this project will be the base we build everything on top of.
## Meeting People
Speaking of fundraising, I met Loren Crary of the Python Software Foundation! She is extremely cool and we found out that we both somehow descended on the term "gentle nerds", each thinking we coined it ourselves. I first used this term in my 2015 Rootconf keynote. She's been using it for ages, too. But I didn't originally ask for her help with terminology. I went to her to sanity-check my approach to fundraising and - hooray! - she tells me I'm not crazy. Semi-related: she asked me if there are many books on GNOME and I had to admit I've never read one myself. A quick search shows me Mastering GNOME: A Beginner's Guide and The Linux GNOME Desktop For Dummies. Have you ever read a book on GNOME? Or written one?
I met Jorge Castro (of CNCF and Bazzite fame), a friend of Matt Hartley. We talked October GNOME, Wayland, dconf, KDE, Kubernetes, Fedora, and the fact that the Linux desktop is the true UI to cloud-native …everything. He also wants us to be co-conspirators and I'm all about it. It had never really occurred to me that the ubiquity of dconf means GNOME is actually highly configurable, since I tend to eat the default GNOME experience (mostly), but it's a good point. I told him a little story: the first Linux desktop experience that outstripped both Windows and macOS for me was on a company-built RHEL machine back in 2010. Linux has been better than commercial operating systems for 15 years and the gap keeps widening. The Year of The Linux Desktop was a decade ago… just take the W.
I had a long chat with Tobias and, among other things, we discussed the possibility of internal conversation spaces for Foundation Members and the possibility of a project General Assembly. Both nice ideas.
I met Alejandro and Ousama from Slimbook. It was really cool to hear what their approach to the market is, how they ensure Linux and GNOME run perfectly on their hardware, and where their devices go. (They sell to NASA!) We talked about improving upstream communications and ways for the Foundation to facilitate that. We're both hoping to get more Slimbooks in the hands of more developers.
We had our normal Board meeting. Karen gave me some sage advice on fundraising campaigns and grants programs.
## One-Month Feedback Session
I had my one-month feedback session with Rob and Allan, who are President and Vice-President at the moment, respectively. (And thus, my bosses.)
Some key take-aways are that they'd like me to increase my focus on the finances and try to make my community outreach a little more sustainable by being less verbose. Probably two sides of the same coin, there. I've already shifted my focus toward finances as of two weeks ago… which may mean you've seen less of me in Matrix and other community spaces. I'm still around! I just have my nose in a spreadsheet or something.
They said some nice stuff, too, but nobody gets better by focusing on the stuff they're already doing right.
14 Jun 2025 5:38am GMT
Ignacy Kuchciński: Taking out the trash, or just sweeping it under the rug? A story of leftovers after removing files
There are many things that we take for granted in this world, and one of them is undoubtedly the ability to clean up your files - imagine a world where you couldn't just throw away all those disk-space-hungry things you no longer find useful. Though that might sound impossible, it turns out some people have encountered a particularly interesting bug that resulted in the Trash being silently swept under the rug instead of emptied in Nautilus. Since I was blessed to run into that issue myself, I decided to fix it and shed some light on the fun.
Trash after emptying in Nautilus: are the files really gone?
It all started with a 2009 Ubuntu Launchpad ticket, reported against Nautilus. The user found 70 GB worth of files with a disk analyzer in the ~/.local/share/Trash/expunged directory, even though they had emptied the trash with the graphical interface. They did realize the offending files belonged to another user; however, they couldn't reproduce it easily at first. After all, when you try to move to trash a file or a directory not belonging to you, you would usually be correctly informed that you don't have the necessary permissions, and perhaps even be offered the option to permanently delete them instead. So what was so special about this case?
First let's get a better view of when we can and when we can't permanently delete files, something that is done at the end of a successful trash emptying operation. We'll focus only on the owners of relevant files, since other factors, such as file read/write/execute permissions, can be adjusted freely by their owners, and that's what trash implementations will do for you. Here are cases where you CAN delete files:
- when a file is in a directory owned by you, you can always delete it
- when a directory is in a directory owned by you and it's owned by you, you can obviously delete it
- when a directory is in a directory owned by you but you don't own it, and it's empty, you can surprisingly delete it as well
So to summarize, no matter who the owner of the file or directory is, if it's in a directory owned by you, you can get rid of it. There is one exception to this - the directory must be empty, otherwise you will be able to remove neither it nor the files it contains. Which takes us to an analogous list for cases where you CANNOT delete files:
- when a directory is in a directory owned by you but you don't own it, and it's not empty, you can't delete it.
- when a file is in a directory NOT owned by you, you can't delete it
- when a directory is in a directory NOT owned by you, you can't delete it either
In contrast with removing files in a directory you own, when you are not the owner of the parent directory, you cannot delete any of the child files and directories, without exceptions. This is actually the reason for the one case where you can't remove something from a directory you own - to remove a non-empty directory, you first need to recursively delete all of the files and directories it contains, and you can't do that if the directory is not owned by you.
Now let's look inside the trash can, or rather at how it functions. The reason for separating the permanent-delete and trash operations is obvious: users are expected to change their mind and get their files back on a whim, so there needs to be a middle step. That's where the Trash specification comes in, providing a common way in which all "Trash can" implementations should store, list, and restore trashed files, even across different filesystems - the Nautilus Trash feature is one of the possible implementations. Trashing actually works by moving files to the $XDG_DATA_HOME/Trash/files directory and setting up some metadata to track their original location, so they can be restored if needed. Only when the user empties the trash are they actually deleted. Since it's all about moving files, specifically outside their previous parent directory (i.e. to the Trash), let's look at cases where you CAN move files:
- when a file is in a directory owned by you, you can move it
- when a directory is in a directory owned by you and you own it, you can obviously move it
We can see that the only exception when moving files in a directory you own is when the directory you're moving doesn't belong to you, in which case you will be correctly informed that you don't have permissions. In the remaining cases, users are able to move files and therefore trash them. Now what about the cases where you CANNOT move files?
- when a directory is in a directory owned by you but you don't own it, you can't move it
- when a file is in a directory NOT owned by you, you can't move it either
- when a directory is in a directory NOT owned by you, you still can't move it
In those cases Nautilus will either not expose the ability to trash files, or will tell the user about the error, and the system works well - even if moving them were possible, permanently deleting files in a directory not owned by you is not supported anyway.
So, where's the catch? What are we missing? We've got two different operations, moving (trashing) and deleting, that can succeed or fail under different circumstances. We need to find a situation where moving a file is possible but deleting it is not, and such an overlap exists, obtained by chaining the following two rules:
- when a directory A is in a directory owned by you and it's owned by you, you can obviously move it
- when a directory B is in a directory A owned by you but you don't own it, and it's not empty, you can't delete it.
So a simple way to reproduce the bug was found, precisely:
mkdir -p test/root
touch test/root/file
sudo chown root:root test/root
Afterwards, trashing and emptying in Nautilus or with the gio trash command will result in the files not being deleted, but left in ~/.local/share/Trash/expunged, which gvfsd-trash uses as an intermediary during the emptying operation. The situations where that can happen are very rare, but they do exist - personally I encountered this when manually cleaning container files created by podman in ~/.local/share/containers, which arguably I shouldn't be doing in the first place, and should rather leave up to podman itself. Nevertheless, it's still possible from the user's perspective, and should be handled and prevented correctly. That's exactly what was done: a ticket was submitted and moved to the appropriate place, which turned out to be glib itself, and I submitted an MR that was merged - now both Nautilus and gio trash will recursively check for this case and prevent you from doing this. You can expect it in the next glib release, 2.85.1.
On an ending note, I want to thank the glib maintainer Philip Withnall, who walked me through the required changes and reviewed them, and ask you one thing: is your ~/.local/share/Trash/expunged really empty? :)
14 Jun 2025 2:11am GMT
13 Jun 2025
Planet GNOME
Thibault Martin: TIL that htop can display more useful metrics
A program on my Raspberry Pi was reading data from disk, performing operations, and writing the result back to disk. It did so unusually slowly. The problem could be either that the CPU was too underpowered to perform the operations it needed, or that the disk was too slow during reads, writes, or both.
I asked colleagues for opinions, and one of them mentioned that htop could point me in the right direction. The time a CPU spends waiting for an I/O device such as a disk is known as I/O wait. If that wait time exceeds 10%, the CPU spends a lot of time waiting for data from the I/O device, so the disk is likely the bottleneck. If the wait time remains low, the CPU is likely the bottleneck.
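The numbers htop displays come from /proc/stat, where the aggregate cpu line lists cumulative time spent in the user, nice, system, idle, iowait, irq, softirq, and steal states. As a small sketch, the iowait share of such a line can be computed like this (the sample line below is hypothetical):

```python
def iowait_share(cpu_line):
    """Return the fraction of CPU time spent in iowait, given the
    aggregate "cpu ..." line from /proc/stat.

    Field order: user nice system idle iowait irq softirq steal ...
    Note these counters are cumulative since boot; tools like htop
    diff two samples to show a current percentage.
    """
    fields = [int(x) for x in cpu_line.split()[1:]]
    iowait = fields[4]
    return iowait / sum(fields)

# Hypothetical /proc/stat snapshot:
line = "cpu 4705 150 1120 16250 520 30 45 0 0 0"
print(f"iowait: {iowait_share(line):.1%}")  # prints "iowait: 2.3%"
```

On a real system you would read the first line of /proc/stat twice, a second apart, and compute the share from the difference, which is what htop's wa figure reflects.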
By default htop doesn't show the wait time. By pressing F2 I can access htop's configuration. There I can use the right arrow to move to the Display options, select Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest), and press Space to enable it.
I can then press the left arrow to get back to the options menu, and move to Meters. Using the right arrow I can go to the rightmost column, select CPUs (1/1): all CPUs by pressing Enter, move it to one of the two columns, and press Enter when I'm done. With it still selected, I can press Enter to alternate through the different visualisations. The most useful to me is the [Text] one.
I can do the same with Disk IO to track the global read/write speed, and Blank to make the whole set-up more readable.
With htop configured like this, I can trigger my slow program again and see that the CPU is not waiting for the disk: all CPUs have a wa of 0%.
If you know more useful tools I should know about when chasing bottlenecks, or if you think I got something wrong, please email me at thib@ergaster.org!
13 Jun 2025 12:30pm GMT