05 Jul 2025
Planet GNOME
Jussi Pakkanen: Deoptimizing a red-black tree
An ordered map is typically slower than a hash map, but it is needed every now and then. Thus I implemented one in Pystd. This implementation does not use individually allocated nodes, but instead stores all data in a single contiguous array.
Implementing the basics was not particularly difficult. Debugging it to actually work took ages of staring at the debugger, drawing trees by hand on paper, printfing things out in Graphviz format and copypasting the output to a visualiser. But eventually I got it working. Performance measurements showed that my implementation is faster than std::map but slower than std::unordered_map.
So far so good.
The test application creates a tree with a million random integers. This means that the nodes are most likely in a random order in the backing store and searching through them causes a lot of cache misses. Having all nodes in an array means we can rearrange them for better memory access patterns.
I wrote some code to reorganize the nodes so that the root is at the first spot and the remaining nodes are stored layer by layer. In this way the query always processes memory "left to right" making things easier for the branch predictor.
Or that's the theory anyway. In practice this made things slower. And not even a bit slower, the "optimized version" was more than ten percent slower. Why? I don't know. Back to the drawing board.
Maybe interleaving both the left and right child nodes next to each other is the problem? That places two mutually exclusive pieces of data on the same cache line. An alternative would be to place the entire left subtree in one memory area and the right one in a separate one. After thinking about this for a while, I realized this can be accomplished by storing the nodes in in-order traversal order, i.e. in numerical order (both layouts are sketched below).
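Here is a minimal sketch of the two reorderings in question. It is purely illustrative (plain Rust rather than Pystd's actual C++): nodes live in a flat array, child links are indices into that array, and relocating nodes means rewriting those indices.
#[derive(Clone, Copy)]
struct Node {
    key: i64,
    left: Option<usize>,
    right: Option<usize>,
}
// Layer-by-layer (breadth-first) layout: root first, then its children, and so on.
fn reorder_breadth_first(nodes: &[Node], root: usize) -> Vec<Node> {
    let mut order = Vec::with_capacity(nodes.len());
    let mut queue = std::collections::VecDeque::from([root]);
    while let Some(i) = queue.pop_front() {
        order.push(i);
        if let Some(l) = nodes[i].left { queue.push_back(l); }
        if let Some(r) = nodes[i].right { queue.push_back(r); }
    }
    remap(nodes, &order)
}
// In-order (numerical) layout: the whole left subtree before a node, the whole right subtree after it.
fn reorder_in_order(nodes: &[Node], root: usize) -> Vec<Node> {
    fn walk(nodes: &[Node], i: usize, order: &mut Vec<usize>) {
        if let Some(l) = nodes[i].left { walk(nodes, l, order); }
        order.push(i);
        if let Some(r) = nodes[i].right { walk(nodes, r, order); }
    }
    let mut order = Vec::with_capacity(nodes.len());
    walk(nodes, root, &mut order);
    remap(nodes, &order)
}
// Rewrite the child indices so they point into the reordered array.
fn remap(nodes: &[Node], order: &[usize]) -> Vec<Node> {
    let mut new_index = vec![0usize; nodes.len()];
    for (new, &old) in order.iter().enumerate() {
        new_index[old] = new;
    }
    order.iter().map(|&old| Node {
        key: nodes[old].key,
        left: nodes[old].left.map(|l| new_index[l]),
        right: nodes[old].right.map(|r| new_index[r]),
    }).collect()
}
fn main() {
    // A three-node tree: index 0 is the root (key 2), with children 1 (key 1) and 2 (key 3).
    let nodes = [
        Node { key: 2, left: Some(1), right: Some(2) },
        Node { key: 1, left: None, right: None },
        Node { key: 3, left: None, right: None },
    ];
    let bfs: Vec<i64> = reorder_breadth_first(&nodes, 0).iter().map(|n| n.key).collect();
    let inorder: Vec<i64> = reorder_in_order(&nodes, 0).iter().map(|n| n.key).collect();
    println!("{:?} {:?}", bfs, inorder); // [2, 1, 3] [1, 2, 3]
}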
I did that. It was also slower than a random layout. Why? Again: no idea.
Time to focus on something else, I guess.
05 Jul 2025 1:12pm GMT
Steven Deobald: 2025-07-05 Foundation Update
## The Cat's Out Of The Bag
Since some of you are bound to see this Reddit comment, and my reply, it's probably useful for me to address it in a more public forum, even if it violates my "No Promises" rule.
No, this wasn't a shoot-from-the-hip reply. This has been the plan since I proposed a fundraising strategy to the Board. It is my intention to direct more of the Foundation's resources toward GNOME development, once the Foundation's basic expenses are taken care of. (Currently they are not.) The GNOME Foundation won't stop running infrastructure, planning GUADEC, providing travel grants, or any of the other good things we do. But rather than the Foundation contributing to GNOME's development exclusively through inbound/restricted grants, we will start to produce grants and fellowships ourselves.
This will take time and it will demand more of the GNOME project. The project needs clear governance and management or we won't know where to spend money, even if we have it. The Foundation won't become a kingmaker, nor will we run lotteries - it's up to the project to make recommendations and help us guide the deployment of capital toward our mission.
## Friends of GNOME
So far, we have a cute little start to our fundraising campaign: I count 172 public Friends of GNOME over on https://donate.gnome.org/ … to everyone who contributes to GNOME and to everyone who donates to GNOME: thank you. Every contribution makes a huge difference and it's been really heartwarming to see all this early support.
We've taken the first step out of our cozy f/oss spaces: Reddit. One user even set up a "show me your donation!" thread. It's really cute. It's hard to express just how important it is that we go out and meet our users for this exercise. We need them to know what an exciting time it is for GNOME: Windows 10 is dying, MacOS gets worse with every release, and they're going to run GNOME on a phone soon. We also need them to know that GNOME needs their help.
Big thanks to Sri for pushing this and to him and Brage for moderating /r/gnome. It matters a lot to find a shared space with users and if, as a contributor, you've been feeling like you need a little boost lately, I encourage you to head over to those Reddit threads. People love what you build, and it shows.
## Friends of GNOME: Partners
The next big thing we need to do is to find partners who are willing to help us push a big message out across a lot of channels. We don't even know who our users are, so it's pretty hard to reach them. The more people see that GNOME needs their help, the more help we'll get.
Everyone I know who runs GNOME (but doesn't pay much attention to the project) said the same thing when I asked what they wanted in return for a donation: "Nothing really… I just need you to ask me. I didn't know GNOME needed donations!"
If you know of someone with a large following or an organization with a lot of reach (or, heck, even a little reach), please email me and introduce me. I'm happy to get them involved to boost us.
## Friends of GNOME: Shell Notification
KDE, Thunderbird, and Blender have had runaway success with their small donation notification. I'm not sure whether we can do this for GNOME 49 or not, but I'd love to try. I've opened an issue here:
https://gitlab.gnome.org/Teams/Design/os-mockups/-/issues/274
We may not know who our users are. But our software knows who our users are.
## Annual Report
I should really get on this, but it's been a busy week with other things. Thanks to everyone who's contributed their thoughts to the "Successes for 2025" issue so far. If you don't see your name and you still want to contribute something, please go ahead!
## Fiscal Controls
One of the aforementioned "other things" is Fiscal Controls.
This concept goes by many names. "Fiscal Controls", "Internal Controls", "Internal Policies and Procedures", etc. But they all refer to the same thing: how to manage financial risk. We're taking a three-pronged approach to start with:
- Reduce spend and tighten up policies. We have put the travel policy on pause (barring GUADEC, which was already approved) and we intend to tighten up all our policies.
- Clarity on capital shortages. We need to know exactly what our P&L looks like in any given month, and what our 3-month, 6-month, and annual projections look like based on yesterday's weather. Our bookkeepers, Ops team, and new treasurers are helping with this.
- Clarity in reporting. A 501(c)(3) is … kind of a weird shape. Not everyone on the Board is familiar with running a business, and most certainly aren't familiar with running a non-profit. So we need to make it painfully straightforward for everyone on the Board to understand the details of our financial position, without getting into the weeds: How much money are we responsible for, as a fiscal host? How much money is restricted? How much core money do we have? Accounting is more art than science, and the nuances of reporting accurately (but without forcing everyone to read a balance sheet) are a large part of why that's the case. Again, we have a lot of help from our bookkeepers, Ops team, and new treasurers.
There's a lot of work to do here and we'll keep iterating, but these feel like strong starts.
## Organizational Resilience
The other aforementioned "other thing" is resilience. We have a few things happening here.
First, we need broader ownership, control, and access to bank accounts. This is, of course, related to, but different from, fiscal controls - our controls ensure no one person can sign themselves a cheque for $50,000. Multiple signatories ensure that such responsibility doesn't rest with a single individual. Everyone at the GNOME Foundation has impeccable moral standing but people do die, and we need to add resilience to that inevitability. More realistically (and immediately), we will be audited soon and the auditors will not care how trustworthy we believe one another to be.
Second, we have our baseline processes: filing 990s, renewing our registration, renewing insurance, etc. All of these processes should be accessible to (and, preferably, executable by) multiple people.
Third, we're finally starting to make good use of Vaultwarden. Thanks again, Bart, for setting this up for us.
Fourth, we need to ensure we have at least 3 administrators on each of our online accounts. Or, at worst, 2 administrators. Online accounts with an account owner should lean on an organizational account owner (not an individual) which multiple people control together. Thanks Rosanna for helping sort this out.
Last, we need at least 2 folks with root level access to all our self-hosted services. This is of course true in the most literal sense, but we also need our SREs to have accounts with each service.
## Digital Wellbeing Kickoff
I'm pleased to announce that the Digital Wellbeing contract has kicked off! The developer who was awarded the contract is Ignacy Kuchciński and he has begun working with Philip and Sam as of Tuesday.
## Office Hours
I had a couple pleasant conversations with hackers this week: Jordan Petridis and Sophie Harold. I asked Sophie what she thought about the idea of "office hours" as I feel like I've gotten increasingly disconnected from the community after my first few weeks. Her response was something to the effect of "you can only try."
So let's do that. I'll invite maintainers and if you'd like to join, please reach out to a maintainer to find out the BigBlueButton URL for next Friday.
## A Hacker In Need Of Help
We have a hacker in the southwest United States who is currently in an unsafe living situation. This person has given me permission to ask for help on their behalf. If you or someone you know could provide a safe temporary living situation within the continental United States, please get in touch with me. They just want to hack in peace.
05 Jul 2025 6:31am GMT
04 Jul 2025
Planet GNOME
Hans de Goede: Recovering a FP2 which gives "flash write failure" errors
This blog post describes my successful OS re-install on a Fairphone 2 which was giving "flash write failure" errors when flashing it with fastboot, with the flash_FP2_factory.sh script. I'm writing down my recovery steps for this in case they are useful for anyone else.
I believe that this is caused by the bootloader code which implements fastboot not having the ability to retry recoverable eMMC errors. It is still possible to write the eMMC from Linux which can retry these errors.
So we can recover by directly fastboot-ing a recovery.img and then flashing things over adb.
( See step by step instructions... )
04 Jul 2025 4:14pm GMT
This Week in GNOME: #207 Replacing Shortcuts
Update on what happened across the GNOME project in the week from June 27 to July 04.
GNOME Core Apps and Libraries
Sophie 🏳️🌈 🏳️⚧️ (she/her) reports
The Release Team is happy to announce that Papers will be the default Document Viewer starting with GNOME 49. This comes after a Herculean effort by the Papers maintainers and contributors that started about four years ago. The inclusion into GNOME Core was lately blocked only by missing screen-reader support, which is now ready to be merged. Papers is a fork of Evince motivated by a faster pace of development.
Papers is not just a GTK 4 port but also brings new features like better document annotations and support for mobile form factors. It is currently maintained by Pablo Correa Gomez, Qiu Wenbo, Markus Göllnitz, and lbaudin.
Emmanuele Bassi reports
While GdkPixbuf, the elderly statesperson of image loading libraries in GNOME, is being phased out in favour of better alternatives, like Glycin, we are still hard at work to ensure it's working well enough while applications and libraries are ported. Two weeks ago, GdkPixbuf acquired a safe, sandboxed image loader using Glycin; this week, this loader has been updated to be the default on Linux. The Glycin loader has also been updated to read SVG, and save image data including metadata. Additionally, GdkPixbuf has a new Android-native loader, using platform API; this allows loading icon assets when building GTK for Android. For more information, see the release notes for GdkPixbuf 2.43.3, the latest development snapshot.
Sophie 🏳️🌈 🏳️⚧️ (she/her) announces
The nightly GNOME Flatpak runtime and SDK `org.gnome.Sdk//master` are now based on the Freedesktop runtime and SDK 25.08beta. If you are using the nightly runtime in your Flatpak development manifest, you might have to adjust a few things:
- If you are using the LLVM extension, the required `sdk-extensions` entry is now `org.freedesktop.Sdk.Extension.llvm20`. Don't forget to also adjust the `append-path`. On your development system you will probably also have to run `flatpak install org.freedesktop.Sdk.Extension.llvm20//25.08beta`.
- If you are using other SDK extensions, they might also require a newer version. They can be installed with commands like `flatpak install org.freedesktop.Sdk.Extension.rust-stable//25.08beta`.
Libadwaita ↗
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) 🏳️⚧️🏳️🌈 says
libadwaita finally has a replacement for the deprecated `GtkShortcutsWindow` - `AdwShortcutsDialog`. `AdwShortcutLabel` is available as a separate widget as well, replacing `GtkShortcutLabel`.
Calendar ↗
A simple calendar application.
Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️⚧️ announces
Happy Disability Pride Month everybody :)
During the past few weeks, there's been an overwhelming amount of progress with accessibility on GNOME Calendar:
- Event widgets/popovers will convey to screen readers that they are toggle buttons. They will also convey their states (whether they're pressed or not) and that they have a popover. (See !587)
- Calendar rows will convey to screen readers that they are check boxes, along with their states (whether they're checked or not). Additionally, they will no longer require a second press of a tab to get to the next row; one tab will be sufficient. (See !588)
- Month and year spin buttons are now capable of being interacted with using arrow up/down buttons. They will also convey to screen readers that they are spin buttons, along with their properties (current, minimum, and maximum values). The month spin button will also wrap, where going back a month from January will jump to December, and going to the next month from December will jump to January. (See !603)
- Events in the agenda view will convey their respective titles and descriptions to screen readers. (See !606)
All these improvements will be available in GNOME 49.
Accessibility on Calendar has progressed to the point where I believe it's safe to say that, as of GNOME 49, Calendar will be usable exclusively with a keyboard, without significant usability friction!
There's still a lot of work to be done in regards to screen readers, for example conveying time appropriately and event descriptions. But really, just 6 months ago, we went from having absolutely no idea where to even begin with accessibility in Calendar - which has been an ongoing issue for literally a decade - to having something workable exclusively with a keyboard and screen reader! :3
Huge thanks to Jeff Fortin for coordinating the accessibility initiative, especially with keeping the accessibility meta issue updated; Georges Stavracas for single-handedly maintaining GNOME Calendar and reviewing all my merge requests; and Lukáš Tyrychtr for sharing feedback in regards to usability.
All my work so far has been unpaid and voluntary; hundreds of hours were put into developing and testing all the accessibility-related merge requests. I would really appreciate if you could spare a little bit of money to support my work, thank you! 🩷
Glycin ↗
Sandboxed and extendable image loading and editing.
Sophie 🏳️🌈 🏳️⚧️ (she/her) reports
We recently switched our legacy image loading library GdkPixbuf over to using glycin internally, which is our new image loading library. Glycin is safer, faster, and supports more features. Something that we missed is how much software depends on the image saving capabilities of GdkPixbuf for different formats. But that's why we are making such changes early in the cycle to find these issues.
Glycin now supports saving images for the AVIF, BMP, DDS, Farbfeld, GIF, HEIC, ICO, JPEG, OpenEXR, PNG, QOI, TGA, TIFF, and WebP image formats. JXL will hopefully follow. This means GdkPixbuf can also save the formats that it could save before. The changes are available as glycin 2.0.alpha.6 and gdk-pixbuf 2.43.3.
Third Party Projects
Alexander Vanhee says
Gradia has been updated with the ability to upload edited images to an online provider of choice. I made sure users are both well informed about these services and can freely choose without being forced to use any particular one. The data related to this feature can also be updated dynamically without requiring a new release, enabling us to quickly address any data quality issues and update the list of providers as needed, without relying on additional package maintainer intervention.
You can find the app on Flathub.
Bilal Elmoussaoui reports
I have released an MCP (Model Context Protocol) server implementation that allows LLMs to access and interact with your favourite desktop environment. The implementation is available at https://github.com/bilelmoussaoui/gnome-mcp-server and you can read a bit more about it in my recent blog post https://belmoussaoui.com/blog/21-mcp-server
Phosh ↗
A pure wayland shell for mobile devices.
Guido reports
Phosh 0.48.0 is out:
There's a new lock screen plugin that shows all currently running media players (that support the MPRIS interface). You can thus switch between Podcasts, Shortwave and Gapless without having to unlock the phone.
We also updated phosh's compositor phoc to wlroots 0.19.0, bringing all the goodies from that release. Phoc now also remembers the output scale in case the automatic scaling doesn't match your expectations.
There's more, see the full details here
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
04 Jul 2025 12:00am GMT
02 Jul 2025
Planet GNOME
Richard Littauer: A handful of EDs
I had the great privilege of going to UN Open Source Week at the UN, in New York City, last month. At one point, standing on the upper deck and looking out over the East River, I realized that there were more than a few former and current GNOME executive directors. So, we got a photo.
Stormy, Karen, Jeff, me, Steven, and Michael - not an ED, but the host of the event and the former board treasurer - all lined up.
Fun.
Edit: Apparently Jeff was not an ED, but a previous director. I wonder if there is a legacy note of all previous appointments…
02 Jul 2025 10:53pm GMT
Carlos Garnacho: Developing an application with TinySPARQL in 2025
Back a couple of months ago, I was given the opportunity to talk at LAS about search in GNOME, and the ideas floating around to improve it. Part of the talk was dedicated to touting the benefits of TinySPARQL as the base for filesystem search, and how in solving the crazy LocalSearch usecases we ended up with a very versatile tool for managing application data, either application-private or shared with other peers.
It was none other than our (then) future ED in a trench coat (I figure!) who forced my hand in the question round into teasing an application I had been playing with, to showcase how TinySPARQL should be used in modern applications. Now, after finally having spent some more time on it, I feel it's at a decent enough level of polish to introduce it more formally.
Behold Rissole
Rissole is a simple RSS feed reader, intended to let you read articles in a distraction-free way, and to keep them all for posterity. It also sports an extremely responsive full-text search over all those articles, even on huge data sets. It is built as a flatpak; ATM you can download it from CI to try it, while it makes its way to flathub and GNOME Circle (?). Your contributions are welcome!
So, let's break down how it works, and what TinySPARQL brings to the table.
Structuring the data
The first thing a database needs is a definition of how the data is structured. TinySPARQL is strongly based on RDF principles, and depends on RDF Schema for these data definitions. You have the internet at your fingertips to read more about these, but the gist is that it allows the declaration of data in an object-oriented manner, with classes and inheritance:
mfo:FeedMessage a rdfs:Class ;
    rdfs:subClassOf mfo:FeedElement .

mfo:Enclosure a rdfs:Class ;
    rdfs:subClassOf mfo:FeedElement .
One can declare properties on these classes:
mfo:downloadedTime a rdf:Property ;
    nrl:maxCardinality 1 ;
    rdfs:domain mfo:FeedMessage ;
    rdfs:range xsd:dateTime .
And make some of these properties point to other entities of specific (sub)types; this is the key that makes TinySPARQL a graph database:
mfo:enclosureList a rdf:Property ;
    rdfs:domain mfo:FeedMessage ;
    rdfs:range mfo:Enclosure .
In practical terms, a database needs some guidance on what data access patterns are most expected. Being an RSS reader, sorting things by date will be prominent, and we want full-text search on content. So we declare it on these properties:
nie:plainTextContent a rdf:Property ;
    nrl:maxCardinality 1 ;
    rdfs:domain nie:InformationElement ;
    rdfs:range xsd:string ;
    nrl:fulltextIndexed true .

nie:contentLastModified a rdf:Property ;
    nrl:maxCardinality 1 ;
    nrl:indexed true ;
    rdfs:subPropertyOf nie:informationElementDate ;
    rdfs:domain nie:InformationElement ;
    rdfs:range xsd:dateTime .
The full set of definitions will declare what the database is permitted to contain, the class hierarchy and their properties, how resources of a specific class interrelate with other classes… In essence, how the information graph is allowed to grow. This is its ontology (semi-literally, its view of the world, whoooah duude). You can read more in detail how these declarations work at the TinySPARQL documentation.
This information is kept in files separate from code, built into the application binary as a GResource, and used during initialization to create a database at a location under the application's control:
let mut store_path = glib::user_data_dir();
store_path.push("rissole");
store_path.push("db");

obj.imp()
    .connection
    .set(tsparql::SparqlConnection::new(
        tsparql::SparqlConnectionFlags::NONE,
        Some(&gio::File::for_path(store_path)),
        Some(&gio::File::for_uri(
            "resource:///com/github/garnacho/Rissole/ontology",
        )),
        gio::Cancellable::NONE,
    )?)
    .unwrap();
So there's a first advantage right here, compared to other libraries and approaches: the application only has to declare this ontology, without much (or any) further supporting code. Compare that to going through the design/normalization steps for your database design, and having to `CREATE TABLE` your way to it with SQLite.
Handling structure updates
If you are developing an application that needs to store a non-trivial amount of data, it often comes as a second thought how to deal with new data becoming necessary, stored data no longer being necessary, and other post-deployment data/schema migrations. Rarely do things come out exactly right on the first try.
With few documented exceptions, TinySPARQL is able to handle these changes to the database structure by itself, applying the necessary changes to convert a pre-existing database into the new format declared by the application. This also happens at initialization time, from the application-provided ontology.
But of course, besides the data structure, there might also be data content that needs some kind of conversion or migration; this is where an application might still need some supporting code. Even then, SPARQL offers the necessary syntax to convert data, from small to big, from minor to radical changes. With the `CONSTRUCT` query form, you can generate any RDF graph from any other RDF graph.
For Rissole, I've gone with a subset of the Nepomuk ontology, which does contain much embedded knowledge about the best ways to lay data in a graph database. As such I don't expect major changes or gotchas in the data, but this remains a possibility for the future, e.g. if we were to move to another emerging ontology, or any other less radical data migrations that might crop up.
So here's the second advantage: compare this to having to `ALTER TABLE` your way to new database schemas, or handle data migration for each individual table, while ensuring you will not paint yourself into a corner in the future.
Querying data
Now we have a database! We can write the queries that will feed the application UI. Of course, the language to write these in is SPARQL; there are plenty of resources about it on the internet, and TinySPARQL has its own tutorial in the documentation.
One feature that sets TinySPARQL apart from other SPARQL engines in terms of developer experience is the support for parameterized values in SPARQL queries. Through a little bit of non-standard syntax and the TrackerSparqlStatement API, you can compile SPARQL queries into reusable statements, which can be executed with different arguments and will compile to an intermediate representation, resulting in faster execution when reused. Statements are also the way to go in terms of security, in order to avoid query injection situations. This is e.g. Rissole's (simplified) search query:
SELECT ?urn ?title {
  ?urn a mfo:FeedMessage ;
       nie:title ?title ;
       fts:match ~match .
}
Which allows me to funnel a GtkEntry's content right away into the `~match` parameter without caring about character escaping or other validation. These queries may also be stored in a GResource, live as separate files in the project tree, and be loaded/compiled once early during application startup, so they are reusable during the rest of the application lifetime:
fn load_statement(&self, query: &str) -> tsparql::SparqlStatement {
    let base_path = "/com/github/garnacho/Rissole/queries/";
    let stmt = self
        .imp()
        .connection
        .get()
        .unwrap()
        .load_statement_from_gresource(&(base_path.to_owned() + query), gio::Cancellable::NONE)
        .unwrap()
        .expect(&format!("Failed to load {}", query));
    stmt
}

...

// Pre-loading a statement
obj.imp()
    .search_entries
    .set(obj.load_statement("search_entries.rq"))
    .unwrap();

...

// Running a search
pub fn search(&self, search_terms: &str) -> tsparql::SparqlCursor {
    let stmt = self.imp().search_entries.get().unwrap();
    stmt.bind_string("match", search_terms);
    stmt.execute(gio::Cancellable::NONE).unwrap()
}
This data is of course all introspectable with the `gresource` CLI tool, and I can run these queries from a file using the `tinysparql query` CLI command, either on the application database itself, or on a separate in-memory testing database created through e.g. `tinysparql endpoint --ontology-path ./src/ontology --dbus-service=a.b.c`.
Here's the third advantage for application development. Queries are 100% separate from code, introspectable, and able to be run standalone for testing, while the code remains highly semantic.
Inserting and updating data
When inserting data, we have two major pieces of API to help with the task, each with their own strengths:
- TrackerSparqlStatement also works for SPARQL update queries.
- TrackerResource offers more of a builder API to generate RDF data.
These can be either executed standalone, or combined/accumulated in a TrackerBatch for a transactional behavior. Batches do improve performance by clustering writes to the database, and database stability by making these changes either succeed or fail atomically (TinySPARQL is fully ACID).
This interaction is the most application-dependent (concretely, retrieving the data to insert into the database), but here are some links to Rissole code for reference, using TrackerResource to store RSS feed data, and using TrackerSparqlStatement to delete RSS feeds.
And here is the fourth advantage for your application, an async friendly mechanism to efficiently manage large amounts of data, ready for use.
Full-text search
For some reason, there tends to be some magical thinking revolving around databases and how they make things fast. And the most damned pattern of all is typically at the heart of search UIs: substring matching. What feels wonderful during initial development on small datasets soon slows to a crawl in larger ones. See, an index is little more than a tree: you can look up exact items with relatively low big O, look up by prefix with a slightly higher one, and for anything else (substring, suffix) there will be nothing to do but a linear search. Sure, the database engine will comply, however painstakingly.
What makes full-text search fundamentally different? This is a specialized index that makes an effort to pre-tokenize the text, so that each parsed word and term is represented individually, and can be looked up independently (either prefix or exact matches). At the expense of a slightly higher insertion cost (i.e. the usually scarce operation), this provides response times measured in milliseconds when searching for terms (i.e. the usual operation) regardless of their position in the text, even on really large data sets. Of course this is a gross simplification (SQLite has extensive documentation about the details), but I hopefully shed enough light on why full-text search can make things fast in a way a traditional index cannot.
I am largely parroting a SQLite feature here, and yes, this might also be available to you if using SQLite, but the fact that I've already taught you in this post how to use it in TinySPARQL (declaring `nrl:fulltextIndexed` on the searchable properties, using `fts:match` in queries to match on them) does again contrast quite a bit with rolling your own database creation code. So here's another advantage.
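To make the tokenization idea concrete, here is a tiny, self-contained sketch of an inverted index in plain Rust. It is purely illustrative and is not how SQLite's FTS module or TinySPARQL implement it; it only shows the shape of the trade-off: pay the tokenization cost once at insertion, and term lookups become cheap map accesses regardless of where the word sits in the text.
use std::collections::{BTreeMap, BTreeSet};

// A toy full-text index: every document is split into lowercase terms,
// and each term maps to the set of documents containing it.
#[derive(Default)]
struct FullTextIndex {
    terms: BTreeMap<String, BTreeSet<u64>>,
}

impl FullTextIndex {
    // Insertion pays the tokenization cost once, up front.
    fn insert(&mut self, doc_id: u64, text: &str) {
        for term in text.split(|c: char| !c.is_alphanumeric()) {
            if term.is_empty() {
                continue;
            }
            self.terms
                .entry(term.to_lowercase())
                .or_default()
                .insert(doc_id);
        }
    }

    // Term lookup is a single map access, independent of where the
    // word appears in the text: no linear scan over all documents.
    fn search(&self, term: &str) -> Vec<u64> {
        self.terms
            .get(&term.to_lowercase())
            .map(|docs| docs.iter().copied().collect())
            .unwrap_or_default()
    }
}

fn main() {
    let mut index = FullTextIndex::default();
    index.insert(1, "TinySPARQL makes full-text search fast");
    index.insert(2, "Substring matching forces a linear search");
    println!("{:?}", index.search("search")); // [1, 2]
    println!("{:?}", index.search("substring")); // [2]
}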
Backups and other bulk operations
After you got your data stored, is it enshrined? Are there forward plans to get the data back again out of there? Is the backup strategy `cp`?
TinySPARQL (and the SPARQL/RDF combo at its core) boldly says no. Data is fully introspectable, and the query language is powerful enough to extract even full data dumps in a single query, if you wished so. This is for example available through the command line with `tinysparql export` and `tinysparql import`; the full database content can be serialized into any of the supported RDF formats, and can be either post-processed or snapshotted into other SPARQL databases from there.
A "small" detail I have not mentioned so far is the (optional) major network transparency of TinySPARQL, since for the most part it is irrelevant for use cases like Rissole. Coming from web standards, of course network awareness is a big component of SPARQL. In TinySPARQL, creating an endpoint to publicly access a database is an explicit choice made through API, and so it is possible to access other endpoints either from a dedicated connection or by extending your local queries. Why do I bring this up here? I talked at GUADEC 2023 about Emergence, a local-first oriented data synchronization mechanism between devices owned by the same user. Network transparency sits at the heart of this mechanism, which could make Rissole able to synchronize data between devices, or any other application that made use of it.
And this is the last advantage I'll bring up today, a solid standards-based forward plan to the stored data.
Closing note
If you develop an application that does need to store data, future you might appreciate some forward thinking on how to handle a lifetime's worth of it. More artisan solutions like SQLite or file-based storage might set you up quickly for other, more fun development and thus be a temptation, but will likely decrease rapidly in performance unless you know very well what you are doing, and will certainly increase your project's technical debt over time.
TinySPARQL wraps all major advantages of SQLite with a versatile data model and query language strongly based on open standards. The degree of separation between the data model and the code makes both neater and more easily testable. And it's got forward plans in terms of future data changes, backups, and migrations.
As everything is always subject to improvement, there are some things that could make for a better developer experience:
- Query and schema definition files could be linted/validated as a preprocess step when embedding in a GResource, just as we validate GtkBuilder files
- TinySPARQL's builtin web IDE started during last year's GSoC should move forward, so we have an alternative to the CLI
- There could be graphical ways to visualize and edit these schemas
- Similar thing, but to visually browse a database content
- I would not dislike if some of these were implemented in/tied to GNOME Builder
- It would be nice to have a more direct way to funnel the results of a SPARQL query into UI. Sadly, the GListModel interface API mandates random access and does not play nice with cursor-alike APIs as it is common with databases. This at least excludes making TrackerSparqlCursor just implement GListModel.
- A more streamlined website to teach and showcase these benefits, currently tinysparql.org points to the developer documentation (extensive otoh, but does not make a great landing page).
Even though the developer experience could be buttered up some more, there's a solid core that is already a leap compared to other more artisan solutions, in a few areas. I would also like to point out that Rissole is not the first instance here, there are also Polari and Health using TinySPARQL databases this way, and mostly up-to-date in these best practices. Rissole is just my shiny new excuse to talk about this in detail, other application developers might appreciate the resource, and I'd wish it became one of many, so Emergence finally has a worthwhile purpose.
Last but not least, I would like to thank Kristi, Anisa, and all organizers at LAS for a great conference.
02 Jul 2025 5:39pm GMT
Alley Chaggar: Demystifying The Codegen Phase Part 2
Intro
Hello again, I'm here to update my findings and knowledge about Vala. In my last blog, I talked about the codegen phase; as intricate as it is, I'm finding some very helpful information that I want to share.
Looking at The Outputted C Code
While doing the JSON module, I'm constantly looking at C code. Back and forth, back and forth, having more than 1 monitor is very helpful in times like these.
At the beginning of GSoC I didn't know much C, and that has definitely changed. I'm still not fluent in it, but I can finally read the code and understand it without too much brain power. For the JsonModule I'm creating, I first looked at how users can currently (de)serialize JSON. I went scouting json-glib examples, since for now I will be using json-glib. In the future, however, I'll look at other ways in which we can have JSON more streamlined in Vala, whether that means growing away from json-glib or not.
Using the command 'valac -C yourfilename.vala', you'll be able to see the C code that Valac generates. If you were to look into it, you'd see a bunch of temporary variables and C functions. It can be a little overwhelming to see all this if you don't know C.
When writing JSON normally, with minimal customization and without the JsonModule's support, you would be writing it like this:
Json.Node node = Json.gobject_serialize (person);
Json.Generator gen = new Json.Generator ();
gen.set_root(node);
string result = gen.to_data (null);
print ("%s\n", result);
This code is showing one way to serialize a GObject class using json-glib.
The code below is a snippet of the C code that Valac outputs for this example. Again, to be able to see this, you have to use the -C flag when compiling your Vala code.
static void
_vala_main (void)
{
Person* person = NULL;
Person* _tmp0_;
JsonNode* node = NULL;
JsonNode* _tmp1_;
JsonGenerator* gen = NULL;
JsonGenerator* _tmp2_;
gchar* _result_ = NULL;
gchar* _tmp3_;
_tmp0_ = person_new ();
person = _tmp0_;
person_set_name (person, "Alley");
person_set_age (person, 2);
_tmp1_ = json_gobject_serialize ((GObject*) person);
node = _tmp1_;
_tmp2_ = json_generator_new ();
gen = _tmp2_;
json_generator_set_root (gen, node);
_tmp3_ = json_generator_to_data (gen, NULL);
_result_ = _tmp3_;
g_print ("%s\n", _result_);
_g_free0 (_result_);
_g_object_unref0 (gen);
__vala_JsonNode_free0 (node);
_g_object_unref0 (person);
}
You can see many temporary variables denoted by the names `_tmp*_`, but you can also see JsonNode being used, Json's generator being created and its root being set, and you can even see json_gobject_serialize being called. All of this was in our Vala code, and now it's all in the C code, with temporary variables holding the intermediate values so it compiles cleanly to C.
The JsonModule
If you may recall, the codegen is the clash of reading Vala code but also writing out C code. The steps I'm taking for the JsonModule are looking at the examples to (de)serialize, then looking at how the example compiled to C, since the whole purpose of my work is to decide how the generated C should look. I'm mainly going off of the C `_vala_main` function when determining which C code I should put into my module, but I'm also going off of the Vala code that the user wrote.
// serializing gobject classes
void generate_gclass_to_json (Class cl) {
cfile.add_include ("json-glib/json-glib.h");
var to_json_class = new CCodeFunction ("_json_%s_serialize_myclass".printf (get_ccode_lower_case_name (cl, null)), "void");
to_json_class.add_parameter (new CCodeParameter ("gobject", "GObject *"));
to_json_class.add_parameter (new CCodeParameter ("value", " GValue *"));
to_json_class.add_parameter (new CCodeParameter ("pspec", "GParamSpec *"));
//...
// Json.Node node = Json.gobject_serialize (person); - vala code
var Json_gobject_serialize = new CCodeFunctionCall (new CCodeIdentifier ("json_gobject_serialize"));
Json_gobject_serialize.add_argument (new CCodeIdentifier ("gobject"));
var node_decl_right = new CCodeVariableDeclarator ("node", Json_gobject_serialize);
var node_decl_left = new CCodeDeclaration ("JsonNode *");
node_decl_left.add_declarator (node_decl_right);
// Json.Generator gen = new Json.Generator (); - vala code
var gen_decl_right = new CCodeVariableDeclarator ("generator", json_gen_new);
var gen_decl_left = new CCodeDeclaration ("JsonGenerator *");
gen_decl_left.add_declarator (gen_decl_right);
// gen.set_root(node); - vala code
var json_gen_set_root = new CCodeFunctionCall (new CCodeIdentifier ("json_generator_set_root"));
json_gen_set_root.add_argument (new CCodeIdentifier ("generator"));
json_gen_set_root.add_argument (new CCodeIdentifier ("node"));
//...
The code snippet above is a work-in-progress method in the JsonModule that I created, called `generate_gclass_to_json`, to generate serialization for GObject classes. I'm creating a C code function and passing parameters through it. I'm also filling the body with how the example code did the serializing in the first code snippet. Instead of the function calls being created in `_vala_main` (by the user), they'll have their own function that will automatically get created by the module instead.
static void _json_%s_serialize_myclass (GObject *gobject, GValue *value, GParamSpec *pspec)
{
JsonNode *node = json_gobject_serialize (gobject);
JsonGenerator *generator = json_generator_new ();
json_generator_set_root (generator, node);
//...
}
Comparing the original Vala code with the compiled C code, it takes the Vala code's shape, but it's written in C.
02 Jul 2025 7:30am GMT
Hubert Figuière: Dev Log June 2025
May and June in one convenient location.
libopenraw
Released 0.4.0.alpha10
After that, added Nikon Z5 II and P1100, Sony 6400A and RX100M7A, Panasonic S1II and S1IIE, DJI Mavic 3 Pro Cinema (support for Nikon and Sony mostly incomplete, so is Panasonic decompression), Fujifilm X-E5 and OM Systems OM-5 II.
gnome-raw-thumbnailer
Updated to the latest libopenraw.
Released 48.0
flathub-cli
This is a project I started a while ago but put on the back burner due to scheduling conflict. It's a command line tool to integrate all the tasks of maintaining flatpak packages for flathub. Some stuff isn't flathub specific though. I already have a bunch of scripts I use, and this is meant to be next level. It also merges into it my previous tool, flatpak-manifest-generator, an interactive tool to generate flatpak manifests.
One thing I had left in progress, and did finish implementing at least the basics of, is the `cleanup` command to purge downloads. The rationale is that when you update a package manifest, you change the sources. But the old ones that have been downloaded are still kept. The `cleanup downloads` command will find these unused sources and delete them for you. I really needed this.
flathub-cli is written in Rust.
AbiWord
Fixing some annoying bugs (regressions) in master, some memory leakage in both stable and master, a lot of it in the Gtk UI code. I also fixed a crash when editing lists in 3.0.6 that was due to some code not touched since 2004, and even then that part is probably even older. The short story is that updating a value in the `StringMap<>` updated the key, whose pointer ended up being held somewhere else. Yep, dangling pointer. The fix was to not update the key if it is the same.
On master only, I also started fixing the antiquated C++ syntax. For some reason in C++ there was a lot of `typedef enum` and `typedef struct`, probably an artifact of the late 90's code origin. At the same time I moved to `#pragma once` for header includes. Let the compiler handle it. Also fixed a crash with saving a document with revisions.
02 Jul 2025 12:00am GMT
01 Jul 2025
Planet GNOME
Steven Deobald: Disability Pride
I saw Sophie's #DisabilityPrideMonth post two days ago. I don't normally make a point of re-reading tweets, but I've revisited it a dozen times, due to its clarity.
I have been staring at an empty Emacs buffer for an hour now. I have been trying to think of some sincere and supportive words I could add to Sophie's. The best I can do is this: I will try to follow her example.
Thank you, Sophie. Thank you to the GNOME community for providing a welcoming space that allows us all to be who we are. And thank you to the GNOME contributors who work on the #a11y features which enable users like me to access a computer at all: Hari Rana, Jeff Fortin Tam, Bilal Elmoussaoui, Matthias Clasen, Claire, Emmanuele Bassi, Lukáš Tyrychtr, Sam Hewitt (and all our Design team, who take #a11y very seriously), Eitan Isaacson, Mike Gorse, Samuel Thibault, Georges Stavracas, and many more.
01 Jul 2025 10:13pm GMT
Diego Escalante Urrelo: The New Troll Diet
I have been thinking a lot about online harassment in software communities lately.
Harassment is nothing new in our spaces, and I even have a bunch of fun stories from trolls, past and new. However, all these stories have one thing in common: they are irrelevant to modern harassment and trolling. So I would like to humbly propose a new framing of this whole issue.
Harassment In The Troll Feeding Days
Perhaps the most jarring change in online culture has been in how harassment happens on the internet. Spending our formative years in forums, IRC, and mailing lists, we got used to the occasional troll that after a few annoying interactions would get blocked by an admin.
Back then, a troll was limited to baiting for replies, and that power was easy to take away. Remember, removing a troll was as simple as blocking an email address or banning an IP on IRC.
In short: Don't feed the troll and it will either get bored and go away, or be blocked by an admin. Right?
Online Harassment Is a Different Game Now
The days of starving trolls are over. Trolls now have metaphorical DoorDash, UberEats, and are even decent cooks themselves.
It is now impossible to defend an online community by simply "blocking the bad apples". A determined troll now has access to its own audience, peers to amplify their message, and even attack tools that used to be exclusive to nation states.
A DDoS attack can be implemented with a few dozen dollars and cost thousands to defend. Social media accounts can be bought by the hundreds. Doxxing is easy for motivated individuals. Harassment campaigns can be orchestrated in real-time to flood comment sections, media outlets, employer inboxes, and even deplatform creators.
Deterrence used to work because the trolls would lose access to attention and relevance if banned. This is no longer the case. In fact, trolls now have a lot to gain by building an audience around being ostracized by their targets, portraying themselves as brave truth tellers that are censored by evil-doers.
A strange game indeed, and not playing it doesn't work anymore.
Rules Are No Longer Enough
All of the above means that online communities can no longer point to the "No Trolls Allowed" sign and consider the job done; this "rules-based" framework is no longer a viable deterrent. A different approach is needed, one that is not naive to the ruses and concern trolling of contemporary harassment.
A relevant example comes to mind. The popular "Nazi Bar" story as told by Michael Tager:
"(...) Tager recounted visiting a "shitty crustpunk bar" where he saw a patron abruptly expelled: the bartender explained that the man was wearing "iron crosses and stuff", and that he feared such patrons would become regulars and start bringing friends if not promptly kicked out, which would lead him to realize "oh shit, this is a Nazi bar now" only after the unwanted patrons became too "entrenched" to kick out without trouble."
(...) "(Internet slang) A space in which bigots or extremists have come to dominate due to a lack of moderation or by moderators wishing to remain neutral or avoid conflict." From Wiktionary
The story is not about the necessity of having a better rulebook. No, the point is that, in some circumstances, moderation cannot afford to be naive and has to see through the ruse of bad actors appealing to tolerance or optics. Sometimes you have to loudly tell someone to fuck off, and kick them out.
This might seem counterintuitive if you grew up in the "don't feed the troll" era. But trolls no longer need the attention of their victims to thrive. In fact, sometimes silence and retreat from conflict are even bigger rewards.
The Trap Card of Behavioral Outbursts
Because the rules-based framework considers any engagement a failure, it leads groups to avoid conflict at all cost, not realizing that they are already in conflict with their harassers. Taken to an extreme, any push-back against harassment is seen as bad as the harassment itself. This flawed reasoning might even lead to throwing others under the bus, or walking back statements of support, all done in the name of keeping the harassers seemingly silent.
Unfortunately, conceding to trolls after receiving push-back is one of Behavioral Psychology's "trap cards". The concept is formally known as "Behavioral Outburst" and describes how a subject will intensify an unwanted behavior after receiving push-back. The classic example is a kid having a tantrum:
A kid is at the store with their parent. The kid starts crying, asking for a new toy. The parent says no and warns the kid that they will go back home if they keep crying.
The kid keeps crying and the parent decides to fulfill the warning to go back home.
As a response to this consequence, the kid then has an outburst of the unwanted behavior: louder crying, screaming, throwing themselves to the floor.
The parent gets overwhelmed and ends up buying a new toy for the kid.
The above example is commonly used to demonstrate two concepts:
- When an unwanted behavior is met with resistance, it frequently leads to an outburst of that behavior to "defeat" such resistance
- If the outburst succeeds, then the outburst becomes the new baseline for responding to any resistance
We should understand that applying consequences to a harasser (bans, warnings, condemnation) is likely to cause an outburst of the unwanted behavior. This is unavoidable. However, it is a fatal mistake to cede to a behavioral outburst. If consequences are taken back, then the outburst becomes the new default level of harassment.
Even worse, an illusion of control is introduced: we harass, they fight back; we intensify the harassment a little bit, they concede.
Why Speaking Up Is Important
Communities are not corporations, and morale is not set by a rule-book or by mandate of leadership. Communities, especially the ones giving away tens of thousands of dollars in value to each other, are held together by mutual trust.
One element of this mutual trust, maybe the most important one, is knowing that your colleagues have your back and will defend you from anyone unfairly coming after you. Just like a soccer team will swarm a rival to defend a teammate.
Knowing that your team will loudly tell those coming after you to fuck off is not only good for morale, but also a necessary outlet and catharsis for a community. Silence only leads to the festering of the most rancid vibes; it erodes trust and creates feelings of isolation in the targeted individuals.
If solidarity and empathy are not demonstrated, is that any different from there being none?
A New Framework: Never Cede To The Troll
We need a new framework for how to defend against "trolls". The feeding metaphor ran its course many years ago. It is done and will not be coming back.
New online risks demand that we adapt and become proactive in protecting our spaces. We have to loudly and proudly set the terms of what is permissible. Those holding social or institutional power in communities should be willing to drop a few loud fuck offs to anyone trying to work their way in by weaponizing optics, concern trolling, or the well known "tolerance paradox". Conceding through silence, or self-censorship, only emboldens those who benefit from attacking a community.
It is time that we adopt a bolder framework where defending our spaces and standing our ground to protect each other is the bare minimum expected.
01 Jul 2025 10:00am GMT
Victor Ma: Bugs, bugs, and more bugs!
In the past two weeks, I worked on two things:
- Squashing a rebus bug.
- Combining the two suggested words lists into one.
The rebus bug
A rebus cell is a cell that contains more than one letter in it. These aren't too common in crossword puzzles, but they do appear occasionally, and especially so in harder puzzles.

Our word suggestions lists were not working for slots with rebus cells. More specifically, if the cursor was on a cell that's within `letters in rebus - 1` cells to the right of a rebus cell, then an assertion would fail, and the word suggestions list would be empty.
The cause of this bug is that our intersection code (which is what generates the suggested words) was not accounting for rebuses at all! The fix was to modify the intersection code to correctly count the additional letters that a rebus cell contains.
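As a rough sketch of that idea, counting in letters rather than cells looks something like this. This is illustrative Rust only, not the actual Crosswords/libipuz code, and every type and function name here is made up.
struct Cell {
    letters: String, // a rebus cell has more than one letter
}

// Number of letters from the start of the slot up to (but not including)
// the cell the cursor is on. Counting cells instead of letters is the
// kind of mistake that breaks the intersection code.
fn letter_offset(cells: &[Cell], cursor_index: usize) -> usize {
    cells[..cursor_index]
        .iter()
        .map(|cell| cell.letters.chars().count())
        .sum()
}

fn main() {
    let slot = vec![
        Cell { letters: "HEART".into() }, // rebus cell
        Cell { letters: "S".into() },
        Cell { letters: "T".into() },
    ];
    // A naive per-cell count would say 2; counting letters gives 6.
    assert_eq!(letter_offset(&slot, 2), 6);
}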
Combine the suggested words lists
The Crosswords editor shows a words list for both Across and Down, at the same time. This is different from what most other crossword editors do, which is to have a single suggested words list that switches between Across and Down, based on the cursor's direction.
I think having a single list is better, because it's visually cleaner, and you don't have to take a second to find the right list. It also so happens that we have a problem with our sidebar jumping, in large part because of the two suggested words lists.
So, we decided that I should combine the two lists into one. To do this, I removed the second list widget and list model, and then I added some code to change the contents of the list model whenever the cursor direction changes.

More bugs!
I only started working on the rebus bug because I was working on the word suggestions bug. And I only started working on that bug because I discovered it while using the Editor. And it's a similar story with the words lists unification task. I only started working on it because I noticed the sidebar jumping bug.
Now, the plan was that after I fixed those two bugs, I would turn my attention to a bigger task: adding a step of lookahead to our fill algorithm. But alas, as I was fixing the two bugs, I noticed a few more bugs. But they shouldn't take too long, and they ought to be fixed. So I'm going to do that first, and then transition to working on the fill lookahead task.
01 Jul 2025 12:00am GMT
30 Jun 2025
Planet GNOME
Tobias Bernard: Aardvark: Summer 2025 Update
It's been a while, so here's an update about Aardvark, our initiative to bring local-first collaboration to GNOME apps!
A quick recap of what happened since my last update:
- Since December, we had three more Aardvark-focused events in Berlin
- We discussed peer-to-peer threat models and put together designs addressing some of the concerns that came out of those discussions
- We switched from using Automerge to Loro as a CRDT library in the app, mainly because of better documentation and native support for undo/redo
- As part of a p2panda NLnet grant, Julian Sparber has been building the Aardvark prototype out into a more fully-fledged app
- We submitted and got approved for a new Prototypefund grant to further build on this work, which started a few weeks ago!
- With the initiative becoming more concrete we retired the "Aardvark" codename, and gave the app a real GNOME-style app name: "Reflection"
The Current State
As of this week, the Reflection (formerly Aardvark) app already works for simple Hedgedoc-style use cases. It's definitely still alpha-quality, but we already use it internally for our team meetings. If you're feeling adventurous you can clone the repo and run it from Builder, it should mostly work :)
Our current focus is on reliability for basic collaboration use cases, i.e. making sure we're not losing people's data, handling various networking edge cases smoothly, and so on. After that there are a few more missing UI features we want to add to make it comfortable to use as a Hedgedoc replacement (e.g. displaying other people's cursors and undo/redo).
At the same time, the p2panda team (Andreas, Sam, and glyph) are working on new features in p2panda to enable functionality we want to integrate later on, particularly end-to-end encryption and an authentication/permission system.
Prototype Fund Roadmap
We have two primary goals for the Prototype Fund project: We want to build an app that's polished enough to use as a daily driver for meeting notes in the near-term future, but with an explicit focus on full-stack testing of p2panda in a real-world native desktop app. This is because our second goal is kickstarting a larger ecosystem of local-first GNOME apps. To help with this, the idea is for Reflection to also serve as an example of a GTK app with local-first collaboration that others can copy code and UI patterns from. We're not sure yet how much these two goals (peer-to-peer example vs. daily driver notes app) will be in conflict, but we hope it won't be too bad in practice. If in doubt we'll probably be biased towards the former, because we see this app primarily as a step towards a larger ecosystem of local-first apps.
To that end it's very important to us to involve the wider community of GNOME app developers. We're planning to write more regular blog posts about various aspects of our work, and of course we're always available for questions if anyone wants to start playing with this in their own apps. We're also planning to create GObject bindings so people can easily use p2panda from C, Python, Javascript, Vala, etc. rather than only from Rust.

We aim to release a first basic version of the app to Flathub around August, and then we'll spend the rest of the Prototype Fund period (until end of November) adding more advanced features, such as end-to-end encryption and permission management. Depending on how smoothly this goes, we'd also like to get into some fancier UI features (such as comments and suggested edits), but it's hard to say at this point.
If we're approved for Prototype Fund's Second Stage (will be announced in October), we'll get to spend a few more months doing mostly non-technical tasks for the project, such as writing more developer documentation, and organizing a GTK+Local-First conference next spring.
Meet us at GUADEC
Most of the Reflection team (Julian Sparber, Andreas Dzialocha, and myself) are going to be at GUADEC in July, and we'll have a dedicated Local-First BoF (ideally on Monday July 28th, but not confirmed yet). This will be a great opportunity for discussions towards a potential system sync service, to give feedback on APIs if you've already tried playing with them, or to tell us what you'd need to make your app collaborative!
In the mean time, if you have questions or want to get involved, you can check out the code or find us on Matrix.
Happy Hacking!
30 Jun 2025 9:03pm GMT
Bilal Elmoussaoui: Grant the AI octopus access to a portion of your desktop
The usage of Large Language Models (LLMs) has become quite popular, especially with publicly and "freely" accessible tools like ChatGPT, Google Gemini, and other models. They're now even accessible from the CLI, which makes them a bit more interesting for the nerdier among us.
One game-changer for LLMs is the development of the Model Context Protocol (MCP), which allows an external process to feed information (resources) to the model in real time. This could be your IDE, your browser, or even your desktop environment. It also enables the LLM to trigger predefined actions (tools) exposed by the MCP server. The protocol is basically JSON-RPC over socket communication, which makes it easy to implement in languages like Rust.
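To make that concrete, here is a minimal sketch of the JSON-RPC 2.0 framing that MCP builds on, not taken from any particular MCP implementation, using serde and serde_json as assumed dependencies. The "tools/list" method is one of the standard MCP methods; the struct itself is just an illustration.

```rust
// Minimal sketch of the JSON-RPC 2.0 message shape that MCP builds on.
// Field names follow the JSON-RPC spec; this is an illustration, not a
// complete MCP server.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct JsonRpcRequest {
    jsonrpc: String,                   // always "2.0"
    id: u64,                           // echoed back in the matching response
    method: String,                    // e.g. "tools/list" or "resources/read"
    #[serde(skip_serializing_if = "Option::is_none")]
    params: Option<serde_json::Value>, // method-specific parameters, if any
}

fn main() -> serde_json::Result<()> {
    // A request an MCP client might send to discover the server's tools.
    let raw = r#"{"jsonrpc":"2.0","id":1,"method":"tools/list"}"#;
    let req: JsonRpcRequest = serde_json::from_str(raw)?;
    println!("dispatching method: {}", req.method);
    Ok(())
}
```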
So, what could possibly go wrong if you gave portions of your desktop to this ever-growing AI octopus?
The implementation details
Over the weekend, I decided not only to explore building an MCP server that integrates with the GNOME desktop environment, but also to use Anthropic's Claude Code to help implement most of it.
The joyful moments
The first step was to figure out what would be simple yet meaningful to give the LLM access to, to see:
- if it could recognize that an MCP server was feeding it live context, and
- how well it could write code around that, lol.
I started by exposing the list of installed applications on the system, along with the ability to launch them. That way, I could say something like: "Start my work environment", and it would automatically open my favorite text editor, terminal emulator, and web browser.
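For context, here is roughly what that first resource/tool pair can look like when backed by the gio crate from gtk-rs. This is a hedged sketch rather than the actual gnome-mcp-server code, and error handling is kept intentionally minimal.

```rust
// Rough sketch of an "applications list" resource and an "application
// launcher" tool backed by gio (gtk-rs). Not the gnome-mcp-server
// implementation.
use gio::prelude::*;

fn list_applications() -> Vec<String> {
    gio::AppInfo::all()
        .iter()
        .filter(|app| app.should_show()) // skip hidden .desktop entries
        .map(|app| app.name().to_string())
        .collect()
}

fn launch_application(name: &str) -> Result<(), gio::glib::Error> {
    for app in gio::AppInfo::all() {
        if app.name().as_str() == name {
            // Launch with no files and no special launch context.
            return app.launch(&[], None::<&gio::AppLaunchContext>);
        }
    }
    Ok(()) // in this sketch, unknown names are silently ignored
}

fn main() {
    for name in list_applications() {
        println!("{name}");
    }
    // Hypothetical example: launch whatever app calls itself "Text Editor".
    let _ = launch_application("Text Editor");
}
```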
Overall, the produced code was pretty okay; with some minor comments here and there, the model managed to fix its mistakes without any issues.
Once most of the basic tools and resources were in place, the LLM also did some nice code cleanups by writing a small macro to simplify the process of creating new tools/resources without code duplication.
The less joyful ones
Of course, the list of installed applications isn't really the most important piece of information the LLM would need to do anything meaningful. What about the list of your upcoming calendar events? Or the tasks in your Todo list?
If you're not familiar with GNOME, the way to achieve this is by using Evolution Data Server's D-Bus APIs, which allow access to information like calendar events, tasks, and contacts. For this task, the LLM kept hallucinating D-Bus interfaces, inventing methods, and insisting on implementing them despite me repeatedly telling it to stop, so I had to take over and do the implementation myself.
My takeaway from this is that LLMs will always require human supervision to ensure what they do is actually what they were asked to do.
Final product
The experience allowed us (me and the LLM pet) to build a simple yet powerful tool that can give your LLM access to the following resources:
- Applications list
- Audio and media status (MPRIS)
- Calendar events
- System information
- Todo list
And we built the following tools:
- Application launcher
- Audio and media control (MPRIS)
- Notifications, allowing the LLM to send a new notification (see the sketch below this list)
- Opening a file
- Quick settings, allowing the LLM to turn things like dark style or Wi-Fi on and off
- Screenshot, useful for things like text recognition, or for asking the LLM to judge your design skills
- Wallpaper, allowing the LLM to set a new wallpaper for you, because why not!
- Window management, allowing listing, moving, and resizing windows using the unsafe GNOME Shell Eval API for now, until there is a better way to do it
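As an illustration of what one of these tools boils down to under the hood, here is a hedged sketch of sending a desktop notification over the well-known org.freedesktop.Notifications D-Bus interface, written with zbus (assuming zbus 4.x). The actual gnome-mcp-server may implement this differently.

```rust
// Hedged sketch: a "send notification" tool talking to the standard
// org.freedesktop.Notifications D-Bus service via zbus. Not the
// gnome-mcp-server code; it may use a different mechanism.
use std::collections::HashMap;
use zbus::{blocking::Connection, proxy, zvariant::Value};

#[proxy(
    interface = "org.freedesktop.Notifications",
    default_service = "org.freedesktop.Notifications",
    default_path = "/org/freedesktop/Notifications"
)]
trait Notifications {
    fn notify(
        &self,
        app_name: &str,
        replaces_id: u32,
        app_icon: &str,
        summary: &str,
        body: &str,
        actions: &[&str],
        hints: HashMap<&str, Value<'_>>,
        expire_timeout: i32,
    ) -> zbus::Result<u32>;
}

fn main() -> zbus::Result<()> {
    let connection = Connection::session()?;
    let notifications = NotificationsProxyBlocking::new(&connection)?;
    notifications.notify(
        "gnome-mcp-server",           // app name shown by the daemon
        0,                            // 0 = don't replace an existing one
        "dialog-information",         // themed icon name
        "Hello from the LLM",         // summary
        "This was sent over D-Bus.",  // body
        &[],                          // no actions
        HashMap::new(),               // no hints
        5000,                         // expire after 5 seconds
    )?;
    Ok(())
}
```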
One could add more tools, for example for creating new events or tasks, but I'll leave that as an exercise for new contributors.
The tool is available on GitHub at https://github.com/bilelmoussaoui/gnome-mcp-server and is licensed under the MIT License.
Caution
Giving an external LLM access to real-time information about your computer has privacy and potentially security implications, so use with caution. The built tool allows disabling specific tools/resources via a configuration file; see https://github.com/bilelmoussaoui/gnome-mcp-server?tab=readme-ov-file#configuration
Conclusion
The experimentation was quite enriching as I learned how MCP can be integrated into an application/ecosystem and how well LLMs ingest those resources and make use of the exposed actions. Until further improvements are made, enjoy the little toy tool!
30 Jun 2025 12:00am GMT
29 Jun 2025
Planet GNOME
Sam Thursfield: dnf uninstall
I am a long time user of the Fedora operating system. It's very good quality, with a lot of funding from Red Hat (who use it to crowd-source testing for their commercial product Red Hat Enterprise Linux).
On Fedora you use a command named dnf to install and remove packages. The absolute worst design decision of Fedora is this:
- To install a package: dnf install
- To uninstall a package: dnf remove
If I had a dollar for every time I typed dnf uninstall foo and got an error then I'd be able to stage a lavish wedding in Venice by now.
As a Nushell user, I finally spent 5 minutes to fix this forever by adding the following to my ~/.config/nushell/config.nu file:
def "dnf uninstall" […packages: string] {
dnf remove …$packages
}
(I also read online about a dnf alias command that might solve this, but it isn't available for me for whatever reason).
That's all for today!
29 Jun 2025 2:06pm GMT
28 Jun 2025
Planet GNOME
Ahmed Fatthi: GSoC 2025: June Progress Report
June has been a month of deep technical work and architectural progress on my GSoC project with GNOME Papers. Here's a summary of the key milestones, challenges, and decisions from the month.
Architecture Overview
To better illustrate the changes, here are diagrams of the current (unsandboxed) and the new (sandboxed) architectures for GNOME Papers:
Current Architecture (Unsandboxed):
Target Architecture (Sandboxed):
Early June: Prototyping, Research & First Meeting
Note: D-Bus is a system that lets different programs on your computer talk to each other, even if they are running in separate processes.
28 Jun 2025 6:00pm GMT
27 Jun 2025
Planet GNOME
This Week in GNOME: #206 Hot Days
Update on what happened across the GNOME project in the week from June 20 to June 27.
Third Party Projects
ranfdev says
DistroShelf now makes it even easier to run your favorite distro:
- Support for more terminals has been added, plus the ability to use a custom terminal command.
- A command log has been added: you can now view precisely each command that's been executed by DistroShelf and copy it to your clipboard. You can use this to learn how the app interacts with distrobox, or to debug why a command failed.
- A bug affecting the Assemble from File and Assemble from URL functionality has been fixed. You can finally point DistroShelf to a .ini file containing a set of containers you want to be created, along with the initial packages you need and all the GUI apps you want to export and use from your desktop. More info on this feature in the distrobox documentation.
- Automatic host path resolution: when selecting a file/folder from a flatpak portal, the portal returns a dummy path representing a capability to access the file you want, but not the absolute path you selected. We now use getfattr to resolve the dummy path to the real host path you selected (see the sketch below this list). If you encounter any problem with this solution, notify us by opening an issue.
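For the curious, here is roughly what that getfattr trick amounts to, sketched in Rust with the xattr crate instead of the getfattr CLI. The attribute name user.document-portal.host-path is what the document portal's FUSE filesystem is understood to expose; treat this as an illustration rather than DistroShelf's actual code.

```rust
// Rough illustration (not DistroShelf's code) of resolving a document-portal
// path back to the host path via an extended attribute, using the `xattr`
// crate. The attribute name "user.document-portal.host-path" is an assumption.
use std::path::Path;

fn resolve_host_path(portal_path: &Path) -> std::io::Result<Option<String>> {
    let value = xattr::get(portal_path, "user.document-portal.host-path")?;
    Ok(value.map(|bytes| String::from_utf8_lossy(&bytes).into_owned()))
}

fn main() -> std::io::Result<()> {
    // Hypothetical example path under the document portal mount.
    let portal_path = Path::new("/run/user/1000/doc/1234abcd/photo.png");
    match resolve_host_path(portal_path)? {
        Some(host) => println!("real host path: {host}"),
        None => println!("no host-path attribute found"),
    }
    Ok(())
}
```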
Pipeline ↗
Follow your favorite video creators.
schmiddi announces
Pipeline version 2.6.0 was released. This release adds more keyboard shortcuts to the video player, like changing the volume or playback speed, as well as seeking in the video. Furthermore, you can now hide the sidebar when viewing a video. Pipeline now also displays a setup window on first startup, allowing the user to import their subscriptions from YouTube or NewPipe, as well as informing the user about privacy implications when using Pipeline. This release also fixes a few minor errors, as well as minor UI issues. For details about those fixes, refer to the changelog.
Fractal ↗
Matrix messaging app for GNOME written in Rust.
Kévin Commaille says
Hot! Hot! Hot! No, we are not talking about the summer weather in the northern hemisphere, but about the brand new release of Fractal 12.beta! Coming soon to your device:
- The safety setting to hide media previews in rooms is now synced between Matrix clients.
- We added another safety setting (which is also synced) to hide avatars in invites.
- A room can be marked as unread via the context menu in the sidebar.
- We changed the UX a little for tombstoned rooms. Instead of showing a banner at the top of the history, it now replaces the composer at the bottom of the history.
- You can now see if a section in the sidebar has any notifications or activity when it is collapsed.
As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.
It is available to install via Flathub Beta, see the instructions in our README.
As the version implies, there might be a slight risk of regressions, but it should be mostly stable. If all goes well the next step is the release candidate!
If you are wondering how to lower the temperature in your house, there is nothing cooler than fixing one of our newcomers issues!
GNOME Foundation
steven reports
The June 27th Foundation Report is out! This week:
- In dev research/funding news: Flathub/Flatpak, libxml2, and Digital Wellbeing
- In best friends news: GIMP, KDE, Framework Computer, and the Python Foundation
- In money news: new treasurers, OSU-OSL fundraising, and GNOME fundraising
https://blogs.gnome.org/steven/2025/06/27/2025-06-27-foundation-report/
steven reports
… and last week:
https://blogs.gnome.org/steven/2025/06/20/2025-06-20-foundation-report/
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
27 Jun 2025 12:00am GMT