18 Oct 2025
Planet GNOME
Dorothy Kabarozi: Deploying a Simple HTML Project on Linode Using Nginx
Deploying a Simple HTML Project on Linode Using Nginx: My Journey and Lessons Learned
Deploying web projects can seem intimidating at first, especially when working with a remote server like Linode. Recently, I decided to deploy a simple HTML project (index.html) on a Linode server using Nginx. Here's a detailed account of the steps I took, the challenges I faced, and the solutions I applied.
Step 1: Accessing the Linode Server
The first step was to connect to my Linode server via SSH:
ssh root@<your-linode-ip>
Initially, I encountered a timeout issue, which reminded me to check network settings and ensure SSH access was enabled for my Linode instance. Once connected, I had access to the server terminal and could manage files and services.
Step 2: Preparing the Project
My project was simple: it only contained an index.html file. I uploaded it to the server under:
/var/www/hng13-stage0-devops
I verified the project folder structure with:
ls -l /var/www/hng13-stage0-devops
Since there was no public folder or PHP files, I knew I needed to adjust the Nginx configuration to serve directly from this folder.
Step 3: Setting Up Nginx
I opened the Nginx configuration for my site:
sudo nano /etc/nginx/sites-available/hng13
Initially, I mistakenly pointed root to a non-existent folder (public), which caused a 404 Not Found error. The correct configuration looked like this:
server {
    listen 80;
    server_name <your-linode-ip>;

    root /var/www/hng13-stage0-devops;  # points to folder containing index.html
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
Step 4: Enabling the Site and Testing
After creating the configuration file, I enabled the site:
sudo ln -s /etc/nginx/sites-available/hng13 /etc/nginx/sites-enabled/
I also removed the default site to avoid conflicts:
sudo rm /etc/nginx/sites-enabled/default
Then I tested the configuration:
sudo nginx -t
If the syntax was OK, I reloaded Nginx:
sudo systemctl reload nginx
Step 5: Checking Permissions
Nginx must have access to the project files. I ensured the correct permissions:
sudo chown -R www-data:www-data /var/www/hng13-stage0-devops
sudo chmod -R 755 /var/www/hng13-stage0-devops
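A quick way to confirm those permissions actually took effect is to test file access as the same user Nginx runs as. A small sketch, assuming Debian/Ubuntu where Nginx runs as www-data:

```shell
# Run the read check as the Nginx user (www-data on Debian/Ubuntu).
# If this fails, revisit the chown/chmod steps above.
ROOT=/var/www/hng13-stage0-devops
if sudo -u www-data test -r "$ROOT/index.html"; then
    echo "www-data can read index.html"
else
    echo "permission problem: check ownership and directory execute bits"
fi
```

Note that every directory in the path also needs the execute bit set, which the `chmod -R 755` above provides.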
Step 6: Viewing the Site
Finally, I opened my browser and navigated to
http://<your-linode-ip>
And there it was: my index.html page, served perfectly via Nginx.
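Besides the browser, the result can also be checked from the command line. A hedged example using curl (run on the server itself, or substitute your Linode IP):

```shell
# Request only the response headers; an "HTTP/1.1 200 OK" status line
# means Nginx found and served index.html.
curl -sI http://localhost/ | head -n 1
```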
Challenges and Lessons Learned
- Nginx server_name error
  - Error: "server_name" directive is not allowed here
  - Lesson: Always place server_name inside a server { ... } block.
- 404 Not Found
  - Cause: Nginx was pointing to a public folder that didn't exist.
  - Solution: Update root to the folder containing index.html.
- Permissions issues
  - Nginx could not read the files initially.
  - Solution: Ensure ownership by www-data and proper read/execute permissions.
- SSH timeout / connection issues
  - Double-check firewall rules and Linode network settings.
Key Takeaways
- For static HTML projects, Nginx is simple and effective.
- Always check the root folder matches your project structure.
- Testing the Nginx config (nginx -t) before reloading saves headaches.
- Proper permissions are crucial for serving files correctly.
Deploying my project was a learning experience. Even small mistakes, like pointing to the wrong folder or placing directives in the wrong context, can break the site, but step-by-step debugging and understanding the errors helped me fix everything quickly. This has kick-started my DevOps journey, and I truly loved the challenge.
18 Oct 2025 4:14pm GMT
17 Oct 2025
Planet GNOME
Allan Day: GNOME Foundation Update, 2025-10-17
It's the end of the working week, the weekend is calling, and it's time for another weekly GNOME Foundation update. As always, there's plenty going on at the GNOME Foundation, and this post just covers the highlights that are easy to share. Let's get started.
Board meeting
The Board of Directors had a regular meeting on Tuesday this week (the meeting was regular in the sense that it is regularly scheduled for the 2nd Tuesday of the month).
We were extremely pleased to approve the addition of two new members to the Circle Committee: welcome to Alireza and Ignacy, who will be helping out with the fantastic Circle initiative!
For those who don't know, the Circle Committee is the team that is responsible for reviewing app submissions, as well as doing regular maintenance on the list of member apps. It's valuable work.
The main item on the agenda for this week's Board meeting was the 2025-26 budget, which we finalized and approved. Our financial year runs from October to September, so the budget approval was slightly late, but a delay this small doesn't have any practical consequence for our operations. We'll provide a separate post on the budget itself, to provide more details on our plans and financial position.
GIMP grants
Some news which I can share now, even though it isn't technically from this week: last week the Foundation finished the long process of awarding the GIMP project's first two development grants. I'm really excited for the GIMP project now that we have reached this milestone, and I'm sure that the grants will give their development efforts a major boost.
More specifics about the grants are coming in a dedicated announcement, so I won't go into too many details now. However, I will say that a fair amount of work was required on the Foundation side to implement the grants in a compliant manner, including the creation and roll out of a new conflict of interest policy. The nice thing about this is that, with the necessary frameworks in place, it will be relatively easy to award additional grants in the future.
Fundraising Committee
The new Fundraising Committee had its first meeting this week, and I hear that its members have started working through a list of tasks, which is great news. I'm very appreciative of this effort, and special thanks has to go to Maria Majadas, who has pushed it forward.
The committee isn't an official committee just yet - this is something that the Board will hopefully look at during its next meeting.
Message ends
That's it for this week! Thanks for reading, and see you next week.
17 Oct 2025 4:25pm GMT
Sam Thursfield: Status update, 17/10/2025
Greetings readers. I'm writing to you from a hotel room in Manchester which I'm currently sharing with a variant of COVID 19. We are listening to disco funk music.
This virus prevents me from working or socializing, but at least I have time to do some cyber-janitorial tasks like updating my "dotfiles" (which hold the configuration for all the programs I use on Linux, stored in Git… for those who aren't yet converts).
I also caught up with some big upcoming changes in the GNOME 50 release cycle - more on that below.
nvim
I picked up Vim as my text editor ten years ago while working on a very boring project. This article by Jon Beltran de Heredia, "Why, oh WHY, do those #?@! nutheads use vi?" sold me on the key ideas: you use "normal mode" for everything, which gives you powerful and composable edit operations. I printed out this Vim quick reference card by Michael Goerz and resolved to learn one new operation every day.
It worked and I've been a convert ever since. Doing consultancy work makes you a nomad: often working via SSH or WSL on other people's computers. So I never had the luxury of setting up an IDE like GNOME Builder, or using something that isn't packaged in 99% of distros. Luckily Vim is everywhere.
Over the years, I read a newsletter named Vimtricks and I picked up various Vim plugins like ALE, ctrlp, and sideways. But there's a problem: some of these depend on extra Vim features like Python support. If a required feature is missing, you get an error message that appears on like… every keystroke:

In this case, on a Debian 12 build machine, I could work around by installing the vim-gtk3 package. But it's frustrating enough that I decided it was time to try Neovim.
The Neovim project began around the time I was switching to Vim, and is based on the premise that "Vim is, without question, the worst C codebase I have seen."
So far it's been painless to switch and everything works a little better. The :terminal feels better integrated. I didn't need to immediately disable mouse mode. I can link to online documentation! The ALE plugin (which provides language server integration) is even packaged in Fedora.
I'd send a screenshot but my editor looks… exactly the same as before. Boring!

I also briefly tried out Helix, which appears to take the good bits of Vim (modal editing) and run in a different direction (visible selection and multiple cursors). I need a more boring project before I'll be able to learn a completely new editor. Give me 10 years.
Endless OS 7
I've been working flat out on Endless OS 7, as last month. Now that the basics work and the system boots, we were mainly looking at integrating Endless-specific Pay as you Go functionality that they use for affordable laptop programs.
I learned more than I wanted to about the Linux early boot process, particularly the dracut-ng initramfs generator (one of many Linux components that seems to be named after a town in Massachusetts).
GNOME OS actually dropped Dracut altogether, in "vm-secure: Get rid of dracut and use systemd's ukify" by Valentin David, and now uses a simple Python script. A lot of Dracut's features aren't necessary for building atomic, image-based distros. For EOS we decided to stick with Dracut, at least for now.
So we get to deal with fun changes such as the initramfs growing from 90MB to 390MB after we updated to the latest Dracut. This is affecting Fedora too (LWN: "Last-minute /boot boost for Fedora 43").
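When an initramfs balloons like that, dracut's own tooling helps find the culprit. A sketch, assuming the usual /boot image naming:

```shell
# Overall size of the current initramfs image:
du -h /boot/initramfs-"$(uname -r)".img

# lsinitrd (shipped with dracut) lists the dracut modules and files
# inside the image, which shows what got pulled in:
lsinitrd /boot/initramfs-"$(uname -r)".img | head -n 40
```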
I requested time after the contract finishes to write up a technical article on the work we did, so I won't go into more details yet. Watch this space!
GNOME 50
I haven't had a minute to look at upstream GNOME this month, but there are some interesting things cooking there.
Jordan merged the GNOME OS openQA tests into the main gnome-build-meta repo. This is a simple solution to a number of basic questions we had around testing, such as, "how do we target tests to specific versions of GNOME?".
We separated the tests out of gnome-build-meta because, at the time, each new CI pipeline would track new versions of each GNOME module. This meant, firstly that pipelines could take anywhere from 10 minutes to 4 hours rebuilding a disk image before the tests even started, and secondly that the system under test would change every time you ran the pipeline.
While that sounds dumb, it worked this way for historical reasons: GNOME OS has been an under-resourced, ad-hoc project ongoing since 2011, whose original goal was simply to continuously build: already a huge challenge if you remember GNOME in the early 2010s. Of course, such a CI pipeline is highly counterproductive if you're trying to develop and review changes to the tests, and not the system: so the separate openqa-tests repo was a necessary step.
Thanks to Abderrahim's work in 2022 ("Commit refs to the repository" and "Add script to update refs"), plus my work on a tool to run the openQA tests locally before pushing to CI (ssam_openqa), I hope we're not going to have those kinds of problems any more. We enter a brave new world of testing!
The next thing the openQA tests need, in my opinion, is dedicated test infrastructure. The shared Gitlab CI runners we have are in high demand. The openQA tests have timeouts, as they ultimately are doing this in a loop:
- Send an input event
- Wait for the system under test to react
If a VM is running on a test runner with overloaded CPU or IO then tests will start to time out in unhelpful ways. So, if you want to have better testing for GNOME, finding some dedicated hardware to run tests would be a significant help.
There are also some changes cooking in Localsearch thanks to Carlos Garnacho:
The first of these is a nicely engineered way to allow searching files on removable disks like external HDs. This should be opt-in: so you can opt in to indexing your external hard drive full of music, but your machine wouldn't be vulnerable to an attack where someone connects a malicious USB stick while your back is turned. (The sandboxing in localsearch makes it non-trivial to construct such an attack, but it would require a significantly greater level of security auditing before I'd make any guarantees about that).
The second of these changes is pretty big: in GNOME 50, localsearch will now consider everything in your homedir for indexing.
As Carlos notes in the commit message, he has spent years working on performance optimisations and bug fixes in localsearch to get to a point where he considers it reasonable to enable by default. From a design point of view, discussed in the issue "Be more encompassing about what get indexed", it's hard to justify a search feature that only surfaces a subset of your files.
I don't know if it's a great time to do this, but nothing is perfect and sometimes you have to take a few risks to move forwards.
There's a design, testing and user support element to all of this, and it's going to require help from the GNOME community and our various downstream distributors, particularly around:
- Widely testing the new feature before the GNOME 50 release.
- Making sure users are aware of the change and how to manage the search config.
- Handling an expected increase in bug reports and support requests.
- Highlighting how privacy-focused localsearch is.
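On the "manage the search config" point: localsearch (formerly tracker-miners) exposes its indexing settings through GSettings. The schema and key names below are from tracker-miners 3.x and may differ between versions, so treat this as a sketch rather than the definitive interface:

```shell
# Inspect what the file indexer is currently configured to crawl:
gsettings get org.freedesktop.Tracker3.Miner.Files index-recursive-directories

# Opt in to (or out of) indexing removable media:
gsettings set org.freedesktop.Tracker3.Miner.Files index-removable-devices true
```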
I never got time to extend the openQA tests to cover media indexing; it's not a trivial job. We will rely on volunteers and downstream testers to try out the config change as widely as possible over the next 6 months.
One thing that makes me support this change is that the indexer in Android devices already works like this: everything is scanned into a local cache, unless there's a .nomedia file. Unfortunately Google don't document how the Android media scanner works. But it's not like this is GNOME treading a radical new path.
The localsearch index lives in the same filesystem as the data, and never leaves your PC. In a world where Microsoft Windows can now send your boss screenshots of everything you looked at, GNOME is still very much on your side. Let's see if we can tell that story.
17 Oct 2025 4:16pm GMT
Michael Meeks: 2025-10-17 Friday
- Plugged through mail & tickets. Call with Dave. Sync with Laser, chat with a partner.
- Published the next strip exploring the somewhat perverse way that those who do the most work are often blamed for not having done more:
17 Oct 2025 2:40pm GMT
This Week in GNOME: #221 Virus Season
Update on what happened across the GNOME project in the week from October 10 to October 17.
GNOME Circle Apps and Libraries
Podcasts ↗
Podcast app for GNOME.
alatiera reports
A new release of Podcasts is out! Version 25.3 introduces the long-awaited episode chapters! Additionally it includes performance improvements and interface polish, especially for mobile devices.
Available now only on Flathub
Third Party Projects
Alain reports
Planify 4.15.1 - A smoother, more focused experience
Planify 4.15.1 introduces a brand-new Markdown editor, Focus Mode, animated progress bars, improved keyboard navigation, and better translation management through Weblate. This release also brings numerous stability fixes and UI refinements that make task management faster, more fluid, and delightful.
Vladimir Kosolapov announces
This week I released Lenspect - a lightweight security threat scanner powered by VirusTotal.
In almost 11 years, this is the first native GUI VirusTotal client developed specifically for the Linux platform, using a modern GNOME technology stack. Stay tuned for updates to try out new features in the next versions.
Check out the project on GitHub
Alexander Vanhee reports
Bazaar got a pretty big update this week. I added the in-app screenshot viewer (featuring zoom) and the featured apps carousel, as seen on the Flathub site. Kolumni worked on a custom rendering engine for app descriptions, featuring a nicer multi-line list item experience. She also added a custom animated pending state for the Global Progress Bar, shown whenever the current task has no associated percentage.
Please check out these changes on Flathub
Bilal Elmoussaoui reports
Today I have finished all the remaining missing bits of the Rust re-implementation of gnome-keyring to be spec compatible. It is only missing PAM integration for automatically unlocking the keyring when you log in, otherwise most of the features should just work. The code source is available at https://github.com/bilelmoussaoui/oo7/tree/main/server, any help with testing would be appreciated. Thanks!
Quadrapassel ↗
Fit falling blocks together.
Will Warner reports
Quadrapassel 49.1 is out! This release improves upon 49.0 by updating some of its dependencies, fixing bugs, and polishing the UI. New in 49.1:
- Updated translations: Occitan, Chinese (China), Brazilian Portuguese, Slovenian, Ukrainian, Georgian
- Improved controller support and controller mappings
- Replaced the theme dialog with one that is easier to use
- Improved the scores dialog

You can check it out on Flathub!
Pipeline ↗
Follow your favorite video creators.
schmiddi reports
I've released version 3.1.0 of Pipeline. Starting with this release, Pipeline now fetches data about YouTube videos directly from YouTube instead of proxying over Piped. This is due to pretty much no public Piped instances working anymore. If you have a private Piped instance you want to use, you can still switch back to using Piped in the settings. This change also speeds up fetching the feed of videos a lot, for my personal feed by a factor of about 20.
Fractal ↗
Matrix messaging app for GNOME written in Rust.
Kévin Commaille reports
Ah, Autumn… The trees are wearing their warmest colors, the wine harvest is ending, developers are preparing to hibernate… and Fractal 13.rc is here!
Our repository has been relatively quiet since the beta release, with mostly work on bug fixes for our new audio player, and a bit of code refactoring.
As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.
It is available to install via Flathub Beta, see the instructions in our README.
As the version implies, it should be mostly stable and we expect to only include minor improvements until the release of Fractal 13.
If you want to join the fun, you can try to fix one of our newcomers issues. We are always looking for new contributors!
GNOME Foundation
Allan Day reports
Another weekly GNOME Foundation update is available! Highlights this week include a new budget, new Circle Committee members, GIMP development grants, and more.
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
17 Oct 2025 12:00am GMT
16 Oct 2025
Planet GNOME
Michael Meeks: 2025-10-16 Thursday
- Up early, more Marketing & Sales updates, lunch; tour from Doug of the Lava-Lab - driving all sorts of KernelCI and other testing goodness.
- Bid 'bye to Eloy, wrapped up, fixed Alina's bluetooth setup, dropped her into Cambridge & home.
- Out for new antibiotics - started on the mail and admin backlog, made some progress.
16 Oct 2025 9:00pm GMT
15 Oct 2025
Planet GNOME
Jussi Pakkanen: Building Android apps with native code using Meson
Building code for Android with Meson has long been possible, but a bit hacky and not particularly well documented. Recently some new features have landed in Meson main, which make the experience quite a bit nicer. To demonstrate, I have updated the Platypus sample project to build and run on Android. The project itself aims to demonstrate how you'd build a GUI application with shared native code on multiple platforms, using native widget toolkits on each of them. Currently it supports GTK, Win32, Cocoa, WASM and Android. In addition to building the code it also generates native packages and installers.
It would be nice if you could build full Android applications with just a toolchain directly from the command line. As you start looking into how Android builds work you realize that this is not really the way to go if you want to preserve your sanity. Google has tied app building very tightly into Android Studio. Thus the simple way is to build the native code with Meson, Java/Kotlin code with Android Studio and then merge the two together.
The Platypus repo has a script called build_android.py, which does exactly this. The steps needed to get a working build are the following:
- Use Meson's env2mfile to introspect the current Android Studio installation and create cross files for all discovered Android toolchains
- Set up a build directory for the toolchain version/ABI/CPU combination given, defaulting to the newest toolchain and arm64-v8a
- Compile the code.
- Install the generated shared library in the source tree under <app source dir>/jniLibs/<cpu>.
- Android Studio will then automatically install the built libs when deploying the project.
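The steps above can be sketched by hand roughly as follows. File and library names here are illustrative, not the ones build_android.py actually uses, and env2mfile needs the NDK toolchain discoverable (e.g. via CC) to produce a useful cross file:

```shell
# Generate a cross file describing the Android toolchain. Newer Meson
# can introspect an Android Studio install; the flags below are the
# generic fallback and assume CC points at the NDK clang.
meson env2mfile --cross -o android-arm64-v8a.ini \
    --system android --cpu-family aarch64 --cpu aarch64 --endian little

# Configure and build the native code out of tree:
meson setup build-android --cross-file android-arm64-v8a.ini
meson compile -C build-android

# Copy the shared library where Android Studio expects JNI libs
# (library name and app layout here are hypothetical):
cp build-android/libplatypus.so app/src/main/jniLibs/arm64-v8a/
```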
Here is a picture of the end result. The same application is running both in an emulator (x86_64) and a physical device (arm64-v8a).
The main downside is that you have to run the native build step by hand. It should be possible to make this a custom build step in Gradle but I've never actually written Gradle code so I don't know how to do it.
15 Oct 2025 11:32pm GMT
Gedit Technology blog: Mid-October News
Misc news about the gedit text editor, mid-October edition! (Some sections are a bit technical).
Rework of the file loading and saving (continued)
The refactoring continues in the libgedit-gtksourceview module, this time to tackle a big class that has too many responsibilities. A utility is in development that will make it possible to delegate part of the work.
The utility is about character encoding conversion, with support for invalid bytes. It takes as input a single GBytes (the file content) and transforms it into a list of chunks. A chunk contains either valid (successfully converted) bytes or invalid bytes. The output format (the "list of chunks") is subject to change to improve memory consumption and performance.
Note that invalid bytes are allowed so that gedit can open really any kind of file.
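Why a dedicated utility is needed can be seen with plain iconv: standard conversion either stops at the first invalid sequence or drops data, while an editor must keep those bytes around so the file round-trips. A small demonstration, assuming GNU iconv:

```shell
# \377 (0xFF) is never valid in UTF-8.
printf 'hello \377 world\n' > /tmp/mixed.txt

# Strict conversion aborts at the invalid byte:
iconv -f UTF-8 -t UTF-8 /tmp/mixed.txt || echo "stopped at invalid byte"

# -c silently discards the byte -- fine for a filter, unacceptable for
# an editor that has to save the file back unchanged:
iconv -c -f UTF-8 -t UTF-8 /tmp/mixed.txt
```

The chunking approach instead keeps the invalid runs as their own chunks alongside the successfully converted ones.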
I must also note that this is quite sensitive work, at the heart of document loading for gedit. Normally all these refactorings and improvements will be worth it!
Progress in other modules
There has been some progress on other modules:
- gedit: version 48.1.1 has been released with a few minor updates.
- The Flatpak on Flathub: update to gedit 48.1.1 and the GNOME 49 runtime.
- gspell: version 1.14.1 has been released, mainly to pick up the updated translations.
GitHub Sponsors
In addition to Liberapay, you can now support the work that I do on GitHub Sponsors. See the gedit donations page.
Thank you ❤️
15 Oct 2025 10:00am GMT
Victor Ma: This is a test post
Over the past few weeks, I've been working on improving some test code that I had written.
Refactoring time!
My first order of business was to refactor the test code. There was a lot of boilerplate, which made it difficult to add new tests, and also created visual clutter.
For example, have a look at this test case:
static void
test_egg_ipuz (void)
{
  g_autoptr (WordList) word_list = NULL;
  IpuzGrid *grid;
  g_autofree IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;

  word_list = get_broda_word_list ();
  grid = create_grid (EGG_IPUZ_FILE_PATH);
  clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);

  g_assert_cmpint (word_array_len (clue_matches), ==, 3);
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 0)),
                   ==,
                   "EGGS");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 1)),
                   ==,
                   "EGGO");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 2)),
                   ==,
                   "EGGY");
}
That's an awful lot of code just to say:
- Use the EGG_IPUZ_FILE_PATH file.
- Run the word_list_find_clue_matches() function on the 2-Across clue.
- Assert that the results are ["EGGS", "EGGO", "EGGY"].
And this was repeated in every test case, and needed to be repeated in every new test case I added. So, I knew that I had to refactor my code.
Fixtures and functions
My first step was to extract all of this setup code:
g_autoptr (WordList) word_list = NULL;
IpuzGrid *grid;
g_autofree IpuzClue *clue = NULL;
g_autoptr (WordArray) clue_matches = NULL;
word_list = get_broda_word_list ();
grid = create_grid (EGG_IPUZ_FILE_PATH);
clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
clue_matches = word_list_find_clue_matches (word_list, clue, grid);
To do this, I used a fixture:
typedef struct {
  WordList *word_list;
  IpuzGrid *grid;
} Fixture;

static void
fixture_set_up (Fixture *fixture, gconstpointer user_data)
{
  const gchar *ipuz_file_path = (const gchar *) user_data;

  fixture->word_list = get_broda_word_list ();
  fixture->grid = create_grid (ipuz_file_path);
}

static void
fixture_tear_down (Fixture *fixture, gconstpointer user_data)
{
  g_object_unref (fixture->word_list);
}
My next step was to extract all of this assertion code:
g_assert_cmpint (word_array_len (clue_matches), ==, 3);
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 0)),
                 ==,
                 "EGGS");
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 1)),
                 ==,
                 "EGGO");
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 2)),
                 ==,
                 "EGGY");
To do this, I created a new function that runs word_list_find_clue_matches() and asserts that the result equals an expected_words parameter.
static void
test_clue_matches (WordList          *word_list,
                   IpuzGrid          *grid,
                   IpuzClueDirection  clue_direction,
                   guint              clue_index,
                   const gchar       *expected_words[])
{
  const IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;
  g_autoptr (WordArray) expected_word_array = NULL;

  clue = get_clue (grid, clue_direction, clue_index);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);
  expected_word_array = str_array_to_word_array (expected_words, word_list);

  g_assert_true (word_array_equals (clue_matches, expected_word_array));
}
After all that, here's what my test case looked like:
static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  test_clue_matches (fixture->word_list,
                     fixture->grid,
                     IPUZ_CLUE_DIRECTION_ACROSS,
                     2,
                     (const gchar *[]){"EGGS", "EGGO", "EGGY", NULL});
}
Much better!
Macro functions
But as great as that was, I knew that I could take it even further, with macro functions.
I created a macro function to simplify test case definitions:
#define ASSERT_CLUE_MATCHES(DIRECTION, INDEX, ...)        \
  test_clue_matches (fixture->word_list,                  \
                     fixture->grid,                       \
                     DIRECTION,                           \
                     INDEX,                               \
                     (const gchar *[]){__VA_ARGS__, NULL})
Now, test_egg_ipuz() looked like this:
static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  ASSERT_CLUE_MATCHES (IPUZ_CLUE_DIRECTION_ACROSS, 2, "EGGS", "EGGO", "EGGY");
}
I also made a macro function for the test case declarations:
#define ADD_IPUZ_TEST(test_name, file_name)        \
  g_test_add ("/clue_matches/" #test_name,         \
              Fixture,                             \
              "tests/clue-matches/" #file_name,    \
              fixture_set_up,                      \
              test_name,                           \
              fixture_tear_down)
Which turned this:
g_test_add ("/clue_matches/test_egg_ipuz",
            Fixture,
            EGG_IPUZ,
            fixture_set_up,
            test_egg_ipuz,
            fixture_tear_down);
Into this:
ADD_IPUZ_TEST (test_egg_ipuz, egg.ipuz);
An unfortunate bug
So, picture this: You've just finished refactoring your test code. You add some finishing touches, do a final test run, look over the diff one last time…and everything seems good. So, you open up an MR and start working on other things.
But then, the unthinkable happens: the CI pipeline fails! And apparently, it's due to a test failure? But you ran your tests locally, and everything worked just fine. (You run them again just to be sure, and yup, they still pass.) And what's more, it's only the Flatpak CI tests that failed. The native CI tests succeeded.
So…what, then? What could be the cause of this? I mean, how do you even begin debugging a test failure that only happens in a particular CI job and nowhere else? Well, let's just try running the CI pipeline again and see what happens. Maybe the problem will go away. Hopefully, the problem goes away.
…
Nope. Still fails.
…
Rats.
Well, I'll spare you the gory details of what it took for me to finally figure this one out. But the cause of the bug was me accidentally freeing an object that I should never have freed.
This meant that the corresponding memory segment could be (but, importantly, did not necessarily have to be) filled with garbage data. And this is why only the Flatpak job's test run failed… well, at first, anyway. By changing around some of the test cases, I was able to get the native CI tests and local tests to fail. And this is what eventually clued me into the true nature of this bug.
So, after spending the better part of two weeks, here is the fix I ended up with:
@@ -94,7 +94,7 @@ test_clue_matches (WordList *word_list,
guint clue_index,
const gchar *expected_words[])
{
- g_autofree IpuzClue *clue = NULL;
+ const IpuzClue *clue = NULL;
g_autoptr (WordArray) clue_matches = NULL;
g_autoptr (WordArray) expected_word_array = NULL;
15 Oct 2025 12:00am GMT
14 Oct 2025
Planet GNOME
Jordan Petridis: Nightly Flatpak CI gets a cache
Recently I got around to tackling a long-standing issue for good. There were multiple attempts in the past 6 years to cache flatpak-builder artifacts with Gitlab, but none had worked so far.
On the technical side of things, flatpak-builder relies heavily on extended attributes (xattrs) on files to do cache validation. Using gitlab's built-in cache or artifacts mechanisms results in a plain zip archive which strips all the attributes from the files, causing the cache to always be invalid once restored. Additionally the hardlinks/symlinks in the cache break. One workaround for this is to always tar the directories and then manually extract them after they are restored.
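The tar workaround mentioned above looks roughly like this; GNU tar can carry the xattrs and preserve the links that a plain zip archive loses (paths are illustrative):

```shell
# Before uploading the cache: archive with extended attributes and
# permissions preserved (GNU tar needs --xattrs on both ends).
tar --xattrs -cpf cache.tar .flatpak-builder/

# After the cache is restored in a later job: unpack it the same way so
# the checksums flatpak-builder stores in xattrs survive.
tar --xattrs -xpf cache.tar
```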
On the infrastructure side of things, we stumble once again into Gitlab. When a cache or artifact is created, it's uploaded to the Gitlab instance's storage so it can later be reused/redownloaded on any runner. While this is great, it also quickly ramps up the network egress bill we have to pay, along with storage. And since it's a public Gitlab instance that anyone can make requests against, it gets out of hand fast.
A couple of weeks ago Bart pointed me to Flathub's workaround for this same problem. It comes down to making it someone else's problem, ideally someone who is willing to fund FOSS infrastructure. We can use ORAS to wrap files and directories into an OCI wrapper and publish it to public registries. And it worked. Quite handy! OCI images are the new tarballs.
Now when a pipeline runs against your default branch (assuming it's protected), it will create a cache artifact and upload it to the currently configured OCI registry. Afterwards, any build, including Merge Request pipelines, will download the image, extract the artifacts and check how much of it is still valid.
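The ORAS round-trip is conceptually just a push and a pull around a tarball. Registry, repository and tag below are made-up placeholders, not GNOME's actual configuration:

```shell
# On a default-branch pipeline: wrap the cache in a tarball and publish
# it to an OCI registry as a plain artifact.
tar -cf cache.tar .flatpak-builder/
oras push ghcr.io/example/flatpak-cache:main cache.tar

# On any later pipeline (including MR pipelines): fetch and unpack it,
# then let flatpak-builder decide which cached artifacts are still valid.
oras pull ghcr.io/example/flatpak-cache:main
tar -xf cache.tar
```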
From some quick tests and numbers, GNOME Builder went from a ~16 minute build to 6 minutes for our x86_64 runners. While on the AArch64 runner the impact was even bigger, going from 50 minutes to 16 minutes. Not bad. The more modules you are building in your manifest, the more noticeable it is.
Unlike Buildstream, there is no Content Addressable Server, and flatpak-builder itself isn't aware of the artifacts we publish, nor can it associate them with its cache keys. The OCI/ORAS cache artifacts are a manual and somewhat hacky solution, but they work well in practice until we have better tooling. To optimize for fewer cache misses, consider building modules from pinned commits/tags/tarballs, and building modules from moving branches as late as possible.
If you are curious about the details, take a look at the related Merge Request in the templates repository and the follow-up commits.
Free Palestine 
14 Oct 2025 6:00pm GMT
13 Oct 2025
Jordan Petridis: The Flatpak Runtime drops the 32-bit compatibility extension
Last month GNOME 49 was released, very smooth overall, especially given the amount of changes across the entire stack that we shipped.
One thing that is missing, however, and that flew under the radar, is that the 32-bit compatibility extension (org.gnome.Platform.i386.Compat) of the GNOME Flatpak Runtime is now gone. We were planning to make an announcement earlier, but life got in the way.
That extension is a 32-bit version of the Runtime that applications could request. It is mostly useful so that Wine can run against a 32-bit environment. However, your Wine or legacy applications most likely don't require a 32-bit build of GTK 4, libadwaita, or WebKitGTK.
We rebuild all of GNOME from the latest git commits of each module, at least twice a day. This includes two builds of WebKitGTK, a build of mozjs, and a couple of Rust libraries and applications, multiplied by each architecture we support. This is no small task for our CI machines to handle. There were also a couple of updates blocked on 32-bit-specific build failures, as projects rarely test for that before merging code. Suffice it to say that supporting builds almost nobody used or needed was a universal annoyance across developers and projects.
When we lost our main pool of donated CI machines and builders, the first thing on the chopping block was the 32-bit build of the runtime. It affected no applications directly, as none rely on the Nightly version of the extension, but it would affect some applications on Flathub once released.
In order to keep those applications working, and to avoid overloading our runners again, we thought about another approach. In theory it should be possible to make the runtime compatible with the org.freedesktop.Platform.Compat.i386 extension point instead. We already use freedesktop-sdk as the base for the runtime, so we did not expect many issues.
There were exactly four applications that made use of the GNOME-specific extension: two on Flathub, one on Flathub Beta, and one archived.
Abderrahim and I worked on porting all the applications to the GNOME 49 runtime and have Pull Requests open. The developers of Bottles were a great help in our testing, and the subsequent PR is almost ready to be merged. Lutris and Minigalaxy need some extra work to upgrade the runtime, but for unrelated reasons.
Since everything was working, we never republished the i386 GNOME compatibility extension in Nightly, and thus didn't for GNOME 49 either. As a result, the GNOME Runtime is only available for x86_64 and AArch64.
A couple of years ago we dropped the regular armv7 and i386 builds of the Runtime. With the i386 compatibility extension also gone, we no longer have any 32-bit targets that we QA before releasing GNOME as a whole. Previously, all modules we released were guaranteed to at least compile for i386/x86, but going forward that will not be the case.
Some projects, for example GLib, have their own CI specifically for 32-bit architectures. What was a project-wide guarantee before is now a per-project opt-in. While many maintainers will no longer go out of their way to fix 32-bit-specific issues, they will most likely still review and merge any patches sent their way.
If you are a distributor relying on 32-bit builds of GNOME, you will now be expected to debug and fix issues on your own for the majority of the projects. Alternatively, you could get involved upstream and help avoid further bit rot of the 32-bit builds.
Free Palestine 
13 Oct 2025 6:00am GMT
Bilal Elmoussaoui: Testing a Rust library - Code Coverage
It has been a couple of years since I started working on a Rust library called oo7, a Secret Service client implementation. The library ended up also supporting the per-app keyrings of sandboxed applications using the Secret portal, with a seamless API that makes usage from the application side straightforward.
The project, with time, grew support for various components:
- oo7-cli: A secret-tool replacement, but much better, as it allows interacting not only with the Secret service on the D-Bus session bus but also with any keyring. For example, oo7-cli --app-id com.belmoussaoui.Authenticator list lets you open the keyring of the sandboxed app with app-id com.belmoussaoui.Authenticator and list its contents, something that is not possible with secret-tool.
- oo7-portal: A server-side implementation of the Secret portal mentioned above. Straightforward, thanks to my other library ASHPD.
- cargo-credential-oo7: A cargo credential provider built using oo7 instead of libsecret.
- oo7-daemon: A server-side implementation of the Secret service.
The last component was kickstarted by Dhanuka Warusadura, as we already had the foundations for it in the client library, especially the reimplementation of gnome-keyring's file backend. The project is progressing slowly, but it is almost there!
The problem with replacing such a sensitive component as gnome-keyring-daemon is that you have to make sure that very sensitive user data is not corrupted, lost, or made inaccessible. For that, we need to ensure that both the file backend implementation in the oo7 library and the daemon implementation itself are well tested.
That is why I spent my weekend, as well as a whole day off, working on improving the test suite of the wannabe core component of the Linux desktop.
Coverage Report
One metric that can give the developer some insight into which lines of code or functions of the codebase are executed when running the test suite is code coverage.
In order to get the coverage of a Rust project, you can use a project like Tarpaulin, which integrates with the Cargo build system. For a simple project, a command like this, after installing Tarpaulin, can give you an HTML report:
cargo tarpaulin \
--package oo7 \
--lib \
--no-default-features \
--features "tracing,tokio,native_crypto" \
--ignore-panics \
--out Html \
--output-dir coverage
Except that in our use case it is slightly more complicated. The client library supports switching between Rust-native cryptographic primitive crates and OpenSSL. We must ensure that both are tested.
For that, we can export our report in LCOV format for native crypto, do the same for OpenSSL, and then combine the results using a tool like grcov.
mkdir -p coverage-raw
cargo tarpaulin \
--package oo7 \
--lib \
--no-default-features \
--features "tracing,tokio,native_crypto" \
--ignore-panics \
--out Lcov \
--output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/native-tokio.info
cargo tarpaulin \
--package oo7 \
--lib \
--no-default-features \
--features "tracing,tokio,openssl_crypto" \
--ignore-panics \
--out Lcov \
--output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/openssl-tokio.info
and then combine the results with:
cat coverage-raw/*.info > coverage-raw/combined.info
grcov coverage-raw/combined.info \
--binary-path target/debug/ \
--source-dir . \
--output-type html \
--output-path coverage \
--branch \
--ignore-not-existing \
--ignore "**/portal/*" \
--ignore "**/cli/*" \
--ignore "**/tests/*" \
--ignore "**/examples/*" \
--ignore "**/target/*"
To make things easier, I added a bash script to the project repository that generates coverage for both the client library and the server implementation, as both are very sensitive and require intensive testing.
With that script in place, I also used it on CI to generate and upload the coverage reports at https://bilelmoussaoui.github.io/oo7/coverage/. The results were pretty bad when I started.
Testing
For the client side, most of the tests are straightforward to write; you just need to have a Secret service implementation running on the D-Bus session bus. Things get quite complicated when the methods you have to test require a Prompt, a mechanism the spec defines for prompting the user for a password to unlock the keyring, create a new collection, and so on. The prompter is usually provided by a system component. For now, we just skip those tests.
For the server side, it was mostly about setting up a peer-to-peer connection between the server and the client:
let guid = zbus::Guid::generate();
let (p0, p1) = tokio::net::UnixStream::pair().unwrap();
let (client_conn, server_conn) = tokio::try_join!(
    // Client
    zbus::connection::Builder::unix_stream(p0).p2p().build(),
    // Server
    zbus::connection::Builder::unix_stream(p1)
        .server(guid)
        .unwrap()
        .p2p()
        .build(),
)
.unwrap();
Thanks to the design of the client library, which keeps the low-level APIs under oo7::dbus::api, I could straightforwardly write a bunch of server-side tests already.
There are still a lot of tests that need to be written and a few missing bits to ensure oo7-daemon is in an acceptable shape to be proposed as an alternative to gnome-keyring.
Don't overdo it
The coverage report is not meant to be targeted at 100%. It's not a video game. You should focus only on the critical parts of your code that must be tested. Testing a Debug impl or a From trait (if it is straightforward) is not really useful, other than giving you a small dose of dopamine from "achieving" something.
Till then, may your coverage never reach 100%.
13 Oct 2025 12:00am GMT
11 Oct 2025
Hubert Figuière: Dev Log September 2025
Not as much as I wanted to do was done in September.
libopenraw
Extracting more of the calibration values for colour correction on DNG. Currently working on fixing the purple colour cast.
Added support for the Nikon ZR and Canon EOS C50.
ExifTool
Submitted some metadata updates to ExifTool, because it's nice to have, and also because libopenraw uses some of these tables, autogenerated: I have a Perl script to generate Rust code from them (it used to generate C++).
Niepce
Finally merged the develop branch with all the import dialog work, after having requested that the module be removed from Damned Lies so as not to strain the translators, as there is a long way to go before we can freeze the strings.
Supporting cast
Among the many packages I maintain / update on Flathub, LightZone is a digital photo editing application written in Java1. Updating it to the latest runtime 25.08 caused it to ignore the HiDPI setting. It will honour the GDK_SCALE environment variable, but this isn't set. So I wrote the small command line tool gdk-scale to output the value. See gdk-scale on gitlab. And another patch in the wrapper script.
HiDPI support remains a mess across the board. FLTK just recently gained support for it (it's used by a few audio plugins).
1. Don't try this at home.
11 Oct 2025 12:00am GMT
10 Oct 2025
Sebastian Wick: SO_PEERPIDFD Gets More Useful
A while ago I wrote about the limited usefulness of SO_PEERPIDFD for authenticating sandboxed applications. The core problem was simple: while pidfds gave us a race-free way to identify a process, we still had no standardized way to figure out what that process actually was: which sandbox it ran in, what application it represented, or what permissions it should have.
The situation has improved considerably since then.
cgroup xattrs
Cgroups now support user extended attributes. This feature allows arbitrary metadata to be attached to cgroup inodes using standard xattr calls.
We can change flatpak (or snap, or any other container engine) to create a cgroup for each application instance it launches, and attach metadata to it using xattrs. This metadata can include the sandboxing engine, application ID, instance ID, and any other information a compositor or D-Bus service might need.
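To make that concrete, here is a minimal sketch of the launcher side in Python. The cgroup path and the user.* key names are purely illustrative assumptions; no naming convention has been standardized.

```python
import os

def xattr_items(metadata: dict[str, str]) -> dict[str, bytes]:
    """Map plain metadata keys to "user."-prefixed xattr names with bytes values."""
    return {f"user.{key}": value.encode() for key, value in metadata.items()}

def tag_instance_cgroup(cgroup_path: str, metadata: dict[str, str]) -> None:
    """Attach sandbox metadata to an app instance's cgroup as user xattrs."""
    for name, value in xattr_items(metadata).items():
        os.setxattr(cgroup_path, name, value)

# A container engine could then call something like (path and keys hypothetical):
# tag_instance_cgroup(
#     "/sys/fs/cgroup/user.slice/app-org.example.App-1234.scope",
#     {"sandbox-engine": "flatpak", "app-id": "org.example.App", "instance-id": "1234"},
# )
```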
Every process belongs to a cgroup, and you can query which cgroup a process belongs to through its pidfd - completely race-free.
Standardized Authentication
Remember the complexity from the original post? Services had to implement a different lookup mechanism for each sandbox technology:
- For flatpak: look in /proc/$PID/root/.flatpak-info
- For snap: shell out to snap routine portal-info
- For firejail: no solution
- …
All of this goes away. Now there's a single path:
- Accept a connection on a socket
- Use SO_PEERPIDFD to get a pidfd for the client
- Query the client's cgroup using the pidfd
- Read the cgroup's user xattrs to get the sandbox metadata
This works the same way regardless of which sandbox engine launched the application.
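A minimal sketch of that service-side lookup, assuming hypothetical user.* xattr keys (the helper only parses the cgroup v2 entry format; the pid is resolved through the pidfd's fdinfo):

```python
import os

def parse_cgroup_path(proc_cgroup: str) -> str:
    """Extract the cgroup v2 path from the content of /proc/<pid>/cgroup."""
    for line in proc_cgroup.splitlines():
        # On the unified hierarchy an entry looks like:
        #   0::/user.slice/user-1000.slice/app-org.example.App-1234.scope
        hierarchy_id, _controllers, path = line.split(":", 2)
        if hierarchy_id == "0":
            return path
    raise ValueError("no cgroup v2 entry found")

def client_metadata(pidfd: int) -> dict[str, str]:
    """Look up hypothetical sandbox metadata for the process behind a pidfd.

    The pidfd would come from SO_PEERPIDFD; the "user.*" names below are
    illustrative, not an agreed-upon convention.
    """
    # A pidfd's fdinfo exposes the pid; after reading /proc/<pid>, a service
    # can confirm the process is still alive via the pidfd (it polls readable
    # once the target exits), keeping the lookup effectively race-free.
    with open(f"/proc/self/fdinfo/{pidfd}") as f:
        fields = dict(
            line.split(":\t", 1) for line in f.read().splitlines() if ":\t" in line
        )
    pid = fields["Pid"].strip()
    with open(f"/proc/{pid}/cgroup") as f:
        cgroup_dir = "/sys/fs/cgroup" + parse_cgroup_path(f.read())
    metadata = {}
    for key in ("user.sandbox-engine", "user.app-id", "user.instance-id"):
        try:
            metadata[key] = os.getxattr(cgroup_dir, key).decode()
        except OSError:
            pass  # attribute not set, or filesystem refused the read
    return metadata
```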
A Kernel Feature, Not a systemd One
It's worth emphasizing: cgroups are a Linux kernel feature. They have no dependency on systemd or any other userspace component. Any process can manage cgroups and attach xattrs to them. The process only needs appropriate permissions and is restricted to a subtree determined by the cgroup namespace it is in. This makes the approach universally applicable across different init systems and distributions.
To support non-Linux systems, we might even be able to abstract away the cgroup details, by providing a varlink service to register and query running applications. On Linux, this service would use cgroups and xattrs internally.
Replacing Socket-Per-App
The old approach - creating dedicated wayland, D-Bus, etc. sockets for each app instance and attaching metadata to the service which gets mapped to connections on that socket - can now be retired. The pidfd + cgroup xattr approach is simpler: one standardized lookup path instead of mounting special sockets. It works everywhere: any service can authenticate any client without special socket setup. And it's more flexible: metadata can be updated after process creation if needed.
For compositor and D-Bus service developers, this means you can finally implement proper sandboxed client authentication without needing to understand the internals of every container engine. For sandbox developers, it means you have a standardized way to communicate application identity without implementing custom socket mounting schemes.
10 Oct 2025 5:04pm GMT
Allan Day: GNOME Foundation Update, 2025-10-10
It's Friday, which means that it's time for another GNOME Foundation update. Here's what's been happening in the Foundation over the past 7 days.
Membership rule change
The GNOME Foundation's members are a vitally important part of the organisation, and this week we changed our membership requirements to make them more inclusive. This change required legal input, and was one of the reasons that I had a call with a lawyer last week. With that done we have been able to drop the requirement that members provide a legally registered name: as long as the name you provide is used elsewhere and we have a valid email address, that should be enough.
I'd like to thank community members for their patience while we dealt with this matter. I'd also like to thank Andrea Veri for helping with the change, as well as all the work he's done over the years on the GNOME Foundation Membership Committee. He's a hugely important part of the Foundation and has been tireless over many years helping to keep our membership running smoothly. Thank you Andrea!
If you've wanted to apply for membership in the past, but have been put off by the name requirement, I hope you'll feel encouraged to apply now.
Board meeting preparation
The Board of Directors has a regular meeting scheduled for next Tuesday, and there is quite a lot on the agenda, so this week has been taken up with preparing the various motions and policy changes that will be presented for ratification.
This is how Boards of Directors are generally supposed to work, with policies, reports, and plans being prepared ahead of time, so that the Board can then review and/or authorize them. I'm glad that we seem to be working in that model.
Digital wellbeing
There was another team call for our digital wellbeing program this week. As mentioned in a previous post, this program is in its final stages, and we are meeting regularly to review progress.
The project is currently focusing on delivering essential parental controls features, primarily screen time limits for children. This will make GNOME into a viable platform for children, young people and their carers: an important demographic that we want to serve better.
This week Ignacy did a demo of the work so far, showing off the updated Parental Controls app, screen limits and bedtime features. Sam Hewitt from the design team joined the call to provide UX review, and identified a list of papercut issues that the team will be working on as the project draws to a conclusion.
Testing these new digital wellbeing features can be challenging, due to them requiring development branches in multiple modules, so Ignacy produced a custom GNOME OS image with the changes. If you're curious, you can try it. (Sidenote: this is a great demonstration of GNOME OS and its associated tooling.)
Staff vacations
Several staff members have been taking a well-earned break this week. The past few months have been a busy period for our staff, so now is a good time for a recharge. I hope everyone comes back full of energy!
Credit card policy
The Foundation provides credit cards for certain staff members and officers, as a low-friction payment method for some types of expenses. We had some spending and reporting rules defined in the platform we use, and we haven't had any issues around credit card usage, but we didn't have a written policy, so this week I introduced one. This will make it clearer when credit cards should and shouldn't be used, and make sure that our corporate credit card usage follows best practice.
Message ends
That's it! Thanks for reading, and see you next week!
10 Oct 2025 3:52pm GMT
This Week in GNOME: #220 Exemplary Snake
Update on what happened across the GNOME project in the week from October 03 to October 10.
GNOME Core Apps and Libraries
Libadwaita ↗
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) 🏳️⚧️🏳️🌈 reports
the new sidebar widgets have had a few additions in the last 2 weeks:
- AdwSidebar items can now have tooltips, a context menu and a drop target
- AdwViewSwitcherSidebar now has the same search API as the regular sidebar (:filter and :placeholder)
- Both types of sidebars can now activate items when hovering them during a drag-n-drop operation, same as view switchers. AdwSidebarItem has API for disabling that, as sometimes it's unwanted
Third Party Projects
JumpLink announces
Learn 6502 Assembly v0.6.0 is out!
Learn 6502 Assembly is a modern, native GNOME application that provides a complete learning environment for 6502 assembly language programming.
This update adds an all-new Examples section with ready-to-run 6502 assembly programs, including Snake and a Stack example! You can now even share your own code examples directly from within the app, this will open a GitHub pull request automatically.
Grab it on Flathub
Phosh ↗
A pure wayland shell for mobile devices.
Guido announces
Phosh 0.50.0 is out:
The phone shell itself got updated to work with GNOME 49. The compositor gained workspace support and the on-screen keyboard now deletes complete words on backspace long press, better handles partially deleted words (when using the presage completer), automatically swaps punctuation and space after completing words and got a slight visual refresh.
There's more, see the full details at here
Parabolic ↗
Download web video and audio.
Nick says
Parabolic V2025.10.2 is here! This release contains support for new yt-dlp plugins and some bug fixes.
Here's the full changelog:
- Added support for the nsig decryption yt-dlp plugin
- Added support for the srt_fix yt-dlp plugin
- Added the ability to see exact yt-dlp error during validation
- Fixed an issue where incompatible OPUS audios would be selected on Windows
- Fixed an issue where no formats were available when preferred codecs were set
OS-Installer ↗
A (third-party) generic OS-Installer that can be customized by distributions.
Peter Eisenmann says
OS-Installer version 0.5 was released this week featuring these changes:
- Working translations independent of system languages
- Slideshow support, shown during installation
- Support for more scripting languages, e.g. Python and Lua
- Simpler disk/partition selection with expander rows
- Improved output terminal, copying support
- Config can be passed as CLI parameter
- Extended and simplified config options
- Faster search for timezone and region lists
- Internal refactorings for a cleaner code base
- Added translations (Farsi, Hebrew, Kabyle, Tamil, Vietnamese) and updated existing translations
Many thanks to all translators and everyone providing feedback! Special thanks to Clayton Craft for fixes and extra motivation ✨
Feel free to reach out in our Matrix chat for support.
Miscellaneous
aleasto says
Canonical has announced the release of Ubuntu 25.10 "Questing Quokka" featuring GNOME 49.
GNOME Foundation
Allan Day reports
A weekly update is available with highlights from the GNOME Foundation. This week's post covers membership requirement changes, Digital Wellbeing progress, Board meeting preparation, and more.
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
10 Oct 2025 12:00am GMT