23 Jan 2026

Planet GNOME

Luis Villa: two questions on software “sovereignty”

The EU looks to be getting more serious about software independence, often under the branding of "sovereignty". India has been taking this path for a while. (A Wikipedia article on that needs a lot of love.) I don't have coherent thoughts on this yet, but prompted by some recent discussions, two big questions:

First: does software sovereignty for a geopolitical entity mean:

  1. we wrote the software from the bottom up
  2. we can change the software as necessary (not just hypothetically, but concretely: the technical skills and organizational capacity exist and are experienced)
  3. we sysadmin it (again, concretely: real skills, not just the legal license to download it)
  4. we can download it

My understanding is that India increasingly demands #1 for important software systems, though apparently both their national desktop and mobile OSes are based on Ubuntu and Android, respectively, which would be more like #2. (FOSS only guarantees #4; it legally permits #2 and #3, but as I've said before, being legally permitted to do a thing is not the same as having the real capability to do it.)

As the EU tries to set open source policy it will be interesting to see whether they can coherently ask this question, much less answer it.

Second, and related: what would a Manhattan Project to make the EU reasonably independent in core operating system technologies (mobile, desktop, cloud) look like?

It feels like, if well-managed, such a project could have incredible spillovers for the EU. Besides no longer being held hostage when a US administration goes rogue, students would upskill; project management chops would be honed; new businesses would form. And (in the current moment) it could provide a real rationale and focus for the various EU AI Champions, which often currently feel like their purpose is to "be ChatGPT but not American".

But it would be a near-impossible project to manage well: it risks becoming, as Mary Branscombe likes to say, "three SAPs in a trenchcoat". (Perhaps a more reasonable goal is to be Airbus?)

23 Jan 2026 1:46am GMT

21 Jan 2026


Christian Schaller: Can AI help ‘fix’ the patent system?

So one thing I think anyone involved with software development over the last few decades can see is the problem of the "forest of bogus patents". I have recently been trying to use AI to look at patents in various ways. One idea I had was: "could AI help improve the quality of patents and free us from obvious ones?"

Let's start with the justification for patents existing at all. The most common argument for the patent system I hear is this one: "Patents require public disclosure of inventions in exchange for protection. Without patents, inventors would keep innovations as trade secrets, slowing overall technological progress." This reasoning makes sense to me, but it is also screamingly obvious that for it to hold true you need to ensure the patents granted are genuinely inventions that would otherwise stay hidden as trade secrets. If you allow patents on things that are obvious to someone skilled in the art, you are not enhancing technological progress, you are hampering it, because the next person along will be blocked from doing the same thing.

Based on this justification, the question then becomes: does, for example, the US Patent Office do a good job of filtering out obvious patents? I believe the answer is "No, they don't". Having worked in software for many decades now, it is very clear to me that the patent office does very little to prevent patents being approved for obvious things. There are many reasons why that happens, I think. First of all, if you are an expert in a field you would likely not be working as a case worker in the patent office. No disrespect to the people working there, but if you are a really good software engineer there are much better-paying options elsewhere; the same goes for medical researchers, for example. So we have people with limited understanding of the field of a patent trying to evaluate whether it is obvious or not.

Secondly, the patent office is not really incentivized to deny patents: it is a self-funded entity, and its 'success' and revenue are based on how many patent applications are submitted. The lower the bar for approval, the more patent applications get submitted, and thus the more money the patent office makes and the more "successful" it is. This is a problem, but if we fix the first issue I mentioned, it may be possible to reduce the impact of this second one.

So what can AI do to help here? It can help reviewers at the patent office evaluate patents for obviousness, having been trained on these fields. If every patent application had to be run through an AI engine for review, the patent office reviewer would have to justify why he or she believes something is non-obvious after the AI has looked at it and concluded that it is not. My assumption is that they would very rarely do so.

To test this out, I decided to ask Gemini to review an H.264 video patent that is still valid and has been used in litigation before, to see if it could do a decent job of reviewing the patent application and judging whether this is a true invention or not. Here is what Gemini came up with.

US Patent 7,916,782 (Samsung Electronics) is one of the most famous "battleground" patents in the history of smartphone litigation. It has been asserted against Apple, Huawei, and others.

It covers a specific part of the H.264 standard known as CAVLC (Context-Adaptive Variable Length Coding).

The Patent: US 7,916,782: Method and apparatus for video coding and decoding
The Mechanism: Efficiently coding the count of zero-valued pixels in a video block ("TotalZeros").

The Problem
In video compression, you divide the image into small 4×4 pixel blocks. After processing (transform and quantization), most of the 16 pixels in the block end up being zero. You only have a few non-zero numbers left.
Goal: You need to tell the decoder how many zeros are in the block so it can reconstruct the empty space.
Constraint: You already told the decoder how many non-zero coefficients exist (let's say you have 5 non-zeros).
The "Invention"
The patent claims a method where the encoder selects a specific lookup table (VLC table) to encode the "TotalZeros" value, and, crucially, this selection is based on the number of non-zero coefficients (TotalCoeff) that were just processed. If TotalCoeff is 1, use Table A (allows for many zeros). If TotalCoeff is 10, use Table B (allows for fewer zeros).

The "Skilled Engineer" Test
Imagine you are a video coding engineer in 2002. You are tasked with compressing the "TotalZeros" value. Here is the logical deduction chain you would inevitably follow:

21 Jan 2026 6:35pm GMT

Sebastian Wick: Best Practices for Ownership in GLib

For all the rightful criticisms that C gets, GLib does manage to alleviate at least some of it. If we can't use a better language, we should at least make use of all the tools we have in C with GLib.

This post looks at the topic of ownership, and also how it applies to libdex fibers.

Ownership

In normal C usage, it is often not obvious at all whether an object that gets returned from a function (either as a real return value or as an out-parameter) is owned by the caller or the callee:

MyThing *thing = my_thing_new ();

If thing is owned by the caller, then the caller also has to release the object thing. If it is owned by the callee, then the lifetime of the object thing has to be checked against its usage.

At this point, the documentation is usually consulted in the hope that the developer of my_thing_new documented it somehow. With gobject-introspection, this documentation is standardized, and you will usually read one of these:

The caller of the function takes ownership of the data, and is responsible for freeing it.

The returned data is owned by the instance.
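In gtk-doc comments, those two sentences are generated from the (transfer full) and (transfer none) introspection annotations. A sketch for the hypothetical MyThing API (the function names are illustrative):

```c
/**
 * my_thing_new:
 *
 * Returns: (transfer full): a new #MyThing; the caller owns it
 *   and must release it with my_thing_release()
 */
MyThing *my_thing_new (void);

/**
 * my_thing_get_name:
 * @thing: a #MyThing
 *
 * Returns: (transfer none): the name, owned by @thing; do not free
 */
const char *my_thing_get_name (MyThing *thing);
```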

If thing is owned by the caller, the caller now has to release the object or transfer ownership to another place. In normal C usage, both of those are hard to get right. For releasing the object, one of two techniques is usually employed:

  1. single exit
MyThing *thing = my_thing_new ();
gboolean c;
c = my_thing_a (thing);
if (c)
  c = my_thing_b (thing);
if (c)
  my_thing_c (thing);
my_thing_release (thing); /* release thing */
  2. goto cleanup
  MyThing *thing = my_thing_new ();
  if (!my_thing_a (thing))
    goto out;
  if (!my_thing_b (thing))
    goto out;
  my_thing_c (thing);
out:
  my_thing_release (thing); /* release thing */

Ownership Transfer

GLib provides automatic cleanup helpers (g_auto, g_autoptr, g_autofd, g_autolist). A macro associates the function to release the object with the type of the object (e.g. G_DEFINE_AUTOPTR_CLEANUP_FUNC). If they are being used, the single exit and goto cleanup approaches become unnecessary:

g_autoptr(MyThing) thing = my_thing_new ();
if (!my_thing_a (thing))
  return;
if (!my_thing_b (thing))
  return;
my_thing_c (thing);

The nice side effect of using automatic cleanup is that, for a reader of the code, the g_auto helpers become a definite mark that the variable they are applied to owns the object!

If we have a function which takes ownership of an object passed in (i.e. the called function will eventually release the resource itself), then in normal C usage this is indistinguishable from a function call which does not take ownership:

MyThing *thing = my_thing_new ();
my_thing_finish_thing (thing);

If my_thing_finish_thing takes ownership, then the code is correct, otherwise it leaks the object thing.

On the other hand, if automatic cleanup is used, there is only one correct way to handle either case.

A function call which does not take ownership is just a normal function call and the variable thing is not modified, so it keeps ownership:

g_autoptr(MyThing) thing = my_thing_new ();
my_thing_finish_thing (thing);

A function call which takes ownership on the other hand has to unset the variable thing to remove ownership from the variable and ensure the cleanup function is not called. This is done by "stealing" the object from the variable:

g_autoptr(MyThing) thing = my_thing_new ();
my_thing_finish_thing (g_steal_pointer (&thing));

By using g_steal_pointer and friends, the ownership transfer becomes obvious in the code, just like ownership of an object by a variable becomes obvious with g_autoptr.

Ownership Annotations

Now you could argue that the g_autoptr and g_steal_pointer combination without any conditional early exit is functionally exactly the same as the example with the normal C usage, and you would be right. It also needs more code and adds a tiny bit of runtime overhead.

I would still argue that it helps readers of the code immensely, which makes it an acceptable trade-off in almost all situations. As long as you haven't profiled and determined the overhead to be problematic, you should always use g_auto and g_steal_pointer!

The way I like to look at g_auto and g_steal_pointer is that they are not only a mechanism to release objects and unset variables, but also annotations of ownership and ownership transfers.

Scoping

One pattern that is still somewhat common in older code using GLib is the declaration of all variables at the top of a function:

static void
foobar (void)
{
  MyThing *thing = NULL;
  size_t i;

  for (i = 0; i < len; i++) {
    g_clear_pointer (&thing, my_thing_release);
    thing = my_thing_new (i);
    my_thing_bar (thing);
  }

  g_clear_pointer (&thing, my_thing_release);
}

We can still avoid mixing declarations and code, but we don't have to do it at the granularity of a function; we can do it at the granularity of natural scopes:

static void
foobar (void)
{
  for (size_t i = 0; i < len; i++) {
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new (i);
    my_thing_bar (thing);
  }
}

Similarly, we can introduce our own scopes, which can be used to limit how long variables, and thus objects, are alive:

static void
foobar (void)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* we only need `thing` to get `other` */
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    other = my_thing_bar (thing);
  }

  my_other_thing_bar (other);
}

Fibers

When somewhat complex asynchronous patterns are required in a piece of GLib software, it becomes extremely advantageous to use libdex and the fibers it provides. They allow writing what looks like synchronous code, which suspends at await points:

g_autoptr(MyThing) thing = NULL;

thing = dex_await_object (my_thing_new_future (), NULL);

If this piece of code doesn't make much sense to you, I suggest reading the libdex Additional Documentation.

Unfortunately, the await points can also be a bit of a pitfall: a call to dex_await is semantically like calling g_main_loop_run on the thread-default main context. If you use an object which is not owned across an await point, the lifetime of that object becomes critical. Often the lifetime is bound to another object which you might not control in that particular function. In that case, the pointer can point to an already released object when dex_await returns:

static DexFuture *
foobar (gpointer user_data)
{
  /* foo is owned by the context, so we do not use an autoptr */
  MyFoo *foo = context_get_foo ();
  g_autoptr(MyOtherThing) other = NULL;
  g_autoptr(MyThing) thing = NULL;

  thing = my_thing_new ();
  /* side effect of running g_main_loop_run */
  other = dex_await_object (my_thing_bar (thing, foo), NULL);
  if (!other)
    return dex_future_new_false ();

  /* foo here is not owned, and depending on the lifetime
   * (context might recreate foo in some circumstances),
   * foo might point to an already released object
   */
  dex_await (my_other_thing_foo_bar (other, foo), NULL);
  return dex_future_new_true ();
}

If we assume that context_get_foo returns a different object when the main loop runs, the code above will not work.

The fix is simple: own the objects that are being used across await points, or re-acquire the object afterwards. The correct choice depends on what semantics are required.

We can also combine this with improved scoping to keep objects alive only as long as required. Unnecessarily keeping objects alive across await points can keep resource usage high and might have unintended consequences.

static DexFuture *
foobar (gpointer user_data)
{
  /* we now own foo */
  g_autoptr(MyFoo) foo = g_object_ref (context_get_foo ());
  g_autoptr(MyOtherThing) other = NULL;

  {
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  /* we own foo, so this always points to a valid object */
  dex_await (my_other_thing_bar (other, foo), NULL);
  return dex_future_new_true ();
}

Alternatively, if keeping foo alive is not wanted, we can re-acquire it in each scope where it is needed:

static DexFuture *
foobar (gpointer user_data)
{
  /* we now own foo */
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* We do not own foo, but we only use it before an
     * await point.
     * The scope ensures it is not being used afterwards.
     */
    MyFoo *foo = context_get_foo ();
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  {
    MyFoo *foo = context_get_foo ();

    dex_await (my_other_thing_bar (other, foo), NULL);
  }

  return dex_future_new_true ();
}

One of the scenarios where re-acquiring an object is necessary is worker fibers which operate continuously until the object gets disposed. If such a fiber owns the object (i.e. holds a reference to it), the object will never get disposed, because the fiber only finishes when the reference it holds gets released, which never happens as long as it holds the reference. The naive code below also suspiciously lacks any exit condition.

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyThing) self = g_object_ref (MY_THING (user_data));

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      /* next_bytes_future is a hypothetical source of bytes */
      bytes = dex_await_boxed (next_bytes_future (), NULL);

      my_thing_write_bytes (self, bytes);
    }
}

So instead of owning the object, we need a way to re-acquire it. A weak-ref is perfect for this.

static DexFuture *
foobar (gpointer user_data)
{
  /* g_weak_ref_init in the caller somewhere */
  GWeakRef *self_wr = user_data;

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      /* next_bytes_future is a hypothetical source of bytes */
      bytes = dex_await_boxed (next_bytes_future (), NULL);

      {
        g_autoptr(MyThing) self = g_weak_ref_get (self_wr);
        if (!self)
          return dex_future_new_true ();

        my_thing_write_bytes (self, bytes);
      }
    }
}

Conclusion

21 Jan 2026 3:31pm GMT

Sam Thursfield: Status update, 21st January 2026

Happy new year, ye bunch of good folks who follow my blog.

I ain't got a huge bag of stuff to announce. It's raining like January. I've been pretty busy with work amongst other things, doing stuff with operating systems but mostly internal work, and mostly management and planning at that.

We did make an actual OS last year though. Here's a nice blog post from Endless and a video interview about some of the work and why it's cool: "Endless OS: A Conversation About What's Changing and Why It Matters".

I tried a new audio setup in advance of that video, using a pro interface and mic I had lying around. It didn't work though and we recorded it through the laptop mic. Oh well.

Later I learned that, by default, a 16-channel interface will be treated by GNOME as a 7.1 surround setup or something mental. You can use the PipeWire loopback module to define a single mono source on the channel that you want to use, and then audio Just Works again. PipeWire has pretty good documentation now too!

What else happened? Jordan and Bart finally migrated the GNOME openQA server off the ad-hoc VM setup that it ran on, and brought it into OpenShift, as the Lord intended. Hopefully you didn't even notice. I updated the relevant wiki page.

The Linux QA monthly calls are still going, by the way. I handed over the reins to another participant, but I'm still going to the calls. The most active attendees are the Debian folk, who are heroically running an Outreachy internship right now to improve desktop testing in Debian. You can read a bit about it here: "Debian welcomes Outreachy interns for December 2025-March 2026 round".

And it looks like Localsearch is going to do more comprehensive indexing in GNOME 50. Carlos announced this back in October 2025 ("A more comprehensive LocalSearch index for GNOME 50") aiming to get some advance testing on this, and so far the feedback seems to be good.

That's it from me I think. Have a good year!


21 Jan 2026 1:00pm GMT

20 Jan 2026


Ignacy Kuchciński: Digital Wellbeing Contract: Conclusion

A lot of progress has been made since my last Digital Wellbeing update two months ago. That post covered the initial screen time limits feature, which was implemented in the Parental Controls app, Settings and GNOME Shell. There's a screen recording in the post, created with the help of a custom GNOME OS image, in case you're interested.

Finishing Screen Time Limits

After implementing the major framework for the rest of the code in GNOME Shell, we added the mechanism in the lock screen to prevent children from unlocking when the screen time limit is up. Parents are now also able to extend the session limit temporarily, so that the child can use the computer for the rest of the day.

Parental Controls Shield

Screen time limits can be set as either a daily limit or a bedtime. With the work that has recently landed, when the screen time limit has been exceeded, the session locks and the authentication action is hidden on the lock screen. Instead, a message is displayed explaining that the current session is limited and the child cannot log in. An "Ignore" button is presented to allow the parents to temporarily lift the restrictions when needed.

Parental Controls shield on the lock screen, preventing the children from unlocking

Extending Screen Time

Clicking the "Ignore" button prompts for authentication from a user with administrative privileges. This allows parents to temporarily lift the screen time limit, so that the children may log in as normal for the rest of the day.

Authentication dialog allowing the parents to temporarily override the Screen Time restrictions

Showcase

Continuing the screen recording of the Shell functionality from the previous update, I've recorded the parental controls shield, and showed the screen time extension functionality:

GNOME OS Image

You can also try the feature out for yourself with the very same GNOME OS live image I used in the recording. You can either run it in GNOME Boxes, or try it on your hardware if you know what you're doing 🙂

Conclusion

Now that the full Screen Time Limits functionality has been merged in GNOME Shell, this concludes my part in the Digital Wellbeing Contract. Here's the summary of the work:

In the initial plan, we also covered web filtering, and the foundation of the feature has been introduced as well. However, integrating the functionality in the Parental Controls application has been postponed to a future endeavour.

I'd like to thank the GNOME Foundation for giving me this opportunity, and Endless for sponsoring the work. Kudos also to my colleagues Philip Withnall and Sam Hewitt, it's been great to work with you and I've learned a lot (like the importance of wearing Christmas sweaters in work meetings!), and to Florian Müllner, Matthijs Velsink and Felipe Borges for very helpful reviews. I also want to thank Allan Day for organizing the work hours and meetings, and for helping with my blog posts 🙂 Until next project!

20 Jan 2026 3:00am GMT

17 Jan 2026


Sriram Ramkrishna: GNOME OS Hackfest During FOSDEM week

For those of you attending FOSDEM: we're doing a GNOME OS hackfest, and we invite anyone who might be interested in our experiments with concepts such as the 'anti-distro', i.e. an OS with no distro packaging that integrates GNOME desktop patterns directly.

The hackfest runs from January 28th to January 29th. If you're interested, feel free to respond in the comments. I don't have an exact location yet.

We'll likely have some kind of BigBlueButton set up so if you're not available to come in-person you can join us remotely.

Agenda and attendees are linked here.

Capacity is likely limited, so acceptance will be "first come, first served".

See you there!

17 Jan 2026 11:17pm GMT

16 Jan 2026


Allan Day: GNOME Foundation Update, 2026-01-16

Welcome to my regular weekly update on what's been happening at the GNOME Foundation. As usual, this post just covers highlights, and there are plenty of smaller and in progress items that haven't been included.

Board meeting

The Board of Directors had a regular meeting this week. Topics on the agenda included:

According to our new schedule, the next meeting will be on 9th February.

New finance platform

As mentioned last week, we started using a new platform for payments processing at the beginning of the year. Overall, the new system brings a lot of great features which will make our processes more reliable and integrated. However, as we adopt the tool we are having to deal with some ongoing setup tasks, which means it is taking additional time in the short term.

GUADEC 2026 planning

Kristi has been extremely busy with GUADEC 2026 planning in recent weeks. She has been working closely with the local team to finalise arrangements for the venue and accommodation, as well as preparing the call for papers and sponsorship brochure.

If you or your organisation are interested in sponsoring this fantastic event, just reach out to me directly, or email guadec@gnome.org. We'd love to hear from you.

FOSDEM preparation

FOSDEM 2026 is happening over the weekend of 31st January and 1st February, and preparations for the event continue to be a focus. Maria has been organising the booth, and I have been arranging the details for the Advisory Board meeting which will happen on 30 January. Together we have also been hunting down a venue for a GNOME social event on the Saturday night.

Digital Wellbeing

This week the final two merge requests landed for the bedtime and screen time parental controls features. These features were implemented as part of our Digital Wellbeing program, and it's great to see them come together in advance of the GNOME 50 release. More details can be found in gnome-shell!3980 and gnome-shell!3999.

Many thanks to Ignacy for seeing this work through to completion!

Flathub

Among other things, Bart recently wrapped up a chunk of work on Flathub's build and publishing infrastructure, which he's summarised in a blog post. It's great to see all the improvements that have been made recently.

That's it for this week. Thanks for reading, and have a great weekend!

16 Jan 2026 5:48pm GMT

Gedit Technology blog: gedit 49.0 released

gedit 49.0 has been released! Here are the highlights since version 48.0, which dates back to September 2024. (Some sections are a bit technical.)

File loading and saving enhancements

A lot of work went into this area. It's mostly behind-the-scenes changes in what was a lot of dusty code. It's not entirely finished, but there are already user-visible enhancements:

Improved preferences

gedit screenshot - reset all preferences

gedit screenshot - spell-checker preferences

There is now a "Reset All..." button in the Preferences dialog. And it is now possible to configure the default language used by the spell-checker.

Python plugins removal

Due to an external factor, plugins implemented in Python are no longer supported.

For some time, a previous version of gedit was packaged on Flathub in a way that still enabled Python plugins, but that is no longer the case.

Even though the problem is fixable, having some plugins in Python meant dealing with a multi-language project, which is much harder to maintain for a single individual. So for now it's preferable to keep only the C language.

So the bad news is that Python plugin support has not been re-enabled in this version, not even for third-party plugins.

More details.

Summary of changes for plugins

The following plugins have been removed:

Only Python plugins have been removed; the C plugins have been kept. The Code Comment plugin, which was written in Python, has been rewritten in C, so it has not disappeared. And it is planned and desired to bring back some of the removed plugins.

Summary of other news

Wrapping-up statistics for 2025

The total number of commits in gedit and gedit-related git repositories in 2025 is: 884. More precisely:

138     enter-tex
310     gedit
21      gedit-plugins
10      gspell
4       libgedit-amtk
41      libgedit-gfls
290     libgedit-gtksourceview
70      libgedit-tepl

It counts all contributions, translation updates included.

The list contains two apps, gedit and Enter TeX. The rest are shared libraries (re-usable code available to create other text editors).

If you compare with the numbers for 2024, you'll see that there are fewer commits; the only module with more commits is libgedit-gtksourceview. But 2025 was a good year nevertheless!

For future versions: superset of the subset

With Python plugins removed, the new gedit version is, roughly in terms of its feature list, a subset of the previous version. In the future, we plan to build a superset of that subset: to bring in new features and try hard not to remove any more functionality.

In fact, we have reached a point where we are no longer interested in removing any more features from gedit. So the good news is that gedit should be incrementally improved from now on, without major regressions. We really hope there won't be any new bad surprises due to external factors!

Side note: this "superset of the subset" resembles the evolution of C++, but in reverse order. Modern C++ will be a subset of the superset, to get a language that is in practice (though not in theory) as safe as Rust (it works with compiler flags to disable the unsafe parts).

Onward to 2026

Since some plugins have been removed, gedit is now a less advanced text editor. It has become a little less suitable for heavy programming workloads, but for that there are lots of alternatives.

Instead, gedit could become a text editor of choice for newcomers to the computer science field (students and self-learners). It can be a great tool for markup languages too. It can be your daily companion for quite a while, until your needs evolve toward something more complete at your workplace. Or it may be that you prefer its simplicity and its stay-out-of-the-way default setup, plus the fact that it launches quickly. In short, there are a lot of reasons to still love gedit ❤️!

If you have any feedback, even on a small thing, I would like to hear from you :)! The best places are GNOME Discourse, or GitLab for more actionable tasks (see the Getting in Touch section).

16 Jan 2026 10:00am GMT

This Week in GNOME: #232 Upcoming Deadlines

Update on what happened across the GNOME project in the week from January 09 to January 16.

GNOME Releases

Sophie (she/her) reports

The API, UI, and feature freeze for GNOME 50 is closing in. The deadline is in about two weeks from now on Jan 31 at 23:59 UTC. After that, the focus will be on bug fixes, polishing, and translations for GNOME 50.

Sophie (she/her) announces

GNOME 50 alpha has been released. One of the biggest changes is the removal of X11 support from several components like GNOME Shell, while the login screen can still launch non-X11 sessions of other desktop environments. More information is available in the announcement post.

Third Party Projects

Ronnie Nissan reports

Embellish v0.6.0 was released this week. I was finally able to make the app translatable, which was not easy because I didn't know how to translate GKeyFiles. I also added Arabic translations.

I also released v0.5.2 to update to the latest GNOME runtime and switch to the new libadwaita shortcuts dialog.

You can get Embellish from flathub

Nathan Perlman announces

v1.1.1 of Rewaita was released this week!

To recap, Rewaita allows you to easily modify Adwaita, like changing the color scheme to match Tokyonight or Gruvbox, or making the window controls look more like macOS.

A lot has changed over the last month, so this post covers v1.0.9 -> v1.1.1.

What's new?

  • Patched up most remaining holes in GNOME Shell integration, especially with the overview and dock
  • Extra customization options: transparency, window borders, and sharp corners
  • Major performance improvements
  • Added two new light themes: Kanagawa-Paper, and Thorn
  • Fixed issue with Tokyonight Storm
  • Now allows palette swapping/tinting your wallpapers
  • Added Vietnamese translations, thanks to @hthienloc
  • UI changes + uses Fortune for text snippets
  • Updated adwgtk3 to v6.4
  • New Zypper package for OpenSUSE users
  • Won't autostart when running in background is disabled
  • 'Get Involved' page now loads correctly

I hope you all enjoy this release, and I look forward to seeing your creations on r/gnome and r/unixporn!

Ronnie Nissan announces

Concessio v0.2.0 and v0.2.1 were released this week. The updates include:

  • Switching to Blueprint for UI definitions.
  • Update to the latest GNOME runtime.
  • Use the new libadwaita shortcuts dialog.
  • Make the application accessible to screen readers.

Concessio can be downloaded from flathub

Turtle

Manage git repositories in Nautilus.

Philipp reports

Turtle 0.14 released!

There has been a massive visual improvement in how the commit log graph looks. Instead of adding branches at the top when "Show All Branches" is enabled, it now weaves the branches into the graph directly on top of their parent commits. This results in a much narrower graph; see the screenshot below showing the same git repo before and after the change.

It is now also possible to configure the entries in the file manager context menu.

See the release for more details.

Flare

Chat with your friends on Signal.

schmiddi announces

Version 0.18.0 of Flare has been released. Besides allowing Flare to be used as a primary device, this release contains a critical hotfix: since Tuesday of this week (2026-01-13), some messages were not being received properly, and the problem got worse on Wednesday. I urge everyone to upgrade, and to check in one of the official Signal applications that you have not missed any critical messages.

GNOME Foundation

Allan Day says

Another weekly GNOME Foundation update is available this week, covering highlights from the past 7 days. The update includes details from this week's board meeting, FOSDEM preparations, GUADEC planning, and Flathub infrastructure development.

Digital Wellbeing Project

Ignacy Kuchciński (ignapk) says

As part of the Digital Wellbeing project, sponsored by the GNOME Foundation, there is an initiative to redesign the Parental Controls to bring it on par with modern GNOME apps and implement new features such as Screen Time monitoring, Bedtime Schedule and Web Filtering.

Recently, the changes preventing children from unlocking after their bedtime and allowing parents to extend their screen time have been merged in GNOME Shell (!3980, !3999).

These were the last remaining bits for the parental controls session limits integration in Shell 🎉

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

16 Jan 2026 12:00am GMT

15 Jan 2026

feedPlanet GNOME

Ignacio Casal Quinteiro: Mecalin

Many years ago, when I was a kid, I took typing lessons where they introduced me to a program called Mecawin. With it, I learned how to type, and it became a program I always appreciated, not because it was fancy, but because it showed step by step how to work with a keyboard.

Now the circle of life is coming back: my kid will turn 10 this year, so I started searching for a good typing tutor for Linux. I installed and tried all of them, but didn't like any. I also tried a couple of applications on macOS; some were okay-ish, but they didn't work properly with Spanish keyboards. At this point, I decided to build something myself. Initially, I hacked on Keypunch, which is a very nice application, but I didn't like the UI I came up with by modifying it. So in the end, I decided to write my own. Or better yet, to let Kiro write an application for me.

Mecalin is meant to be a simple application. The main purpose is teaching people how to type, and the Lessons view is what I'll be focusing on most during development. Since I don't have much time these days for new projects, I decided to take this opportunity to use Kiro to do most of the development for me. And to be honest, it did a pretty good job. Sure, there are things that could be better, but I definitely wouldn't have finished it in this short a time otherwise.

So if you are interested, give it a try: go to Flathub and install it: https://flathub.org/apps/io.github.nacho.mecalin

In this application, you'll have several lessons that guide you step by step through the different rows of the keyboard, showing you what to type and how to type it.

This is an example of the lesson view.

You also have games.

The falling keys game: keys fall from top to bottom, and if one reaches the bottom of the window, you lose. This game can clearly be improved, and if anybody wants to enhance it, feel free to send a PR.

The scrolling lanes game: you have 4 rows where text moves from right to left. You need to type the words before they reach the leftmost side of the window, otherwise you lose.

If you want to add support for your language, there are two JSON files you'll need to add:

  1. The keyboard layout: https://github.com/nacho/mecalin/tree/main/data/keyboard_layouts
  2. The lessons: https://github.com/nacho/mecalin/tree/main/data/lessons

Note that the Spanish lesson is the source of truth; the English one is just a translation done by Kiro.

If you have any questions, feel free to contact me.

15 Jan 2026 7:13pm GMT

14 Jan 2026

feedPlanet GNOME

Asman Malika: Think About Your Audience

When I started writing this blog, I didn't fully understand what "think about your audience" really meant. At first, it sounded like advice meant for marketers or professional writers. But over time, I've realized it's one of the most important lessons I'm learning, not just for writing, but for building software and contributing to open source.

Who I'm Writing (and Building) For

When I sit down to write, I think about a few people.

I think about aspiring developers from non-traditional backgrounds, people who didn't follow a straight path into tech, who might be self-taught, switching careers, or learning in community-driven programs. I think about people who feel like they don't quite belong in tech yet, and are looking for proof that they do.

I also think about my past self from a few months ago. Back then, everything felt overwhelming: the tools, the terminology, the imposter syndrome. I remember wishing I could read honest stories from people who were still in the process, not just those who had already "made it."

And finally, I think about the open-source community I'm now part of: contributors, maintainers, and users who rely on the software we build.

Why My Audience Matters to My Work

Thinking about my audience has changed how I approach my work on Papers.

Papers isn't just a codebase, it's a tool used by researchers, students, and academics to manage references and organize their work. When I think about those users, I stop seeing bugs as abstract issues and start seeing them as real problems that affect real people's workflows.

The same applies to documentation. Remembering how confusing things felt when I was a beginner pushes me to write clearer commit messages, better explanations, and more accessible documentation. I'm no longer writing just to "get the task done". I'm writing so that someone else, maybe a first-time contributor, can understand and build on my work.

Even this blog is shaped by that mindset. After my first post, someone commented and shared how it resonated with them. That moment reminded me that words can matter just as much as code.

What My Audience Needs From Me

I've learned that people don't just want success stories. They want honesty.

They want to hear about the struggle, the confusion, and the small wins in between. They want proof that non-traditional paths into tech are valid. They want practical lessons they can apply, not just motivational quotes.

Most of all, they want representation and reassurance. Seeing someone who looks like them, or comes from a similar background, navigating open source and learning in public can make the journey feel possible.

That's a responsibility I take seriously.

How I've Adjusted Along the Way

Because I'm thinking about my audience, I've changed how I share my journey.

I explain things more clearly. I reflect more deeply on what I'm learning instead of just listing achievements. I'm more intentional about connecting my experiences (debugging a feature, reading unfamiliar code, asking questions in the GNOME community) to lessons others can take away.

Understanding the Papers user base has also influenced how I approach features and fixes. Understanding my blog audience has influenced how I communicate. In both cases, empathy plays a huge role.

Moving Forward

Thinking about my audience has taught me that good software and good writing have something in common: they're built with people in mind.

As I continue this internship and this blog, I want to keep building tools that are accessible, contributing in ways that lower barriers, and sharing my journey honestly. If even one person reads this and feels more capable, or more encouraged to try, then it's worth it.

That's who I'm writing for. And that's who I'm building for.

14 Jan 2026 12:07pm GMT

Flathub Blog: What's new in Vorarbeiter

It is almost a year since the switch to Vorarbeiter for building and publishing apps. We've made several improvements since then, and it's time to brag about them.

RunsOn

In the initial announcement, I mentioned we were using RunsOn, a just-in-time runner provisioning system, to build large apps such as Chromium. Since then, we have fully switched to RunsOn for all builds. The free GitHub runners available to open source projects are heavily overloaded, and there are limits on how many concurrent builds can run at a time. With RunsOn, we can request an arbitrary number of threads and arbitrary amounts of memory and disk space, for less than paid GitHub runners would cost.

We also rely more on spot instances, which are even cheaper than the usual on-demand machines. The downside is that jobs sometimes get interrupted. To avoid spending too much time on retry ping-pong, builds retried with the special "bot, retry" command use on-demand instances from the get-go. The same catch applies to large builds, which are unlikely to finish before spot instances are reclaimed.

The cost breakdown since May 2025 is as follows:

Cost breakdown

Once again, we are not actually paying for anything thanks to the AWS credits for open source projects program. Thank you RunsOn team and AWS for making this possible!

Caching

Vorarbeiter now supports caching downloads and ccache files between builds. Everything is an OCI image if you are feeling brave enough, and so we are storing the per-app cache with ORAS in GitHub Container Registry.

This is especially useful for cosmetic rebuilds and minor version bumps, where most of the source code remains the same. Your mileage may vary for anything more complex.

End-of-life without rebuilding

One of the Buildbot limitations was that it was difficult to handle pull requests marking apps as end-of-life without rebuilding them. Flat-manager itself has exposed an API call for this since 2019, but we could not really use it, as apps had to be in a buildable state just to deprecate them.

Vorarbeiter will now detect that a PR modifies only the end-of-life keys in the flathub.json file, skip test and regular builds, and directly use the flat-manager API to republish the app with the EOL flag set post-merge.

Web UI

GitHub's UI isn't really built for a centralized repository building other repositories. My love-hate relationship with Buildbot made me want to have a similar dashboard for Vorarbeiter.

The new web UI uses PicoCSS and HTMX to provide a tidy table of recent builds. It is unlikely to be particularly interesting to end users, but kinkshaming is not nice, okay? I like to know what's being built, and now you can too, here.

Reproducible builds

We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool that does the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub.

While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts. The current status is on the reproducible builds page.

Failures are not currently acted on. When we collect more results, we may start to surface them to app maintainers for investigation. We also don't test direct uploads at the moment.

14 Jan 2026 12:00am GMT

13 Jan 2026

feedPlanet GNOME

Jussi Pakkanen: How to get banned from Facebook in one simple step

I, too, have (or as you can probably guess from the title of this post, had) a Facebook account. I only ever used it for two purposes.

  1. Finding out what friends I rarely see are doing
  2. Getting invites to events

Facebook has over the years made usage #1 pretty much impossible. My feed contains approximately 1% posts by my friends and 99% ads for image meme "humor" groups whose expected amusement value seems to be approximately the same as punching yourself in the groin.

Still, every now and then I get a glimpse of a post by the people I actively chose to follow. Specifically, a friend was pondering the behaviour of people who post happy birthday messages on the profiles of deceased people. Like, if you have not kept up with someone enough to know that they are dead, why would you feel the need to post congratulations on their profile page?

I wrote a reply, which is replicated below. It is not word-for-word accurate, as it is a translation and I no longer have access to the original post.

Some of these might come via recommendations by AI assistants. Maybe in the future AI bots from people who themselves are dead carry on posting birthday congratulations on profiles of other dead people. A sort of a social media for the deceased, if you will.

Roughly one minute later my account was suspended. Let that be a lesson to you all. Do not mention the Dead Internet Theory, for doing so threatens Facebook's ad revenue and is thus taboo. (A more probable explanation is that using the word "death" is prohibited by itself regardless of context, leading to idiotic phrasing in the style of "Person X was born on [date] and d!ed [other date]" that you see all over IG, FB and YT nowadays.)

Apparently to reactivate the account I would need to prove that "[I am] a human being". That might be a tall order given that there are days when I doubt that myself.

The reactivation service is designed in the usual deceptive way, where it does not tell you all the things you need to do in advance. Instead it bounces you from one task to another in the hope that the sunk cost fallacy makes you submit to ever more egregious demands. I got out when they demanded a full video selfie where I look around in different directions. You can make up your own theories as to why Meta, a known advocate for generative AI and all that garbage, would want high-resolution scans of people's faces. I mean, surely they would not use them for AI training without paying a single cent for usage rights to the original model. Right? Right?

The suspension email ends with this ultimatum.

If you think we suspended your account by mistake, you have 180 days to appeal our decision. If you miss this deadline your account will be permanently disabled.

Well, Mr. Zuckerberg, my response is the following:

Close it! Delete it! Burn it down to the ground! I'd do it myself this very moment, but I can't delete the account without reactivating it first.

Let it also be noted that this post is a much better way of proving that I am a human being than some video selfie thing that could be trivially faked with genAI.

13 Jan 2026 6:06pm GMT

Arun Raghavan: Accessibility Update: Enabling Mono Audio

If you maintain a Linux audio settings component, we now have a way to globally enable/disable mono audio for users who do not want stereo separation of their audio (for example, due to hearing loss in one ear). Read on for the details on how to do this.

Background

Most systems support stereo audio via their default speaker output or 3.5mm analog connector. These devices are exposed as stereo devices to applications, and applications typically render stereo content to these devices.

Visual media use stereo for directional cues, and music is usually produced using stereo effects to separate instruments, or provide a specific experience.

It is not uncommon for modern systems to provide a "mono audio" option that allows users to have all stereo content mixed together and played to both output channels. The most common scenario is hearing loss in one ear.

PulseAudio and PipeWire have supported forcing mono audio on the system via configuration files for a while now. However, this is not easy to expose via user interfaces, and unfortunately remains a power-user feature.

Implementation

Recently, Julian Bouzas implemented a WirePlumber setting to force all hardware audio outputs to mono (MR 721 and 769). This lets the system run in stereo mode, but configures the audioadapter around the device node to mix the final audio down to mono.

This can be enabled using the WirePlumber settings via API, or using the command line with:

wpctl settings node.features.audio.mono true

The WirePlumber settings API allows you to query the current value, as well as clear the setting to restore the default state.

I have also added (MR 2646 and 2655) a mechanism to set this using the PulseAudio API (via the messaging system). Assuming you are using pipewire-pulse, PipeWire's PulseAudio emulation daemon, you can use pa_context_send_message_to_object() or the command line:

pactl send-message /core pipewire-pulse:force-mono-output true

This API allows for a few things:

  • Query existence of the feature: when an empty message body is sent, a null return value means the feature is not supported
  • Query current value: when an empty message body is sent, the current value (true or false) is returned if the feature is supported
  • Setting a value: the requested setting (true or false) can be sent as the message body
  • Clearing the current value: sending a message body of null clears the current setting and restores the default

Looking ahead

This feature will become available in the next release of PipeWire (both 1.4.10 and 1.6.0).

I will be adding a toggle in Pavucontrol to expose this, and I hope that GNOME, KDE and other desktop environments will be able to pick this up before long.

Hit me up if you have any questions!

13 Jan 2026 12:09am GMT

09 Jan 2026

feedPlanet GNOME

Allan Day: GNOME Foundation Update, 2026-01-09

Welcome to the first GNOME Foundation update of 2026! I hope that the new year finds you well. The following is a brief summary of what's been happening in the Foundation this week.

Trademark registration renewals

This week we received news that GNOME's trademark registration renewals have been completed. This is an example of the routine legal functions that the GNOME Foundation handles for the GNOME Project, and is part of what I think of as our core operations. The registration lasts for 10 years, so the next renewal is due in 2036. Many thanks to our trademark lawyers for handling this for us!

Microsoft developer account

Another slow registration process that completed this week was getting verified status on our Microsoft Developer Account. This was primarily being handled by Andy Holmes, with a bit of assistance on the Foundation side, so many thanks to him. The verification is required to allow those with Microsoft 365 organizational accounts to use GNOME Online Accounts.

Travel Committee

The Travel Committee had its first meeting of 2026 this week, where it discussed travel sponsorships for last month's GNOME.Asia conference. Sadly, a number of people who were planning to travel to the conference had their visas denied. The committee spent some time assessing what happened with these visa applications, and discussed how to support visa applicants better in future. Thanks in particular to Maria for leading that conversation.

GNOME.Asia Report

Also related to GNOME.Asia: Kristi has posted a very nice report on the event, including some very nice pictures. It looks like it was a great event! Do make sure that you check out the post.

Audit preparation

As I mentioned in previous posts, audit preparation is going to be a major focus for the GNOME Foundation over the next three months. We are also finishing off the final details of our 2024-25 accounts. These two factors resulted in a lot of activity around the books this week. In addition to a lot of back and forth with our bookkeeper and finance advisor, we also had our regular monthly bookkeeping call yesterday, and will be having an extra meeting in the next few weeks to make more progress.

New payments platform rollout

With it being the first week of the month, we had a batch of invoices to process and pay this week. For this we made the switch to a new payments processing system, which is going to be used for reimbursement and invoice tracking going forward. So far the system is working really well, and provides us with a more robust, compliant, and integrated process than what we had previously.

Infrastructure

Over the holiday, Bart cleared up the GNOME infrastructure issues backlog. This led him to write a service which will allow us to respond to GitLab abuse reports in a better fashion. On the Flathub side, he completed some work on build reproducibility, and finished adding the ability to re-publish apps that were previously marked as end of life.

FOSDEM

FOSDEM 2026 preparations continued this week. We will be having an Advisory Board meeting, for which attendance is looking good; so good, in fact, that we are currently in the process of booking a bigger room. We are also in the process of securing a venue for a GNOME social event on the Saturday night.

GNOME Foundation donation receipts

Bart added a new feature to donate.gnome.org this week, to allow donors to generate a report on their donations over the last calendar year. This is intended to provide US taxpayers with the documentation necessary to offset their donations against their taxes. If you are a donor, you can generate a receipt for 2025 at donate.gnome.org/help.

That's it for this week's update! Thanks for reading, and have a great weekend.

09 Jan 2026 3:56pm GMT

Jussi Pakkanen: AI and money

If you ask people why they are using AI (or want other people to use it) you get a ton of different answers. Typically none of them contain the real reason, which is that using AI is dirt cheap. Between paying a fair amount to get something done and paying very little to give off an impression that the work has been done, the latter tends to win.

The reason AI is so cheap is that it is being paid for by investors. And the one thing we know for certain about those kinds of people is that they expect to get their money back. Multiple times over. This might get done by selling the system to a bigger fool before it collapses, but eventually someone will have to earn that money back from actual customers (or from government bailouts, i.e. taxpayers).

I'm not an economist and took a grand total of one economics class at university, most of which I have forgotten. Still, using just that knowledge, we can get a rough estimate of the money flows involved. For simplicity, let's bundle all AI companies into a single entity and assume a business model based on flat monthly fees.

The total investment

A number that has been floated around is that AI companies have invested approximately one trillion (one thousand billion or 1e12) dollars. Let's use that as the base investment we want to recover.

Number of customers

Sticking with round figures, let's assume that AI usage becomes ubiquitous and that there are one billion monthly subscribers. For comparison the estimated number of current Netflix subscribers is 300 million.

Income and expenses

This one is really hard to estimate. What seems to be the case is that current monthly fees are not enough to even pay back the electricity costs of providing the service. But let's again be generous and assume that some sort of efficiency breakthrough happens in the future, and that the monthly fee is $20 with expenses being $10. This means a $10 profit per user per month.

We ignore one-off costs such as buying several data centers' worth of GPUs every few years to replace the old ones.

The simple computation

With these figures you get $10 billion per month or $120 billion per year. Thus paying off the investment would take a bit more than 8 years. I don't personally know any venture capitalists, but based on random guessing this might fall in the "takes too long, but just about tolerable" level of delay.

So all good then?

Not so fast!

One thing to keep in mind when doing investment payback calculations is the time value of money. Money you get in "the future" is not as valuable as money you have right now. Thus we need to discount them to current value.

Interest rate

I have no idea what a reasonable discount rate for this would be, so let's pick a round number: 5%.

The "real-er" numbers

At this point the computations become complex enough that you need to break out the big guns. Yes, spreadsheets.

Here we see that it actually takes 12 years to earn back the investment. Doubling the investment to two trillion would take 36 years. That is a fair bit of time for someone else to create a different system that performs maybe 70% as well but which costs a fraction of the old systems to get running and operate. By which time they can drive the price so low that established players can't even earn their operating expenses let alone pay back the original investment.
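The spreadsheet math can be sketched in a few lines. This is a rough illustration using the post's round numbers ($1,000B invested, $120B profit per year, 5% discount rate), not the author's actual spreadsheet; it assumes profits land at the end of each year, and different timing conventions (e.g. monthly discounting) can shift the result by a year or so.

```python
# Rough payback-period sketch using the post's round numbers (all hypothetical):
#   investment: $1000B ($1 trillion)
#   profit:     $120B/year ($10 per user per month x 1 billion users)
def payback_years(investment_b, annual_profit_b, discount_rate):
    """Whole years until cumulative discounted profit covers the investment."""
    recovered = 0.0
    year = 0
    while recovered < investment_b:
        year += 1
        # Profit earned during `year`, discounted back to present value
        # (end-of-year convention).
        recovered += annual_profit_b / (1 + discount_rate) ** year
        if year > 200:
            return None  # discounted profits never sum to the investment
    return year

print(payback_years(1000, 120, 0.0))   # 9  -- the "bit more than 8 years", rounded up
print(payback_years(1000, 120, 0.05))  # 12 -- matches the discounted figure above
```

Note that because money recovered later is discounted more heavily, doubling the investment more than doubles the payback period, which is why the two-trillion case blows up to several decades.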

Exercises for the reader

  • This computation assumes the system to have one billion subscribers from day one. How much longer does it take to recuperate the investment if it takes 5 years to reach that many subscribers? What about 10 years?
  • How long is the payback period if you have a mere 500 million paid subscribers?
  • Your boss is concerned about the long payback period and wants to shorten it by increasing the monthly fee. Estimate how many people would stop using the service and its effect on the payback time if the fee is raised from $20 to $50. How about $100? Or $1000?
  • What happens when the ad revenue you can obtain by dumping tons of AI slop on the Internet falls below the cost of producing said slop?

09 Jan 2026 1:56pm GMT