12 Jun 2025

Planet GNOME

Lennart Poettering: ASG! 2025 CfP Closes Tomorrow!

The All Systems Go! 2025 Call for Participation Closes Tomorrow!

The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We'd like to invite you to quickly submit your proposals to the CFP submission site for consideration!

12 Jun 2025 12:00am GMT

11 Jun 2025


Andy Wingo: whippet in guile hacklog: evacuation

Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile's memory manager.

So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.

Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds one that is big enough. This wastes free memory. Fragmented memory is not gone (it is still available for allocation!), but it won't be allocatable until after the current cycle, when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.
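To make the failure mode concrete, here is a toy model of bump-pointer allocation into holes (my illustration in plain C, not Whippet's actual code or data structures):

#include <stddef.h>
#include <stdint.h>

/* Each hole is a free span left behind by the previous collection.
   Allocation bumps a pointer within the current hole and skips to the
   next hole when the request does not fit; the skipped-over tail is
   fragmentation, stranded until the next cycle rediscovers it. */
typedef struct {
  uintptr_t alloc;   /* bump pointer within this hole */
  uintptr_t limit;   /* end of this hole */
} Hole;

static void *
allocate (Hole *holes, size_t n_holes, size_t *cursor, size_t bytes)
{
  while (*cursor < n_holes) {
    Hole *h = &holes[*cursor];
    if (h->limit - h->alloc >= bytes) {
      void *result = (void *) h->alloc;
      h->alloc += bytes;
      return result;
    }
    /* Too small: the remaining (limit - alloc) bytes of this hole
       are wasted for the rest of the cycle. */
    (*cursor)++;
  }
  return NULL;   /* no hole fits: time to collect */
}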

The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However, I don't think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector, which will allow for evacuation.

So that's where I'm at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we'll see!

11 Jun 2025 8:56pm GMT

Alireza Shabani: Why GNOME’s Translation Platform Is Called “Damned Lies”

Damned Lies is the name of GNOME's web application for managing localization (l10n) across its projects. But why is it named this way?

Damned Lies about GNOME

Screenshot of GNOME Damned Lies from a Google search, with the title "Damned Lies about GNOME"

On the About page of GNOME's localization site, the only explanation given for the name Damned Lies is a link to a Wikipedia article called "Lies, damned lies, and statistics".

"Damned Lies" comes from the saying "Lies, damned lies, and statistics" which is a 19th-century phrase used to describe the persuasive power of statistics to bolster weak arguments, as described on Wikipedia. One of its earliest known uses appeared in a 1891 letter to the National Observer, which categorised lies into three types:

"Sir, -It has been wittily remarked that there are three kinds of falsehood: the first is a 'fib,' the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate of national pensions relies …"

To find out more, I asked in GNOME's i18n Matrix room, where Alexandre Franke helped a lot. He said:

Stats are indeed lies, in many ways.
Like if GNOME 48 gets 100% translated in your language on Damned Lies, it doesn't mean the version of GNOME 48 you have installed on your system is 100% translated, because the former is a real time stat for the branch and the latter is a snapshot (tarball) at a specific time.
So 48.1 gets released while the translation is at 99%, and then the translators complete the work, but you won't get the missing translations until 48.2 gets released.
Works the other way around: the translation is at 100% at the time of the release, but then there's a freeze exception and the stats go 99% while the released version is at 100%.
Or you are looking at an old version of GNOME for which there won't be any new release, which wasn't fully translated by the time of the latest release, but then a translator decided that they wanted to see 100% because the incomplete translation was not looking as nice as they'd like, and you end up with Damned Lies telling you that version of GNOME was fully translated when it never was and never will be.
All that to say that translators need to learn to work smart, at the right time, on the right modules, and not focus on the stats.

So there you have it: Damned Lies is a name that reminds us that numbers and statistics can be misleading, even on GNOME's l10n web application.

11 Jun 2025 1:32pm GMT

Varun R Mallya: The Design of Sysprof-eBPF

Sysprof

Sysprof is a tool for profiling applications on Linux. It tracks function calls and other system events to provide a detailed view of what is happening in the system. It is a powerful tool that can help developers optimize their applications and understand performance issues. Visit Sysprof for more information.

sysprof-ebpf

This is a project I am working on as part of GSoC 2025 mentored by Christian Hergert. The goal is to create a new backend for Sysprof that uses eBPF to collect profiling data. This will mostly serve as groundwork for the coming eBPF capabilities that will be added to Sysprof. This will hopefully also serve as the design documentation for anyone reading the code for Sysprof-eBPF in the future.

Testing

If you want to test out the current state of the code, you can do so by following these steps:

  1. Clone the repo and fetch my branch.
  2. Run the following script in the root of the project:
    #!/bin/bash
    set -euo pipefail
    GREEN="\033[0;32m"
    BLUE="\033[0;34m"
    RESET="\033[0m"
    
    prefix() {
        local tag="$1"
        while IFS= read -r line; do
            printf "%b[%s]%b %s\n" "$BLUE" "$tag" "$RESET" "$line"
        done
    }
    
    trap 'sudo pkill -f sysprofd; sudo pkill -f sysprof; exit 0' SIGINT SIGTERM
    
    meson setup build --reconfigure || true
    ninja -C build || exit 1
    sudo ninja -C build install || exit 1
    sudo systemctl restart polkit || exit 1
    
    # Run sysprofd and sysprof as root
    echo -e "${GREEN}Launching sysprofd and sysprof in parallel as root...${RESET}"
    
    sudo stdbuf -oL ./build/src/sysprofd/sysprofd 2>&1 | prefix "sysprofd" &
    sudo stdbuf -oL sysprof 2>&1 | prefix "sysprof" &
    
    wait
    

Capabilities of Sysprof-eBPF

sysprof-ebpf will be a subprocess created by sysprofd when the user selects the eBPF backend in the UI. I will be adding an options menu in the UI to choose which tracers to activate after I am done with the initial implementation. You can find my current dirty code here. As of writing this blog, the MR's capabilities are summarized in an image in the original post.

Follow up stuff

Structure of sysprof-ebpf

I initially planned on making this a single-threaded process, but it dawned on me that not all ring buffers will update at the same time, so polling them from one thread would block on I/O. I figured I'll just put each tracer in its own DexFuture to do the capture asynchronously. This has not been implemented as of writing this blog, though.
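As a rough sketch of that plan (the helper names here are hypothetical, and the libdex calls are the ones I'd expect to use; this is not code from the MR):

static DexFuture *
tracer_fiber (gpointer user_data)
{
  TracerData *tracer = user_data;  /* hypothetical per-tracer state */

  /* Each tracer runs in its own fiber, so waiting on one ring buffer
     only suspends this fiber instead of blocking the others. */
  while (tracer_poll_ring_buffer (tracer))  /* hypothetical */
    ;

  return dex_future_new_true ();
}

static DexFuture *
spawn_tracers (TracerData **tracers, guint n_tracers)
{
  GPtrArray *futures = g_ptr_array_new ();  /* ownership handling elided for brevity */

  for (guint i = 0; i < n_tracers; i++)
    g_ptr_array_add (futures,
                     dex_scheduler_spawn (dex_scheduler_get_default (),
                                          0, tracer_fiber, tracers[i], NULL));

  /* Resolves once every tracer fiber has exited. */
  return dex_future_allv ((DexFuture **) futures->pdata, futures->len);
}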


In general, the eBPF programs will follow the block diagram shown in the original post. I haven't made the config hashmap part of this yet, and I think I'll only add it if it's required in the future. None of the currently planned features need this config map, but it will certainly be useful if I ever need to make the program cross-platform or cross-kernel. This will be one of the last things I implement in the project.

Conclusion

I hope to make this a valuable addition to Sysprof. I will be writing more blogs as I make progress on the project. If you have any questions or suggestions, feel free to reach out to me on GitLab or Twitter. Also, I'd absolutely LOVE suggestions on how to improve the design of this project. I am still learning and I am open to any suggestions that can make this project better.

11 Jun 2025 12:00am GMT

10 Jun 2025


Adrian Vovk: Introducing stronger dependencies on systemd

Doesn't GNOME already depend on systemd?

Kinda… GNOME doesn't have a formal, well-defined policy in place about systemd. The rule of thumb is that GNOME doesn't strictly depend on systemd for critical desktop functionality, but individual features may break without it.

GNOME does strongly depend on logind, systemd's session and seat management service. GNOME first introduced support for logind in 2011, then in 2015 ConsoleKit support was removed and logind became a requirement. However, logind can exist in isolation from systemd: the modern elogind service does just that, and even back in 2015 there were alternatives available. Some distributors chose to patch ConsoleKit support back into GNOME. This way, GNOME can run in environments without systemd, including the BSDs.

While GNOME can run with other init systems, most upstream GNOME developers are not testing GNOME in these situations. Our automated testing infrastructure (i.e. GNOME OS) doesn't test any non-systemd codepaths. And many modules that have non-systemd codepaths do so with the expectation that someone else will maintain them and fix them when they break.

What's changing?

GNOME is about to gain a few strong dependencies on systemd, and this will make running GNOME harder in environments that don't have systemd available.

Let's start with the easier of the changes. GDM is gaining a dependency on systemd's userdb infrastructure. GNOME and systemd do not support running more than one graphical session under the same user account, but GDM supports multi-seat configurations and Remote Login with RDP. This means that GDM may try to display multiple login screens at once, and thus multiple graphical sessions at once. At the moment, GDM relies on legacy behaviors and straight-up hacks to get this working, but this solution is incompatible with the modern dbus-broker and so we're looking to clean this up. To that end, GDM now leverages systemd-userdb to dynamically allocate user accounts, and then runs each login screen as a unique user.

In the future, we plan to further depend on userdb by dropping the AccountsService daemon, which was designed to be a stop-gap measure for the lack of a rich user database. 15 years later, this "temporary" solution is still in use. Now that systemd's userdb enables rich user records, we can start work on replacing AccountsService.

Next, the bigger change. Since GNOME 3.34, gnome-session uses the systemd user instance to start and manage the various GNOME session services. When systemd is unavailable, gnome-session falls back to a builtin service manager. This builtin service manager uses .desktop files to start up the various GNOME session services, and then monitors them for failure. This code was initially implemented for GNOME 2.24, and is starting to show its age. It has received very minimal attention in the 17 years since it was first written. Really, there's no reason to keep maintaining a bespoke and somewhat primitive service manager when we have systemd at our disposal. The only reason this code hasn't completely bit rotted is the fact that GDM's aforementioned hacks break systemd and so we rely on the builtin service manager to launch the login screen.

Well, that has now changed. The hacks in GDM are gone, and the login screen's session is managed by systemd. This means that the builtin service manager will now be completely unused and untested. Moreover: we'd like to implement a session save/restore feature, but the builtin service manager interferes with that. For this reason, the code is being removed.

So what should distros without systemd do?

First, consider using GNOME with systemd. You'd be running in a configuration supported, endorsed, and understood by upstream. Failing that, though, you'll need to implement replacements for more systemd components, similarly to what you have done with elogind and eudev.

To help you out, I've put a temporary alternate code path into GDM that makes it possible to run GDM without an implementation of userdb. When compiled against elogind, instead of trying to allocate dynamic users GDM will look up and use the gdm-greeter user for the first login screen it spawns, gdm-greeter-2 for the second, and gdm-greeter-N for the Nth. GDM will have similar behavior with the gnome-initial-setup[-N] users. You can statically allocate as many of these users as necessary (as sketched below), and GDM will work with them for now. It's quite likely that this will be necessary for GNOME 49.
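For illustration only, a distribution could pre-allocate those accounts with a sysusers.d snippet along these lines (the file path and field choices are hypothetical; only the user names come from the paragraph above):

# /usr/lib/sysusers.d/gdm-greeters.conf (hypothetical)
# Type  Name                  ID  GECOS                   Home  Shell
u       gdm-greeter           -   "GDM Greeter"           -     -
u       gdm-greeter-2         -   "GDM Greeter (seat 2)"  -     -
u       gnome-initial-setup   -   "GNOME Initial Setup"   -     -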

Next: you'll need to deal with the removal of gnome-session's builtin service manager. If you don't have a service manager running in the user session, you'll need to get one. Just like system services, GNOME session services now install systemd unit files, and you'll have to replace these unit files with your own service manager's definitions (see the sketch below for the general shape). Next, you'll need to replace the "session leader" process: this is the main gnome-session binary that's launched by GDM to kick off session startup. The upstream session leader just talks to systemd over D-Bus to upload its environment variables and then start a unit, so you'll need to replace that with something that communicates with your service manager instead. Finally, you'll probably need to replace "gnome-session-ctl", which is a tiny helper binary that's used to coordinate between the session leader, the main D-Bus service, and systemd. It is also quite likely that this will be needed for GNOME 49.
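To give a feel for what needs replacing, here's the kind of user unit a session service might ship (an invented example for illustration, not an actual GNOME unit file):

# /usr/lib/systemd/user/org.gnome.ExampleService.service (hypothetical)
[Unit]
Description=Example GNOME session service
PartOf=graphical-session.target

[Service]
Type=dbus
BusName=org.gnome.ExampleService
ExecStart=/usr/libexec/gnome-example-service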

Finally: You should implement the necessary infrastructure for the userdb Varlink API to function. Once AccountsService is dropped and GNOME starts to depend more on userdb, the alternate code path will be removed from GDM. This will happen in some future GNOME release (50 or later). By then, you'll need at the very least:

Apologies for the short timeline, but this blog post could only be published after I knew exactly how I'm splitting up gnome-session into separate launcher and main D-Bus service processes. Keep in mind that GNOME 48 will continue to receive security and bug fixes until GNOME 50 is released. Thus, if you cannot address these changes in time, you have the option of holding back the GNOME version. If you can't do that, you might be able to get GNOME 49 running with gnome-session 48, though this is a configuration that won't be tested or supported upstream, so your mileage will vary (much like running GNOME on other init systems). Still, patching that scenario to work may buy you more time to upgrade to gnome-session 49.

And that should be all for now!

10 Jun 2025 10:48pm GMT

GNOME Foundation News: GNOME Has a New Infrastructure Partner: Welcome AWS!

This post was contributed by Andrea Veri from the GNOME Foundation.

GNOME has historically hosted its infrastructure on premises. That changed with an AWS Open Source Credits program sponsorship which has allowed our team of two SREs to migrate the majority of the workloads to the cloud and turn the existing OpenShift environment into a fully scalable and fault tolerant one thanks to the infrastructure provided by AWS. By moving to the cloud, we have dramatically reduced the maintenance burden, achieved lower latency for our users and contributors and increased security through better access controls.

Our original infrastructure did not account for the exponential growth that GNOME has seen in its contributors and userbase over the past 4-5 years thanks to the introduction of GNOME Circle. GNOME Circle is composed of applications that are not part of core GNOME but are meant to extend the ecosystem without being bound to the stricter core policies and release schedules. Contributions to these projects also make contributors eligible for GNOME Foundation membership, and can eventually earn them direct commit access to GitLab if their contributions are consistent over a long period of time, building trust with the community. GNOME recently migrated to GitLab, away from cgit and Bugzilla.

In this post, we'd like to share some of the improvements we've made as a result of our migration to the cloud.

A history of network and storage challenges

In 2020, we documented our main architectural challenges:

  1. Our infrastructure was built on OpenShift in a hyperconverged setup, using OpenShift Data Foundations (ODF), running Ceph and Rook behind the scenes. Our control plane and workloads were also running on top of the same nodes.
  2. Because GNOME historically did not have an L3 network, and there were generally no plans to upgrade the underlying network equipment or invest time in refactoring it, we would have to run our gateway using a plain Linux VM, with all the associated consequences.
  3. We also wanted to make use of an external Ceph cluster with slower storage, but this was not supported in ODF and required extra glue to make it work.
  4. No changes were planned on the networking equipment side to make links redundant. That meant a code upgrade on switches would have required full service downtime.
  5. We had to work with Dell support for every broken hardware component, which added further toil.
  6. With the GNOME user and contributor base always increasing, we never really had a good way to scale our compute resources due to budget constraints.

Cloud migration improvements

In 2024, during a hardware refresh cycle, we started evaluating the idea of migrating to the public cloud. We have been participating in the AWS Open Source Credits program for many years and received sponsorship for a set of Amazon Simple Storage Service (S3) buckets that we use widely across GNOME services. Based on our previous experience with the program and the people running it, we decided to request sponsorship from AWS for the entire infrastructure, which was kindly accepted.

I believe it's crucial to understand how AWS resolved the architectural challenges we had as a small SRE team (just two engineers!). Most importantly, the move dramatically reduced the maintenance toil we had:

  1. Using AWS's provided software-defined networking services, we no longer have to rely on an external team to apply changes to the underlying networking layout. This also gave us a way to use a redundant gateway and NAT without having to expose worker nodes to the internet.
  2. We now use AWS Elastic Load Balancing (ELB) instances (classic load balancers are the only type supported by OpenShift for now) as a traffic ingress for our OpenShift cluster. This reduces latency as we now operate within the same VPC instead of relying on an external load balancing provider. This also comes with the ability to have access to the security group APIs which we can use to dynamically add IP addresses. This is critical when we have individuals or organizations abusing specific GNOME services with thousands of queries per minute.
  3. We also use Amazon Elastic Block Store (EBS) and Amazon Elastic File System (EFS) via the OpenShift CSI driver. This allows us to avoid having to manage a Ceph cluster, which is a major win in terms of maintenance and operability.
  4. With AWS Graviton instances, we now have access to ARM64 machines, which we heavily leverage as they're generally cheaper than their Intel counterparts.
  5. Given how extensively we use Amazon S3 across the infrastructure, we were able to reduce latency and costs due to the use of internal VPC S3 endpoints.
  6. We took advantage of AWS Identity and Access Management (IAM) to provide granular access to AWS services, giving us the possibility to allow individual contributors to manage a limited set of resources without requiring higher privileges.
  7. We now have complete hardware management abstraction, which is vital for a team of only two engineers who are trying to avoid any additional maintenance burden.

Thank you, AWS!

I'd like to thank AWS for their sponsorship and the massive opportunity they are giving to the GNOME Infrastructure to provide resilient, stable and highly available workloads to GNOME's users and contributors across the globe.

10 Jun 2025 2:32pm GMT

09 Jun 2025


Daniel García Moreno: Log Detective: Google Summer of Code 2025

I'm glad to say that I'll participate again in the GSoC, as mentor. This year we will try to improve the RPM packaging workflow using AI, as part of the openSUSE project.

So this summer I'll be mentoring an intern that will research how to integrate Log Detective with openSUSE tooling to improve the packager workflow to maintain rpm packages.

Log Detective

Log Detective is an initiative created by the Fedora project, with the goal of

"Train an AI model to understand RPM build logs and explain the failure in simple words, with recommendations how to fix it. You won't need to open the logs at all."

As a project promoted by Fedora, it's highly integrated with the build tools around that distribution and RPM packages. But RPM packages are used in a lot of different distributions, so this "expert" LLM will be helpful for everyone doing RPM, and everyone doing RPM should contribute to it.

This is open source, so if, at openSUSE, we want to have something similar to improve the OBS, we don't need to reimplement it, we can collaborate. And that's the idea of this GSoC project.

We want to use Log Detective, but also collaborate with failures from openSUSE to improve the training and the AI, and this should benefit openSUSE but also will benefit Fedora and all other RPM based distributions.

The intern

The selected intern is Aazam Thakur. He studies at the University of Mumbai, India. He has experience with SUSE, having worked on RPM packaging for SLES 15.6 during his previous summer mentorship at the OpenMainFrame Project.

I'm sure that he will be able to achieve great things during these three months. The project looks very promising, and it's one of the areas where AI and LLMs will shine: digging into logs is always difficult, and an LLM trained on a lot of data can be really useful for categorizing failures and giving a short description of what's happening.

09 Jun 2025 12:00pm GMT

08 Jun 2025


Tanmay Patil: Acrostic Generator for GNOME Crossword Editor

The experimental Acrostic Generator has finally landed inside the Crossword editor and is currently tagged as BETA.
I'd classify this as one of the trickiest and most interesting projects I've worked on.
Here's what an acrostic puzzle loaded inside the Crossword editor looks like:

In my previous blog post (published about a year ago), I explained one part of the generator. Since then, there have been many improvements.
I won't go into detail about what an acrostic puzzle is, as I've covered that in multiple previous posts already.
If you're unfamiliar, please check out my earlier post for a brief idea.

Coming to the Acrostic Generator, I'll begin with an illustration of its input and the corresponding output it generates. After that, I'll walk through the implementation and the challenges I faced.

Let's take the quote: "CATS ALWAYS TAKE NAPS" whose author is a "CAT".

Here's what the Acrostic Generator essentially does:

It generates answers like "CATSPAW", "ALASKAN" and "TYES" which, as you can probably guess from the color coding, are made up of letters from the original quote.

Core Components

Before explaining how the Acrostic generator works, I want to briefly explain some of the key components involved.
1. Word list
The word list is an important part of Crosswords. It provides APIs to efficiently search for words. Refer to the documentation to understand how it works.
2. IpuzCharset
The performance of the Acrostic Generator heavily depends on IpuzCharset, which is essentially a HashMap that stores characters and their frequencies.
We perform numerous ipuz_charset_add_text and ipuz_charset_remove_text operations on the QUOTE charset. I'd especially like to highlight ipuz_charset_remove_text, which used to be computationally very slow. Last year, the charset was rewritten in Rust by Federico. Compared to the earlier C implementation using a GTree, the Rust version turned out to be significantly faster.
Here's Federico's blog post on rustifying libipuz's charset.
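Conceptually, the engine treats the quote as a multiset of letters. A simplified usage sketch (the builder constructor name here is my guess; the add/remove calls are the ones used in the engine code below):

IpuzCharsetBuilder *quote;

quote = ipuz_charset_builder_new_from_text ("CATSALWAYSTAKENAPS");

/* Removing a word succeeds only if every one of its letters is still
   available in the multiset, consuming those letters. */
if (ipuz_charset_builder_remove_text (quote, "ALASKAN"))
  {
    /* ... recurse on the next clue; when that fails, backtrack by
       putting the letters back: */
    ipuz_charset_builder_add_text (quote, "ALASKAN");
  }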

Why is ipuz_charset_remove_text latency so important? Let's consider the following example:

QUOTE: "CARNEGIE VISITED PRINCETON AND TOLD WILSON WHAT HIS YOUNG MEN NEEDED WAS NOT A LAW SCHOOL BUT A LAKE TO ROW ON IN ADDITION TO BEING A SPORT THAT BUILT CHARACTER AND WOULD LET THE UNDERGRADUATES RELAX ROWING WOULD KEEP THEM FROM PLAYING FOOTBALL A ROUGHNECK SPORT CARNEGIE DETESTED"
SOURCE: "DAVID HALBERSTAM THE AMATEURS"

In this case, the maximum number of ipuz_charset_remove_text operations required in the worst case would be:

73205424239083486088110552395002236620343529838736721637033364389888000000

…which is a lot.

Terminology

A few things to note before we continue.
1. Answers and clues refer to the same thing: they are the solutions generated by the Acrostic Generator. I'll be using the terms interchangeably throughout.
2. We've set two constants in the engine: MIN_WORD_SIZE = 3 and MAX_WORD_SIZE = 20. These make sure the answers are neither too short nor too long, and help stop the engine from running indefinitely.
3. Leading characters are the characters of the source; each one is the first letter of the corresponding answer.

Setting up things

Before running the engine, we need to set up some data structures to store the results.

typedef struct {
  /* Represents an answer */
  gunichar leading_char;
  const gchar *letters;
  guint word_length;

  /* Searching for the answer */
  gchar *filter;
  WordList *word_list;
  GArray *rand_offset;
} ClueEntry;

We use a ClueEntry structure to store the answer for each clue. It holds the leading character (from the source), the letters of the answer, the word length, and some additional word list information.
Oh wait, why do we need the word length since we are already storing letters of the answer?
Let's backtrack. Initially, I wrote the following brute-force recursive algorithm:

void
acrostic_generator_helper (AcrosticGenerator *self,
                           gchar              nth_source_char)
{
  // Iterate from min_word_size to max_word_size for every answer
  for (word_length = min_word_size; word_length <= max_word_size; word_length++)
    {
      // Get the list of words starting with `nth_source_char`
      // and with length equal to word_length
      word_list = get_word_list (starting_letter = nth_source_char, word_length);

      // Iterate through the word list
      for (guint i = 0; i < word_list_get_n_items (word_list); i++)
        {
          word = word_list[i];

          // Check if the word can be taken from the quote charset
          if (ipuz_charset_remove_text (quote_charset, word))
            {
              // If so, move forward to the next source char
              acrostic_generator_helper (self, nth_source_char + 1);
            }
        }
    }
}

The problem with this approach is that it is too slow. We iterate from MIN_WORD_SIZE to MAX_WORD_SIZE and try to find a solution for every possible size. Yes, this would work and we would eventually find a solution, but it would take a lot of time. Also, many of the answers for the initial source characters would end up having length equal to MIN_WORD_SIZE.
To quantify this: compared to the latest approach (which I'll discuss shortly), we would be performing roughly 20 times the current number (7.3 × 10⁷³) of ipuz_charset_remove_text operations.

To fix this, we added randomness by calculating and assigning random lengths to clue answers before running the engine.
To generate these random lengths, we break a number equal to the length of the quote string into n parts (where n is the number of source characters), each part having a random value.

static gboolean
generate_random_lengths (GArray *clues,
                         guint   number,
                         guint   min_word_size,
                         guint   max_word_size)
{
  if ((clues->len * max_word_size) < number)
    return FALSE;

  guint sum = 0;

  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;
      guint len;
      guint max_len = MAX (min_word_size,
                           MIN (max_word_size, number - sum));

      len = rand () % (max_len - min_word_size + 1) + min_word_size;
      sum += len;

      clue_entry = &(g_array_index (clues, ClueEntry, i));
      clue_entry->word_length = len;
    }

  return sum == number;
}

I have been continuously researching ways to generate random lengths that help the generator find answers as quickly as possible.
What I concluded is that the Acrostic Generator performs best when the word lengths follow a right-skewed distribution.

static void
fill_clue_entries (GArray           *clues,
                   ClueScore        *candidates,
                   WordListResource *resource)
{
  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;

      clue_entry = &(g_array_index (clues, ClueEntry, i));

      // Generate a filter to get words whose first letter is the
      // nth char of the source string.
      // For eg. char = D, answer_len = 5 => filter = "D????"
      clue_entry->filter = generate_individual_filter (clue_entry->leading_char,
                                                       clue_entry->word_length);

      // Load all words whose first letter equals the nth char of the
      // source string
      clue_entry->word_list = word_list_new ();
      word_list_set_resource (clue_entry->word_list, resource);
      word_list_set_filter (clue_entry->word_list, clue_entry->filter, WORD_LIST_MATCH);

      candidates[i].index = i;
      candidates[i].score = clue_entry->word_length;

      // Randomize the word list, which is sorted by default
      clue_entry->rand_offset = generate_random_lookup (word_list_get_n_items (clue_entry->word_list));
    }
}

Now that we have random lengths, we fill up the ClueEntry data structure.
Here, we generate individual filters for each clue, which are used to set the filter on each word list. For example, the filters for the example illustrated above are C??????, A??????, and T???.
We also maintain a separate word list for each clue entry. Note that we do not store the huge word list individually for every clue. Instead, each word list object refers to the same memory-mapped word list resource.
Additionally, each clue entry contains a random offsets array, which stores a randomized order of indices. We use this to traverse the filtered word list in a random order. This randomness helps fix the problem where many answers for the initial source characters would otherwise end up with length equal to MIN_WORD_SIZE.
The advantage of pre-calculating all of this before running the engine is that the main engine loop only performs the heavy operations: ipuz_charset_remove_text and ipuz_charset_add_text.

static gboolean
acrostic_generator_helper (AcrosticGenerator  *self,
                           GArray             *clues,
                           guint               index,
                           IpuzCharsetBuilder *remaining_letters,
                           ClueScore          *candidates)
{
  ClueEntry *clue_entry;

  if (index == clues->len)
    return TRUE;

  clue_entry = &(g_array_index (clues, ClueEntry, candidates[index].index));

  for (guint i = 0; i < word_list_get_n_items (clue_entry->word_list); i++)
    {
      const gchar *word;

      g_atomic_int_inc (self->count);

      // Traverse based on the random indices
      word = word_list_get_word (clue_entry->word_list,
                                 g_array_index (clue_entry->rand_offset, gushort, i));

      clue_entry->letters = word;

      if (ipuz_charset_builder_remove_text (remaining_letters, word + 1))
        {
          if (!add_or_skip_word (self, word) &&
              acrostic_generator_helper (self, clues, index + 1, remaining_letters, candidates))
            return TRUE;

          clean_up_word (self, word);
          ipuz_charset_builder_add_text (remaining_letters, word + 1);
          clue_entry->letters = NULL;
        }
    }

  clue_entry->letters = NULL;

  return FALSE;
}

The approach is quite simple. As you can see in the code above, we perform ipuz_charset_remove_text many times, so it was crucial to make that operation efficient.
When all the characters in the charset have been used up and the index becomes equal to the number of clues, it means we have found a solution. At this point we store the answers in an array and continue our search for new answers until we receive a stop signal.
We also maintain a skip list that is updated whenever we find a clue answer and cleaned up during backtracking. This makes sure there are no duplicate answers in the answers list.

Performance Improvements

I compared the performance of the acrostic generator using the current Rust charset implementation against the previous C GTree implementation. I have used the following quote and source strings with the same RNG seed for both implementations:

QUOTE: "To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment."
SOURCE: "TBYIWTCTMYSEGA"
Results:
+-----------------+--------------------+
| Implementation  | Time taken (secs)  |
+-----------------+--------------------+
| C GTree         | 74.39              |
| Rust HashMap    | 17.85              |
+-----------------+--------------------+

The Rust HashMap implementation is nearly 4 times faster than the original C GTree version for the same random seed and traversal order.

I have also been testing the generator to find small performance improvements. Here are some of them:

  1. When searching for answers, handling the clues with longer word lengths first helps find solutions faster.
  2. We switched to using nohash_hasher for the hashmap because we are essentially storing {char: frequency} pairs. Trace reports showed that significant time and resources were spent computing hashes with Rust's default SipHash implementation, which was unnecessary here. MR
  3. Inside ipuz_charset_remove_text, instead of cloning the original data, we use a rollback mechanism that tracks all modifications and rolls them back in case of failure. MR

I also remember running the generator on some quote and source input back in the early days. It ran continuously for four hours and still couldn't find a single solution; we even overflowed the gint counter that tracks the number of words tried. Now, the same generator can return 10 solutions in under 10 seconds. We've come a long way! 😀

Crossword Editor

Now that I've covered the engine, I'll talk about the UI part.
We started off by sketching potential designs on paper. @jrb came up with a good design, and we decided to move forward with it after a few tweaks.

First, we needed to display a list of the generated answers.

For this, I implemented my own list model where each item stores a string for the answer and a boolean indicating whether the user wants to apply that answer.
To let the user run and stop the generator and then apply answers, we reused the compact version of the original autofill component used in normal crosswords. The answer list is updated whenever the slider is moved.

We have tried to reuse as much code as possible for acrostics, keeping most of the code common between acrostics and normal crosswords.
Here's a quick demo of the acrostic editor in action:

We also maintain a cute little histogram on the right side of the bottom panel to summarize clue lengths.

You can also try out the Acrostic Generator using our CLI app, which I originally wrote to quickly test the engine. To use the binary, you'll need to build Crosswords Editor locally. Example usage:

$ ./_build/src/acrostic-generator -q "For most of history, Anonymous was a woman. I would venture to guess that Anon, who wrote so many poems without signing them, was often a woman. And it is for this reason that I would implore women to write all the more" -s "Virginia wolf"
Starting acrostic generator. Press Ctrl+C to cancel.
[ VASOTOMY ] [ IMFROMMISSOURI ] [ ROMANIANMONETARYUNIT ] [ GREATFEATSOFSTRENGTH ] [ ITHOUGHTWEHADADEAL ] [ NEWSSHOW ] [ INSTITUTION ] [ AWAYWITHWORDS ] [ WOOLSORTERSPNEUMONIA ] [ ONEWOMANSHOWS ] [ LOWMANONTHETOTEMPOLE ] [ FLOWOUT ]
[ VALOROUSNESS ] [ IMMUNOSUPPRESSOR ] [ RIGHTEOUSINDIGNATION ] [ GATEWAYTOTHEWEST ] [ IWANTYOUTOWANTME ] [ NEWTONSLAWOFMOTION ] [ IMTOOOLDFORTHISSHIT ] [ ANYONEWHOHADAHEART ] [ WOWMOMENT ] [ OMERS ] [ LAWUNTOHIMSELF ] [ FORMATWAR ]

Plans for the future

To begin with, we'd really like to improve the overall design of the Acrostic Editor and make it more user friendly. Let us know if you have any design ideas, we'd love to hear your suggestions!
I've also been thinking about different algorithms for generating answers in the Acrostic Generator. One idea is to use a divide-and-conquer approach, where we recursively split the quote until we find a set of sub-quotes that satisfies all the answer constraints.

To conclude, here's an acrostic for you all to solve, created using the Acrostic Editor! You can load the file in Crosswords and start playing.

Thanks for reading!

08 Jun 2025 9:42am GMT

Jordan Petridis: An update on the X11 GNOME Session Removal

A year and a half ago, shortly after the GNOME 45 release, I opened a pair of Pull Requests to deprecate and remove the X11 Session.

A lot has happened since. The GNOME 48 release addressed all the remaining blocking issues, mainly accessibility regressions, but it was too late in the development cycle to drop the session as well.

Now the time has come.

We went ahead and disabled the X11 session by default, and from now on it needs to be explicitly enabled when building the affected modules (gnome-session, GDM, mutter/gnome-shell). This does not affect XWayland; it's only about the X11/Xorg session and related functionality. GDM's ability to launch other X11 sessions will also be preserved.

Usually we release a single Alpha snapshot, but this time we have released earlier snapshots (49.alpha.0), 3 weeks ahead of the normal schedule, to gather as much feedback and testing as possible. (There will be another snapshot alongside the complete GNOME 49 Alpha release.)

If you are a distributor, please try to not change the default or at least let us (or me directly) know why you'd need to still ship the X11 session.

As I mentioned in the tracking issue, there are 3 possible scenarios.

The most likely scenario is that all the X11 session code stays disabled by default for 49 with a planned removal for GNOME 50.

The ideal scenario is that everything is perfect, there are no more issues and bugs, we can go ahead and drop all the code before GNOME 49.beta.

And the very unlikely scenario is that we discover some deal-breaking issue, revert the changes and postpone the whole thing.

Having gathered feedback from our distribution partners, it now depends entirely on how well the early testing will go and what bugs will be uncovered.

You can test GNOME OS Nightly with all the changes today. We found a couple of minor issues, but everything is fixed in the alpha.0 snapshot. Given how smoothly things are going so far, I believe there is a high likelihood there won't be any further issues and we might be able to proceed with the ideal scenario.

TLDR: The X11 session for GNOME 49 will be disabled by default and it's scheduled for removal, either during this development cycle or more likely during the next one (GNOME 50). There are release snapshots of 49.alpha.0 for some modules already available. Go and try them out!

Happy Pride month and Free Palestine ✊

08 Jun 2025 5:57am GMT

07 Jun 2025


Steven Deobald: 2025-06-06 Foundation Report

Imagine a punchy, news-broadcast-sounding intro tune and probably some 3D text swinging around a shiny, silver globe. Dun da da dun: The June 6th, 2025 GNOME Foundation Report!

Sorry. These reports need a little colour or I'm going to get bored of writing them. Also sorry this one is late again! Busy week.

## Fundraising

This week's big activity (for me) was preparing a fundraising proposal for the Board of Directors at a special meeting on Tuesday. The day before, everyone on staff patiently listened to me shout and spit and sweat and then patiently gave me feedback. Thanks y'all.

Sidenote: I love the notion of a "special meeting." I know it's not meant to feel cute and silly, but it feels very cute and silly. That said, we got a lot done!

The Board is on-board. Yay. We had a project kickoff the next day. We have a repo, we have some early work done already. I'm not allowed to make any promises. 😉 So you'll just have to watch this space, I guess.

## Treasurer

During the special meeting, it was voted that we would make an offer to a new Treasurer. I'm really looking forward to this announcement if they accept!

## Project Wall!

The Staff project wall is really taking off, and it feels like we have some momentum with it now. Too many things in flight and too many cards in the "blocked" column, but we're steadily improving.

## Vaultwarden

Bart's hooked us up with Vaultwarden for the Foundation's shared passwords, as our tooling was a little broken and/or scattered previously. Yay! Thanks Bart.

## Digital Wellbeing Frontend

We held a Digital Wellbeing meeting on Tuesday and we now have a Call for Proposals up:

https://discourse.gnome.org/t/request-for-proposals-digital-wellbeing-frontend/29289

If you still know C and you want to help take this project over the line, it's a neat piece of integration work.

## 501(c)3s

I met my friend Brihas, who is also an Executive Director of another 501(c)3. It was good to pick his brain about:

## Meeting The Matts

I had a chance to sit down with an old friend (Matt Godbolt, of Compiler Explorer fame) and a new friend (Matt Hartley, of Framework Computer fame). We talked variously about how to raise money, the future of the Linux desktop, the "sandwich problem" (that GNOME neither has the name recognition of Linux nor the product recognition of distros), and the fact that every cool kid at Strange Loop 2024 was running a Framework, not a Mac.

I left both calls super excited to talk to them both again. Great folks. (I also just noticed their respective websites have very similar gear favicons.)

## Grants

I got to talk to Richard! He's still very busy. He had some grant suggestions. It was nice to see him.

## End of 10

I've still got an eye toward the https://endof10.org/ project. Increasingly, I have a fantasy of a simple, brightly-coloured A4 sheet that explains how to get started with GNOME, in ~6 steps, if you're coming from Windows:

Extras for the back side of the paper:

What do you think? Would you want to help with this? Is this a silly idea? Does this already exist somewhere?

## Meeting People

I had a nice conversation with Lorenz, as he's the only Board candidate I hadn't spoken to yet. I met Sumana Harihareswara, who is extremely cool, and I ran out of time while picking her brain about the various ways the GNOME Foundation can start its own grants program. I got some advice from Federico about how to improve our docs-creation process… among other things, he had the pretty sensible idea of just letting people barf streams of consciousness at me (or other folks comfortable with reStructuredText) and letting the documentation gnomes clean it up before publishing. Seems legit! I had my first formal feedback session with Rosanna: she had prepared a 5-point structured document, and I had to admit to her it was the most rigorous feedback I've ever received. 🙂

## UN Open Source Week

I found a couch to crash on in NYC and a cheap flight, so I'll be there! If you're in NYC the week of the 16th to the 20th, reach out!

That's all for this week. See you in the next one and I'm sorry I didn't make it in time for TWIG again.

07 Jun 2025 4:14am GMT

06 Jun 2025


Luis Villa: book reports, mid-2025

Some brief notes on books, at the start of a summer that hopefully will allow for more reading.

Monk and Robot (Becky Chambers); Mossa and Pleiti (Malka Older)

Summer reading rec, and ask for more recs: "cozy sci-fi" is now a thing and I love it. Characters going through life, drinking hot beverages, trying to be comfortable despite (waves hands) everything. Mostly coincidentally, doing all those things on post-dystopian, far-away planets (one fictional, one Jupiter).

Novellas, perfect for summer reads. Find a sunny nook (or better yet, a rainy summer day nook) and enjoy. (New Mossa and Pleiti comes out Tuesday, yay!)

Buzz Aldrin in the Apollo 11 capsule: a complex socio-technical system, bounding boldly, perhaps foolishly, into the future. (Original via NASA)

Underground Empire (Henry Farrell and Abraham Newman)

This book is about things I know a fair bit about, like international trade sanctions, money transfers, and technology (particularly the intersection of spying and data pipes). So in some sense I learned very little.

But the book efficiently crystallizes all that knowledge into a very dense, smart, important observation: that some aspects of American so-called "soft" (i.e., non-military) power are increasingly very "hard". To paraphrase, the book's core claim is that the US has, since 2001, amassed what amounts to several fragmentary "Departments of Economic War". These mechanisms use control over financial and IP transfers to allow whoever is in power in DC to fight whomever it wants: primarily China, Russia, and Iran, but also, to some extent, entities as big as the EU and as small as individual cargo ship captains.

The results are many. Among other things, the authors conclude that because this change is not widely noticed, it is undertheorized, and so many of the players lack the intellectual toolkit to reason about it. Relatedly, they argue that the entire international system is currently more fragile and unstable than it has been in a long time exactly because of this dynamic: the US's long-standing military power is now matched by globe-spanning economic control that previous US governments have mostly lacked, which in turn is causing the EU and China to try to build their own countervailing mechanisms. But everyone involved is feeling their way through it, which can easily lead to spirals. (Threaded throughout the book, but only rarely explicitly discussed, is the role of democracy in all of this; suffice to say that, as told here, it is rarely a constraining factor.)

Tech as we normally think of it is not a big player here, but it nevertheless plays several illustrative parts. Microsoft's historical turn from government fighter to Ukraine supporter, Meta's failed cryptocurrency, and various wiretapping efforts come up for discussion, but mostly in contexts that are very reactive to, or provocative irritants to, the 800 lb gorillas of IRL governments.

Unusually for my past book reports on governance and power, where I've been known to stretch almost anything into an allegory for open, I'm not sure that this one has many parallels. Rather, the relevance to open is that these are a series of fights that open may increasingly be drawn into, and/or destabilize. Ultimately, one way of thinking about this modern form of power dynamics is that it is a governmental search for "chokepoints" that can be used to force others to bend the knee, and a corresponding distaste for sources of independent power that have no obvious chokepoints. That's a legitimately complicated problem (the authors have some interesting discussion with Vitalik Buterin about it), and open, like everyone else, is going to have to adapt.

Dying Every Day: Seneca at the Court of Nero (James Romm)

Good news: this book documents that being a thoughtful person, seeking good in the world, in the time of a mad king, is not a new problem.

Bad news: this book mostly documents that the ancients didn't have better answers to this problem than we moderns do.

The Challenger Launch Decision (Diane Vaughan)

The research and history in this book are amazing, but the terminology does not quite capture what it is trying to share out as learnings. (It's also very dry.)

The key takeaway: good people, doing hard work, in systems that slowly learn to handle variation, can be completely unprepared for, and incapable of handling, things outside the scope of that variation.

It's definitely the best book about the political analysis of the New York Times in the age of the modern GOP. Also probably good for a lot of technical organizations handling the radical-but-seemingly-small changes detailed in Underground Empire.

Spacesuit: Fashioning Apollo (Nicholas De Monchaux)

A book about how interfaces between humans and technology are hard. (I mean clothes, but also everything else.) Delightful and wide-ranging; you maybe won't learn any deep lessons here, but it'd be a great way to force undergrads to grapple with Hard Human Problems That Engineers Thought Would Be Simple.

06 Jun 2025 7:49pm GMT

Jonathan Blandford: Crosswords 0.3.15: Planet Crosswords

It's summer, which means it's time for GSoC/Outreachy. This is the third year the Crosswords team is participating, and it has been fantastic. We had a noticeably large number of really strong candidates who showed up and wrote high-quality submissions: significantly more than in previous years. There were more candidates than we could handle, and it was a shame to have to turn some down.

In the end, Tanmay, Federico, and I got together and decided to stretch ourselves and accept three interns for the summer: Nancy, Toluwaleke, and Victor. They will be working on word lists, printing, and overlays respectively, and I'm so thrilled to have them helping out.

A result of this is that there will be a larger number of Crossword posts on planet.gnome.org this summer. I hope everyone is okay with that and encourages the interns so they stay involved with GNOME and Free Software.

Release

This last release was mostly a bugfix release. The intern candidates outdid themselves this year by fixing a large number of bugs: so many that I'm releasing this version to get the fixes to users. Some highlights:

Arabic Crossword
Divided Cells

In addition, GSoC-alum Tanmay has kept plugging on his Acrostic editor. It's gotten a lot more sophisticated, and for the first time we're including it in the stable build (albeit as a Beta). This version can be used to create a simple acrostic puzzle. I'll let Tanmay post about it in the coming days.

Coordinates

Specs are hard, especially for file formats. We made an unfortunate discovery about the ipuz spec this cycle. The spec uses a coordinate system to refer to cells in a puzzle, but does not define what that coordinate system means. It provides an example with the upper left corner being (0,0), which is intuitively a normal addressing system. However, the spec refers to (ROW1, COL1), and a few examples in the spec start the upper left at (1,1).

When we ran across this issue while writing libipuz, we tried a few puzzles in puzzazz (the original implementation) to confirm that (0,0) was the intended origin coordinate. However, we have since run across some implementations and puzzles in the wild starting at (1,1). This is going to be pretty painful to untangle, as the two interpretations are largely incompatible. We have a plan to detect the coordinate system being used, but it'll be a rough heuristic at best until the spec gets clarified and revamped.
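For illustration, one such heuristic might look like this (my sketch of the general idea, not necessarily the detection Crosswords will ship):

typedef struct { int row; int col; } Coord;

/* If any referenced cell uses row 0 or column 0, the puzzle must be
 * (0,0)-based; if none does, (1,1)-based is the likelier reading,
 * though a (0,0)-based puzzle that never touches row or column 0
 * would still be misdetected. */
static gboolean
coords_look_one_based (const Coord *coords, guint n_coords)
{
  for (guint i = 0; i < n_coords; i++)
    if (coords[i].row == 0 || coords[i].col == 0)
      return FALSE;

  return TRUE;
}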

By the Numbers

With this release, I took a step back and took stock of my little project. The recent releases have seemed pretty substantial, and it's worth doing a little introspection. As of this release, we've reached:

All in all, not too shabby, and not so little anymore.

A Final Request

Crosswords has an official flatpak, an unofficial snap, and Fedora and Arch packages. People have built it on Macs, and there's even an APK. However, there's still no Debian package. That distro is not my world; I'm hoping someone out there will be inspired to package this project for us.

06 Jun 2025 3:43pm GMT

Jussi Pakkanen: Custom C++ stdlib part 3: The bleedingest edge variant

Implementing a variant type in C++ is challenging to say the least. I tried looking into the libstdc++ implementation and could not even decipher where the actual data is stored. There is a lot of inheritance going on, plus helper classes that seem to be doing custom vtable construction and other metaprogramming stuff. The only thing I could truly grasp was a comment saying // "These go to eleven". Sadly there was not a comment // Smell my glove! which would seem more suitable for this occasion.

A modern stdlib does need a variant, though, so I had to implement one. To make it feasible I made the following simplifying assumptions.

  1. All types handled must be noexcept default constructible, move constructible and movable. (i.e. the WellBehaved concept)
  2. If you have a properly allocated and aligned piece of memory, placement new'ing into it works (there may be UB-shenanigans here due to the memory model)
  3. The number of different types that a variant can hold has a predefined static maximum value.
  4. You don't need to support any C++ version older than C++26.

The last one of these is the biggest hurdle, as C++26 will not be released for at least a year. GCC 15 does have support for it, though, so all code below only works with that.

The implementation

At its core, a Pystd variant is nothing more than a byte buffer and an index specifying which type it holds:

template<typename... T>
class Variant {
    <other stuff>
    alignas(compute_alignment<0, T...>()) char buf[compute_size<0, T...>()];
    int8_t type_id;
};
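For reference, the two helper functions might look something like this (my sketch, assuming the recursive-index style implied by the <0, T...> call sites; needs <algorithm> for the constexpr std::max):

template<size_t i, typename... T>
consteval size_t compute_size() {
    if constexpr (i < sizeof...(T))
        return std::max(sizeof(T...[i]), compute_size<i + 1, T...>());
    else
        return 1;   // even an empty variant needs some storage
}

template<size_t i, typename... T>
consteval size_t compute_alignment() {
    if constexpr (i < sizeof...(T))
        return std::max(alignof(T...[i]), compute_alignment<i + 1, T...>());
    else
        return 1;
}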

The functions to compute max size and alignment requirements for types are simple to implement. The main problem lies elsewhere, specifically: going from a type to the corresponding index, going from a compile time index value to a type and going from a runtime index to the corresponding type.

The middle one is the simplest of these. As of C++26 you can directly index the argument pack like so:

using nth_type = T...[compile_time_constant];

Going from type to an index is only slightly more difficult:
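One way to do it (a sketch, not necessarily Pystd's exact code; needs <type_traits>) is to recurse through the pack, counting until the types match:

template<typename Sought, typename First, typename... Rest>
consteval int8_t type_to_index() {
    if constexpr (std::is_same_v<Sought, First>) {
        return 0;
    } else {
        static_assert(sizeof...(Rest) > 0, "Type not in this variant.");
        return 1 + type_to_index<Sought, Rest...>();
    }
}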

Going from a runtime value to a type is the difficult one. I don't know how to do it "correctly", i.e. the way a proper stdlib implementation does it. However, since the number of possible types is limited at compile time, we can cheat (currently only 5 types are supported):
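The cheat is a hand-unrolled switch over every possible index. Inside the Variant class above, a destroy operation could be dispatched like this (illustrative member functions with invented names; if constexpr discards the cases beyond the pack's actual size):

template<size_t i>
void destroy_held() {
    if constexpr (i < sizeof...(T)) {
        using V = T...[i];
        reinterpret_cast<V *>(buf)->~V();
    }
}

void destroy() {
    switch (type_id) {
    case 0: destroy_held<0>(); break;
    case 1: destroy_held<1>(); break;
    case 2: destroy_held<2>(); break;
    case 3: destroy_held<3>(); break;
    case 4: destroy_held<4>(); break;
    }
}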

Unrolling (type) loops like it's 1988! This means you can't have variants with hundreds of different types, but in that case you probably need an architectural redesign rather than a more capable variant.

With these primitives implementing public class methods is fairly simple.

The end result

The variant implementation in Pystd and its helper code took approximately 200 lines of code. It handles all the basic stuff, in addition to being exception safe for copy operations (implemented as copy to a local variable + move). Compile times remain at fractions of a second per file, even though Pystd only has a single public header.

It works in the sense that you can put different types into it, switch between them, and so on, without any compiler warnings, sanitizer issues or Valgrind complaints. Still, be careful with the code; I have only tested it, not proven it correct.

No performance optimization or even measurements have been made.


06 Jun 2025 3:30pm GMT

This Week in GNOME: #203 Infinitely Proud

This Week in GNOME, and this entire month, is dedicated to the joys and struggles of all two-spirit, lesbian, gay, bi, trans, queer, inter, pan, asexual, aromantic, and non-binary people.

We celebrate the invaluable work of all 2SLGBTQIA+ contributors and users, across all different backgrounds and experiences. As a special highlight this month and to feel proud all year round, we have worked together to create two new desktop backgrounds, released with GNOME 48.2.

If your distribution does not yet provide the new backgrounds, you can download them manually from here:

We can't afford to stay silent in times when history is literally being erased, and fundamental human rights are being revoked. Silence is complicity. We will not falter at this attempt to divide queer communities. We also encourage everyone to be as outspoken as they can be.

Never forget: We are stronger together.

In light of these circumstances it is especially encouraging to see the community of queer contributors growing steadily. We are here and we are not going anywhere - the GNOME community is and will always stand with queer people. We've got your back.

Events

Tobias Bernard reports

This summer we're asking the question: What if we just started using GNOME OS as our primary OS?

It's still early days for GNOME OS, but it's finally ready for wider testing by developers and early adopters, on real hardware. Join us for a 3-month challenge from today until September 1st, file and fix some issues, and win a OnePlus 6 with Linux Mobile or a limited-edition shirt 🌈👕

Blog post with more details: https://blogs.gnome.org/tbernard/2025/06/01/summer-of-gnome-os

GNOME Core Apps and Libraries

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 says

Heads-up: GTK changed GtkImage behavior when displaying GdkPaintable to strictly use the :pixel-size property and/or -gtk-icon-size CSS property instead of stretching the paintable to the allocated size.

The change is available in the nightly SDK and will be in GTK 4.19.2 and eventually in GNOME 49 SDK, but not in any stable releases/SDK. If your app relies on that (such as for displaying covers or avatars), it may need an update.
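
In practice that means paintable-backed images should request their render size explicitly. Here is a sketch of the fix using the gtkmm bindings (the function and variable names are mine, not from the announcement):

#include <gtkmm.h>

// Sketch: with the new behavior, :pixel-size (or -gtk-icon-size in CSS)
// controls how large the paintable is drawn; it is no longer stretched
// to fill the widget's allocation.
Gtk::Image* make_cover_image(const Glib::RefPtr<Gdk::Paintable>& cover) {
    auto* image = Gtk::make_managed<Gtk::Image>();
    image->property_paintable() = cover;
    image->set_pixel_size(96); // explicit render size for the cover
    return image;
}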

GNOME Incubating Apps

Pablo Correa Gomez announces

After months of technical debt cleanups, architectural changes, and small UX improvements, Papers has landed a considerable rework of the user interface for creating and editing annotations. New simplified shortcuts have been added, the number of clicks to create highlight (and similar) annotations has been reduced, and it's now possible to dynamically change color and annotation type right from the context menu! This has been a much-requested feature and truly a team effort between all the Papers maintainers: Qiu Wenbo, camelcasenick, lbaudin, and me, as well as other community members like our newest GSoC student Ahmed Fatthi. We hope you all enjoy it!

GNOME Circle Apps and Libraries

Gaphor

A simple UML and SysML modeling tool.

Arjan announces

Gaphor 3.1.0 has been released. Among the improvements are:

  • You can copy from a diagram and paste the diagram directly as SVG or PNG in another application.
  • Many UI improvements. Gaphor now feels more GNOME-ish than ever.
  • For those of you who run Gaphor on macOS: Gaphor now has a proper menu bar.

Apostrophe

A distraction free Markdown editor.

Manu (he/they/she) reports

These past weeks I've implemented crash recovery in Apostrophe. If for some reason the application closes before a file has been properly saved or discarded, it'll be restored the next time you open Apostrophe. Then you'll be able to save the changes, discard them, or continue working on the file where you left off.

Third Party Projects

Hari Rana (TheEvilSkeleton) reports

Starting from version 3.1.2, the GNU Image Manipulation Program will have the option to respect the system color scheme on Linux, thanks to XDG Desktop Portal and Niels De Graef's merge request that was used as a foundation. Every desktop that supports the Settings portal interface will be able to make use of that functionality.

Michael Terry announces

Multiplication Puzzle 15.0 is out, finally adding a portrait mode layout, making phone play more pleasant.

Alexander Vanhee says

This week Gradia got the largest update it will probably ever get. It most notably includes two core features:

  • Support for taking screenshots from within the app and launching via a custom keyboard shortcut that starts with the screenshot tool.
  • The ability to annotate images with staples like a pen and a text mode, but also some more domain-specific modes like "censor".

Thank you to all who contributed, including everyone who submitted translations.

You can find the app on Flathub

justinrdonnelly reports

I'm thrilled to announce the release of Bouncer! Bouncer is an application to help you choose the correct firewall zone for Wi-Fi networks. You may have seen other operating systems that, when you connect to a new Wi-Fi network, prompt for the type of network (e.g. home, public, work). That's what Bouncer does. When you choose the network type, it is associated with that network and automatically used in the future. This can be useful to keep people from connecting to your laptop while using coffee shop Wi-Fi!

Check it out on Flathub! Please note that there may be additional setup steps beyond just installation. Details are on Flathub and in the README.

[nyx] reports

This week, I released a template for developing GNOME applications using TypeScript!

What makes this template unique? It leverages esbuild to transpile TypeScript code into JavaScript, offering several advantages: the ability to use TypeScript paths for absolute imports, direct support for importing .ui files in your code (similar to the functionality provided by gjspack), seamless integration of npm dependencies (as long as they don't rely on Node.js or other runtimes), and support for modern syntax features like decorators.

In the future, I plan to develop a plugin for esbuild that will simplify the import of Blueprint files.

Without further delay, here are the links: GNOME TypeScript Template | GitHub Mirror

Crosswords

A crossword puzzle game and creator.

jrb announces

Crosswords 0.3.15 has been released (announcement)!

This is a quality-of-life release with a large number of bug fixes and improvements. It also includes the first version of the editor that can generate acrostic puzzles. You can download it at flathub, and it will be available in Fedora momentarily.

Highlights include:

  • Beta version of Acrostic editor
  • Use C-O to open files from everywhere in the game
  • Autodownload puzzle-sets on startup
  • Highlight the first letter of each clue answer for acrostics
  • Thumbnailer works with arrowwords
  • A cleaned up "Save As…" experience in the editor
  • Autofill selection vastly improved in the editor
  • Word list speedups and fixes
  • Barred puzzles render better
  • Dividers render correctly
  • Cell labels measure and layout text correctly

That last fix lets us display Arabic crosswords.

Happy Puzzling!

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

06 Jun 2025 12:00am GMT

05 Jun 2025

feedPlanet GNOME

Matthew Garrett: How Twitter could (somewhat) fix their encrypted DMs

As I wrote in my last post, Twitter's new encrypted DM infrastructure is pretty awful. But the amount of work required to make it somewhat better isn't large.

When Juicebox is used with HSMs, it supports encrypting the communication between the client and the backend. This is handled by generating a unique keypair for each HSM. The public key is provided to the client, while the private key remains within the HSM. Even if you can see the traffic sent to the HSM, it's encrypted using the Noise protocol and so the user's encrypted secret data can't be retrieved.

But this is only useful if you know that the public key corresponds to a private key in the HSM! Right now there's no way to know this, and it gets worse: the client doesn't have the public key built into it; it's supplied as the response to an API request made to Twitter's servers. Even if the current keys are associated with the HSMs, Twitter could swap them out with ones that aren't, terminate the encrypted connection at their endpoint, and then fake your query to the HSM and get the encrypted data that way. Worse, this could be done for specific targeted users, without any indication to the user that this has happened, making it almost impossible to detect in general.

This is at least partially fixable. Twitter could prove to a third party that their Juicebox keys were generated in an HSM, and the key material could be moved into clients. This makes attacking individual users more difficult (the backdoor code would need to be shipped in the public client), but can't easily help with the website version[1] even if a framework exists to analyse the clients and verify that the correct public keys are in use.

It's still worse than Signal. Use Signal.

[1] Since they could still just serve backdoored Javascript to specific users. This is, unfortunately, kind of an inherent problem when it comes to web-based clients - we don't have good frameworks to detect whether the site itself is malicious.


05 Jun 2025 1:18pm GMT

Matthew Garrett: Twitter's new encrypted DMs aren't better than the old ones

(Edit: Twitter could improve this significantly with very few changes - I wrote about that here. It's unclear why they'd launch without doing that, since it entirely defeats the point of using HSMs)

When Twitter[1] launched encrypted DMs a couple of years ago, it was the worst kind of end-to-end encrypted - technically e2ee, but in a way that made it relatively easy for Twitter to inject new encryption keys and get everyone's messages anyway. It was also lacking a whole bunch of features such as "sending pictures", so the entire thing was largely a waste of time. But a couple of days ago, Elon announced the arrival of "XChat", a new encrypted messaging platform "built on Rust with (Bitcoin style) encryption, whole new architecture". Maybe this time they've got it right?

tl;dr - no. Use Signal. Twitter can probably obtain your private keys, and admit that they can MITM you and have full access to your metadata.

The new approach is pretty similar to the old one in that it's based on pretty straightforward and well tested cryptographic primitives, but merely using good cryptography doesn't mean you end up with a good solution. This time they've pivoted away from using the underlying cryptographic primitives directly and into higher level abstractions, which is probably a good thing. They're using Libsodium's boxes for message encryption, which is, well, fine? It doesn't offer forward secrecy (if someone's private key is leaked then all existing messages can be decrypted) so it's a long way from the state of the art for a messaging client (Signal's had forward secrecy for over a decade!), but it's not inherently broken or anything. It is, however, written in C, not Rust[2].
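
For a sense of what that primitive gives you, here is a minimal libsodium box round-trip (my sketch, not Twitter's code); note that decryption needs nothing but the recipient's long-term secret key, which is precisely why a leaked key exposes every past message:

#include <sodium.h>
#include <cstring>
#include <vector>

int main() {
    if (sodium_init() < 0) return 1;

    // Long-term keypairs for both parties.
    unsigned char alice_pk[crypto_box_PUBLICKEYBYTES], alice_sk[crypto_box_SECRETKEYBYTES];
    unsigned char bob_pk[crypto_box_PUBLICKEYBYTES], bob_sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(alice_pk, alice_sk);
    crypto_box_keypair(bob_pk, bob_sk);

    const char* msg = "hello";
    const size_t msg_len = std::strlen(msg);
    unsigned char nonce[crypto_box_NONCEBYTES];
    randombytes_buf(nonce, sizeof nonce);

    // Encrypt from Alice to Bob.
    std::vector<unsigned char> ct(crypto_box_MACBYTES + msg_len);
    crypto_box_easy(ct.data(), reinterpret_cast<const unsigned char*>(msg),
                    msg_len, nonce, bob_pk, alice_sk);

    // Bob's long-term secret key alone decrypts this - and every other
    // message ever sent to bob_pk. That is the forward secrecy problem.
    std::vector<unsigned char> pt(msg_len);
    return crypto_box_open_easy(pt.data(), ct.data(), ct.size(), nonce,
                                alice_pk, bob_sk);
}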

That's about the extent of the good news. Twitter's old implementation involved clients generating keypairs and pushing the public key to Twitter. Each client (a physical device or a browser instance) had its own private key, and messages were simply encrypted to every public key associated with an account. This meant that new devices couldn't decrypt old messages, and also meant there was a maximum number of supported devices and terrible scaling issues and it was pretty bad. The new approach generates a keypair and then stores the private key using the Juicebox protocol. Other devices can then retrieve the private key.

Doesn't this mean Twitter has the private key? Well, no. There's a PIN involved, and the PIN is used to generate an encryption key. The stored copy of the private key is encrypted with that key, so if you don't know the PIN you can't decrypt the key. So we brute force the PIN, right? Juicebox actually protects against that - before the backend will hand over the encrypted key, you have to prove knowledge of the PIN to it (this is done in a clever way that doesn't directly reveal the PIN to the backend). If you ask for the key too many times while providing the wrong PIN, access is locked down.

But this is true only if the Juicebox backend is trustworthy. If the backend is controlled by someone untrustworthy[3] then they're going to be able to obtain the encrypted key material (even if it's in an HSM, they can simply watch what comes out of the HSM when the user authenticates if there's no validation of the HSM's keys). And now all they need is the PIN. Turning the PIN into an encryption key is done using the Argon2id key derivation function, using 32 iterations and a memory cost of 16MB (the Juicebox white paper says 16KB, but (a) that's laughably small and (b) the code says 16 * 1024 in an argument that takes kilobytes), which makes it computationally and moderately memory expensive to generate the encryption key used to decrypt the private key. How expensive? Well, on my (not very fast) laptop, that takes less than 0.2 seconds. How many attempts do I need to crack the PIN? Twitter's chosen to fix that to 4 digits, so a maximum of 10,000. You aren't going to need many machines running in parallel to bring this down to a very small amount of time, at which point private keys can, to a first approximation, be extracted at will.
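
To make the arithmetic concrete: the whole search space fits in a short loop (a sketch using libsodium's Argon2id binding with the parameters above; the zero salt and key length are placeholders, and the check against the stolen ciphertext is omitted, since that part depends on Juicebox's actual scheme):

#include <sodium.h>
#include <cstdio>

int main() {
    if (sodium_init() < 0) return 1;
    unsigned char salt[crypto_pwhash_SALTBYTES] = {0}; // placeholder salt
    unsigned char key[32];
    // 10,000 candidate PINs at ~0.2 s per Argon2id derivation is at most
    // ~2,000 core-seconds, and the loop parallelizes trivially.
    for (int pin = 0; pin < 10000; pin++) {
        char pw[5];
        std::snprintf(pw, sizeof pw, "%04d", pin);
        if (crypto_pwhash(key, sizeof key, pw, 4, salt,
                          32 /* opslimit */, 16 * 1024 * 1024 /* memlimit */,
                          crypto_pwhash_ALG_ARGON2ID13) != 0)
            return 1; // out of memory
        // Try `key` against the stolen encrypted private key here;
        // stop when it decrypts successfully (omitted).
    }
    return 0;
}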

Juicebox attempts to defend against this by supporting sharding your key over multiple backends, and only requiring a subset of those to recover the original. Twitter does seem to be making use of this - it uses three backends and requires data from at least two - but all the backends used are under x.com so are presumably under Twitter's direct control. Trusting the keystore without needing to trust whoever's hosting it requires a trustworthy communications mechanism between the client and the keystore. If the device you're talking to can prove that it's an HSM that implements the attempt-limiting protocol and has no other mechanism to export the data, this can be made to work. Signal makes use of something along these lines using Intel SGX for contact list and settings storage and recovery, and Google and Apple also have documentation about how they handle this in ways that make it difficult for them to obtain backed-up key material. Twitter has no documentation of this, and as far as I can tell does nothing to prove that the backend is in any way trustworthy. (Edit to add: The Juicebox API does support authenticated communication between the client and the HSM, but that relies on you having some way to prove that the public key you're presented with corresponds to a private key that only exists in the HSM. Twitter gives you the public key whenever you communicate with them, so even if they've implemented this properly you can't prove they haven't made up a new key and MITMed you the next time you retrieve your key)

On the plus side, Juicebox is written in Rust, so Elon's not 100% wrong. Just mostly wrong.

But ok, at least you've got viable end-to-end encryption even if someone can put in some (not all that much, really) effort to obtain your private key and render it all pointless? Actually no, since you're still relying on the Twitter server to give you the public key of the other party, and there's no out-of-band mechanism to do that or to verify the authenticity of that public key at present. Twitter can simply give you a public key where they control the private key, decrypt the message, and then reencrypt it with the intended recipient's key and pass it on. The support page makes it clear that this is a known shortcoming and that it'll be fixed at some point, but they said that about the original encrypted DM support and it never was, so that's probably dependent on whether Elon gets distracted by something else again. And the server knows who and when you're messaging even if they haven't bothered to break your private key, so there's a lot of metadata leakage.

Signal doesn't have these shortcomings. Use Signal.

[1] I'll respect their name change once Elon respects his daughter

[2] There are implementations written in Rust, but Twitter's using the C one with these JNI bindings

[3] Or someone nominally trustworthy but who's been compelled to act against your interests - even if Elon were absolutely committed to protecting all his users, his overarching goals for Twitter require him to have legal presence in multiple jurisdictions that are not necessarily above placing employees in physical danger if there's a perception that they could obtain someone's encryption keys


05 Jun 2025 11:02am GMT