08 Mar 2025

feedPlanet GNOME

Adetoye Anointing: More Than Code: Outreachy Gnome Experience

It has been a productive, prosperous, and career-building few months - from contemplating whether to apply for the contribution stage, to submitting my application at the last minute, to not finding a Go project, then sprinting through a Rust course after five days of deliberation. Eventually, I began actively contributing to librsvg in Rust, updated a documentation section, closed a couple of issues, and was ultimately selected for the Outreachy December 2024 - March 2025 cohort as an intern for the GNOME Foundation.

It has been a glorious journey, and I thank God for His love and grace throughout the application process up to this moment as I write this blog. I would love to delve into my journey to getting accepted into Outreachy, but since this blog is about reflecting on the experience as it wraps up, let's get to it.

Overcoming Fear and Doubt

You might think my fears began when I got accepted into the internship, but they actually started much earlier. Before even applying, I was hesitant. Then, when I got in for the contribution phase, I realized that the language I was most familiar with, Go, was not listed. I felt like I was stepping into a battlefield with thousands of applicants, and my current arsenal was irrelevant. I believed I would absolutely dominate with Go, but now I couldn't even find a project using it!

This fear lingered even after I got accepted. I kept wondering if I was going to mess things up terribly.
It takes time to master a programming language, and even more time to contribute to a large project. I worried about whether I could make meaningful contributions and whether I would ultimately fail.

And guess what? I did not fail. I'm still here, actively contributing to librsvg, and I plan to continue working on other GNOME projects. I'm now comfortable writing Rust, and most importantly, I've made huge progress on my project tasks. So how did I push past my fear? I initially didn't want to apply at all, but a lyric from Dave's song Survivor's Guilt stuck with me: "When you feel like givin' up, know you're close." Another saying that resonated with me was, "You never know if you'll fail or win if you don't try." I stopped seeing the application as a competition with others and instead embraced an open mindset: "I've always wanted to learn Rust, and this is a great opportunity." "I'm not the best at communication, but maybe I can grow in that area." Shifting my mindset from fear to opportunity helped me stay the course, and my fear of failing never materialized.

My Growth and Learning Process

For quite some time, I had been working exclusively with a single programming language, primarily building backend applications. However, my Outreachy internship experience opened me up to a whole new world of possibilities. Now, I program in Rust, and I have learned a lot about SVGs, the XML tree, text rendering, and much more.

My mentor has been incredibly supportive, and thanks to him, I believe I will be an excellent mentor when I find myself in a position to guide others. His approach to communication, active listening, and problem-solving has left a strong impression on me, and I've found myself subconsciously adopting his methods. I also picked up some useful Git tricks from him and improved my ability to visualize and break down complex problems.

I have grown in technical knowledge, soft skills, and networking - my connections within the open-source community have expanded significantly!

Project Progress and Next Steps

The project's core algorithms are now in place, including text-gathering, whitespace handling, text formatting, attribute collection, shaping, and more. The next step is to integrate these components to implement the full SVG2 text layout algorithm.

As my Outreachy internship with GNOME comes to an end today, I want to reflect on this incredible journey and express my gratitude to everyone who made it such a rewarding experience.

I am deeply grateful to God, the Outreachy organizers, my family, my mentor Federico (GNOME co-founder), Felipe Borges, and everyone who contributed to making this journey truly special. Thank you all for an unforgettable experience.

08 Mar 2025 4:31pm GMT

Carlos Garnacho: Embracing sysexts for system development under Silverblue

Due to my circumstances, I am perhaps interested in dogfooding a larger number of GNOME system/session components on a daily basis than the average person.

So far, I have been using jhbuild to help me with this deed, mostly in the form of jhbuild make to selectively build projects out of their git tree. See, there's a point in life where writing long-winded CLI commands stops making you feel smart and starts working the opposite way. jhbuild had a few advantages I liked:

This, combined with my habit of using Fedora Rawhide, also meant I did not need to rebuild the world to get up-to-date dependencies, keeping the number of miscellaneous modules built to a minimum.

This was all true even after Silverblue came around, and Florian unlocked the "run GNOME as built from toolbox" achievement. I adopted this methodology, but kept using jhbuild to build things inside that toolbox, for the sake of convenience.

Enter sysext-utils

Meanwhile, systemd sysexts came around as a way to install "extensions" to the base install, even over atomic distributions, paving a way for development of system components to happen in these distributions. More recently Martín Abente brought an excellent set of utilities to ease building such sysexts.
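For readers unfamiliar with the mechanism, here is roughly what a sysext looks like when assembled by hand. This is only a sketch (sysext-utils automates all of it), and the names and paths are illustrative:

mkdir -p myext/usr/local/bin
mkdir -p myext/usr/lib/extension-release.d
# The extension-release file must be named after the extension;
# ID=_any skips matching against the host's os-release.
echo 'ID=_any' > myext/usr/lib/extension-release.d/extension-release.myext
cp my-binary myext/usr/local/bin/
sudo cp -r myext /run/extensions/myext
sudo systemd-sysext merge      # overlay the extension's usr/ onto /usr
systemd-sysext list            # inspect active extensions
sudo systemd-sysext unmerge    # undo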

This is a great step towards sysexts as a developer testing method. However, there is a big drawback for users of atomic distributions: to build these sysexts you must have all the necessary build dependencies in your host system - basically, desecrating your small and lean atomic install with tens to hundreds of packages. While GNOME OS may come "with batteries included", this misses the point of Silverblue by a wide margin, where the base install is minimal and you are supposed to do development with toolbox, install apps with flatpak, etc etc.

What is necessary

Ideally, in these systems, we'd want:

  1. A toolbox matching the version of the host system.
  2. With all development tools and dependencies installed
  3. The sysexts to be created from inside the toolbox
  4. The sysexts to be installed in the host system
  5. But also, the installed sysexts need to be visible from inside the toolbox, so that we can build things depending on them

The most natural way to achieve both last points is building things so they install in /usr/local, as this will allow us to also mount this location from the host inside the toolbox, in order to build things that depend on our own sysexts.
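In practice this means configuring each module with /usr/local as its prefix from inside the toolbox. With meson, which most GNOME modules use, a sketch of the per-project steps would be:

meson setup _build --prefix=/usr/local
meson compile -C _build
DESTDIR=$PWD/_staging meson install -C _build
# ...then pack the staged tree into a sysext for the host.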

And last, I want an easy way to manage these projects that does not get in the middle of things, is fast to type, etc.

Introducing gg

So I've made a small script to help myself on these tasks. It can be installed at ~/.local/bin along with sysext-utils, and be used in a host shell to generate, install and generally manage a number of sysexts.

sysext-utils is almost there for this; however, I needed some local hacks to help me get by:

- Since these are installed at ~/.local but run with pkexec to do things as root, the Python library lookup paths had to be altered in the executable scripts (sysext-utils#10).
- At the moment they somewhat implicitly assume everything installs to /usr; I had to alter paths in the code to e.g. generate GSettings schemas at the right location (sysext-utils#11).

Hopefully these will eventually be sorted out. But with this I got 1) a pristine atomic setup, 2) my tooling in ~/.local, 3) the whole development environment in my home dir, and 4) a simple and fast way to manage a number of projects. Just about everything I ever wanted from jhbuild.

This tool is a hack to put things together, done mainly so it's intuitive and easy for me. I've been using it for a week so far with few regrets, except for the frequent password prompts. If you think it could be useful for you too, you're welcome to it.

08 Mar 2025 1:16pm GMT

07 Mar 2025

feedPlanet GNOME

Andy Wingo: whippet lab notebook: untagged mallocs, bis

Earlier this week I took an inventory of how Guile uses the Boehm-Demers-Weiser (BDW) garbage collector, with the goal of making sure that I had replacements for all uses lined up in Whippet. I categorized the uses into seven broad categories, and I was mostly satisfied that I have replacements for all except the last: I didn't know what to do with untagged allocations: those that contain arbitrary data, possibly full of pointers to other objects, and which don't have a header we can use to inspect their type.

But now I do! Today's note is about how we can support untagged allocations of a few different kinds in Whippet's mostly-marking collector.

inside and outside

Why bother supporting untagged allocations at all? Well, if I had my way, I wouldn't; I would just slog through Guile and fix all uses to be tagged. There are only a finite number of use sites and I could get to them all in a month or so.

The problem comes from uses of scm_gc_malloc outside libguile itself, in C extensions and embedding programs. These users are loath to adapt to any kind of change, and garbage-collection-related changes are the worst. So, somehow, we need to support these users if we are not to break the Guile community.

on intent

The problem with scm_gc_malloc, though, is that it is missing an expression of intent, notably as regards tagging. You can use it to allocate an object that has a tag and thus can be traced precisely, or you can use it to allocate, well, anything else. I think we will have to add an API for the tagged case and assume that anything that goes through scm_gc_malloc is requesting an untagged, conservatively-scanned block of memory. Similarly for scm_gc_malloc_pointerless: you could be allocating a tagged object that happens to not contain pointers, or you could be allocating an untagged array of whatever. A new API is needed there too for pointerless untagged allocations.
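To make that concrete, the split could look something like this. The first two signatures are Guile's existing ones; the tagged variants are invented names, purely for illustration:

/* Existing entry points: henceforth assumed to request untagged memory. */
void *scm_gc_malloc             (size_t size, const char *what);
void *scm_gc_malloc_pointerless (size_t size, const char *what);

/* Hypothetical new entry points that express tagged intent. */
void *scm_gc_malloc_tagged             (size_t size, const char *what);
void *scm_gc_malloc_tagged_pointerless (size_t size, const char *what);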

on data

Recall that the mostly-marking collector can be built in a number of different ways: it can support conservative and/or precise roots, it can trace the heap precisely or conservatively, it can be generational or not, and the collector can use multiple threads during pauses or not. Consider a basic configuration with precise roots. You can make tagged pointerless allocations just fine: the trace function for that tag is just trivial. You would like to extend the collector with the ability to make untagged pointerless allocations, for raw data. How to do this?

Consider first that when the collector goes to trace an object, it can't use bits inside the object to discriminate between the tagged and untagged cases. Fortunately though the main space of the mostly-marking collector has one metadata byte for each 16 bytes of payload. Of those 8 bits, 3 are used for the mark (five different states, allowing for future concurrent tracing), two for the precise field-logging write barrier, one to indicate whether the object is pinned or not, and one to indicate the end of the object, so that we can determine object bounds just by scanning the metadata byte array. That leaves 1 bit, and we can use it to indicate untagged pointerless allocations. Hooray!
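As a sketch, the metadata byte might be laid out like this; the bit positions and names are assumptions for illustration, not Whippet's actual definitions:

#include <stdint.h>

#define META_MARK_MASK        0x07  /* 3 bits: five mark states */
#define META_BARRIER_MASK     0x18  /* 2 bits: field-logging write barrier */
#define META_PINNED_BIT       0x20  /* 1 bit: object may not move */
#define META_END_BIT          0x40  /* 1 bit: end of the object */
#define META_UNTAGGED_RAW_BIT 0x80  /* the freed-up bit: untagged + pointerless */

/* The collector can now cheaply test whether a block is raw data. */
static inline int is_untagged_raw(uint8_t meta) {
  return (meta & META_UNTAGGED_RAW_BIT) != 0;
}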

However there is a wrinkle: when Whippet decides that it should evacuate an object, it tracks the evacuation state in the object itself; the embedder has to provide an implementation of a little state machine, allowing the collector to detect whether an object is forwarded or not, to claim an object for forwarding, to commit a forwarding pointer, and so on. We can't do that for raw data, because all bit states belong to the object, not the collector or the embedder. So, we have to set the "pinned" bit on the object, indicating that these objects can't move.

We could in theory manage the forwarding state in the metadata byte, but we don't have the bits to do that currently; maybe some day. For now, untagged pointerless allocations are pinned.

on slop

You might also want to support untagged allocations that contain pointers to other GC-managed objects. In this case you would want these untagged allocations to be scanned conservatively. We can do this, but if we do, it will pin all objects.

Thing is, conservative stack roots are a kind of sweet spot in language run-time design. You get to avoid constraining your compiler, you avoid a class of bugs related to rooting, and you can still support compaction of the heap.

How is this, you ask? Well, consider that you can move any object for which we can precisely enumerate the incoming references. This is trivially the case for precise roots and precise tracing. For conservative roots, we don't know whether a given edge is really an object reference or not, so we have to conservatively avoid moving those objects. But once you are done tracing conservative edges, any live object that hasn't yet been traced is fair game for evacuation, because none of its predecessors have yet been visited.

But once you add conservatively-traced objects back into the mix, you don't know when you are done tracing conservative edges; you could always discover another conservatively-traced object later in the trace, so you have to pin everything.

The good news, though, is that we have gained an easier migration path. I can now shove Whippet into Guile and get it running even before I have removed untagged allocations. Once I have done so, I will be able to allow for compaction / evacuation; things only get better from here.

Also as a side benefit, the mostly-marking collector's heap-conservative configurations are now faster, because we have metadata attached to objects which allows tracing to skip known-pointerless objects. This regains an optimization that BDW has long had via its GC_malloc_atomic, used in Guile since time out of mind.

fin

With support for untagged allocations, I think I am finally ready to start getting Whippet into Guile itself. Happy hacking, and see you on the other side!

07 Mar 2025 1:47pm GMT

Sam Thursfield: Media playback tablet running GNOME and postmarketOS

A couple of years ago I set up a simple and independent media streaming server for my Bandcamp music collection using a Raspberry Pi 4, Fedora IoT and Jellyfin. It works nicely and I don't have to pay any cloud rent to Spotify to listen to music at home.

But it's annoying having the music playback controls buried in my phone or laptop. How many times do you go to play a song and get distracted by a WhatsApp message instead?

So I started thinking about a tablet that would just control media playback. A tablet running a non-corporate operating system, because music is too important to allow Google to stick AI and adverts in the middle of it. Last month Pablo told me that postmarketOS had pretty decent support for a specific mainstream tablet, so I couldn't resist buying one second-hand and trying to set up GNOME on it for media playback.

Read on and I will tell you how the setup procedure went, what is working nicely and what we could still improve.

What is the Xiaomi Pad 5 Pro tablet like?

I've never owned a tablet so all I can tell you is this: it looks like a shiny black mirror. I couldn't find the power button at first, but it turns out to be on the top.

The device specs claim that it has an analog headphone output, which is not true. It does come with a USB-C to headphone adapter in the box, though.

It comes with an antagonistic Android-based OS that seems to constantly prompt you to sign in to things and accept various terms and conditions. I guess they really want to get to know you.

I paid 240€ for it second hand. The seller didn't do a factory reset before posting it to me, but I'm a good citizen so I wiped it for them, before anyone could try to commit online fraud using their digital identity.

How easy is it to install postmarketOS + GNOME on the Xiaomi Pad 5 Pro?

I work on systems software but I prefer to stay away from the hardware side of things. Give me a computer that at least can boot to a shell, please. I am not an expert in this stuff. So how did I do at installing a custom OS on an Android tablet?

Figuring out the display model

The hardest part of the process was actually the first step: getting root access on the device so that I could see what type of display panel it has.

Xiaomi tablets have some sort of "bootloader lock", but thankfully this device was already unlocked. If you ever look at purchasing a Xiaomi device, be very wary that Xiaomi might have locked the bootloader such that you can't run custom software on your device. Unlocking a locked bootloader seems to require their permission. This kind of thing is a big red flag when buying computers.

One popular tool to root an Android device is Team Win's TWRP. However it didn't have support for the Pad 5 Pro, so instead I used Magisk.

I found the rooting process with Magisk complicated. The only instructions I could find were in this video named "Xiaomi Pad 5 Rooting without the Use of TWRP | Magisk Manager" from Simply Tech-Key (Cris Apolinar). This gives you a two-step process, which requires a PC with the Android debugging tools 'adb' and 'fastboot' installed and set up.

Step 1: Download and patch the boot.img file

  1. On the PC, download the boot.img file from the stock firmware. (See below).
  2. Copy it onto the tablet.
  3. On the tablet, download and install the Magisk Manager app from the Magisk GitHub Releases page.
  4. Open the Magisk app and select "Install" to patch the boot.img file.
  5. Copy the patched boot.img off the tablet back to your PC and rename it to patched_boot.img.

The boot.img linked from the video didn't work for me. Instead I searched online for "xiaomi pad 5 pro stock firmware rom" and found one that worked that way.

It's important to remember that downloading and running random binaries off the internet is very dangerous. It's possible that someone pretends the file is one thing, when it's actually malware that will help them steal your digital identity. The best defence is to factory reset the tablet before you start, so that there's nothing on there to steal in the first place.

Step 2: Boot the patched boot.img on the tablet

  1. Ensure developer mode is enabled on the tablet: go to "About this Device" and tap the box that shows the OS version 7 times.
  2. Ensure USB debugging is enabled: find the "Developer settings" dialog in the settings window and enable if needed.
  3. On the PC, run adb reboot fastboot to reboot the tablet and reach the bootloader menu.
  4. Run fastboot flash boot patched_boot.img to flash the patched boot image to the boot partition.

At this point, if the boot.img file was good, you should see the device boot back to Android and it'll now be "rooted". So you can follow the instructions in the postmarketOS wiki page to figure out if your device has the BOE or the CSOT display. What a ride!

Install postmarketOS

If we can find a way to figure out the display without needing root access, it'll make the process substantially easier, because the remaining steps worked like a charm.

Following the wiki page, you first install pmbootstrap and run pmbootstrap init to configure the OS image.

Laptop running pmbootstrap

A note for Fedora Silverblue users: the bootstrap process doesn't work inside a Toolbx container. At some point it tries to create /dev in the rootfs using mknod and fails. You'll have to install pmbootstrap on the host and run it there.

Next you use pmbootstrap flasher to install the OS image to the correct partition.

I wanted to install to the system_b partition but I seemed to get an 'out of disk space' error. The partition is 3.14 GiB in size. So I flashed the OS to the userdata partition.
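For reference, the whole flow was roughly the following; the partition flag matches what I described above, and exact options may vary between pmbootstrap versions:

pmbootstrap init                                   # choose device, UI, options
pmbootstrap install                                # build the rootfs image
pmbootstrap flasher flash_rootfs --partition userdata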

The build and flashing process worked really well and I was surprised to see the postmarketOS boot screen so quickly.

Tablet showing postmarketOS boot screen

How well does GNOME work as a tablet interface?

The design side of GNOME have thought carefully about making GNOME work well on touch-screen devices. This doesn't mean specifically optimising it for touch-screen use, it's more about avoiding a hard requirement on you having a two-button mouse available.

To my knowledge, nobody is paying to optimise the "GNOME on tablets" experience right now. So it's certainly lacking in polish. In case it wasn't clear, this one is for the real headz.

Login to the machine was tricky because there's no on-screen keyboard on the GDM screen. You can work around that by SSH'ing to the machine directly and creating a GDM config file to automatically log in:

$ cat /etc/gdm/custom.conf 
# GDM configuration storage

[daemon]
AutomaticLogin=media
AutomaticLoginEnable=True

It wasn't possible to push the "Skip" button in initial setup, for whatever reason. But I just rebooted the system to get round that.

Tablet showing GNOME Shell with "welcome to postmarketOS edge" popup

Enough things work that I can already use the tablet for my purposes of playing back music from Jellyfin, from Bandcamp and from elsewhere on the web.

The built-in speakers audio output doesn't work, and connecting a USB-to-headphone adapter doesn't work either. What does work is Bluetooth audio, so I can play music that way already. [Update: as of 2025-03-07, built-in audio also works. I haven't investigated what changed]

I disabled the automatic screen lock, as this device is never leaving my house anyway. The screen seems to stay on and burn power quickly, which isn't great. I set the screen blank interval to 1 minute, which should save power, but I haven't found a nice way to "un-blank" the screen again. Touch events don't seem to do anything. At present I work around by pressing the power button (which suspends the device and stops audio), then pressing it again to resume, at which point the display comes back. [Update: see the comments; it's possible to reconfigure the power button so that it doesn't suspend the device].
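For reference, both of those tweaks can also be made from a shell using GNOME's standard schemas; this is just the command-line equivalent of what the Settings app does:

gsettings set org.gnome.desktop.screensaver lock-enabled false  # no automatic screen lock
gsettings set org.gnome.desktop.session idle-delay 60           # blank after 1 minute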

Apart from this, everything works surprisingly great. Wi-fi and Bluetooth are reliable. The display sometimes glitches when resuming from suspend but mostly works fine. Multitouch gestures work perfectly - this is the first time I've ever used GNOME with a touch screen and it's clear that there's a lot of polish. The system is fast. The Alpine + postmarketOS teams have done a great job packaging GNOME, which is commendable given that they had to literally port systemd.

What's next?

I'd like to figure out how to un-blank the screen without suspending and resuming the device.

It might be nice to fix audio output via the USB-C port. But more likely I might set up a DIY "smart speaker" network around the house, using single-board computers with decent DAC chips connected to real amplifiers. Then the tablet would become more of a remote control.

I already donate to postmarketOS on Opencollective.com, and I might increase the amount as I am really impressed by how well all of this has come together.

Meanwhile I'm finally able to hang out with my cat listening to my favourite Vladimir Chicken songs.


07 Mar 2025 11:48am GMT

This Week in GNOME: #190 Cross Platform

Update on what happened across the GNOME project in the week from February 28 to March 07.

GNOME Core Apps and Libraries

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

sp1rit announces

The GTK Android backend gained preliminary support for OpenGL. While not fully implemented yet, most applications that make use of Gtk.GLArea should work now and other applications should see noticeable performance improvements, especially on shadows.

Emmanuele Bassi reports

Thanks to the work of Arjan, GTK applications on macOS will use native window controls starting with the 4.18 release. To preserve backward compatibility, this behaviour is opt-in; application developers can use native controls by setting the GtkHeaderBar:use-native-controls property, either in code or in UI definition files.
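As a sketch, opting in from code could look like the following, using the property name given above with g_object_set (a dedicated setter may also exist; check the 4.18 API docs):

#include <gtk/gtk.h>

/* Opt in to native macOS window controls for one header bar. */
static void
use_native_controls (GtkHeaderBar *header_bar)
{
  g_object_set (header_bar, "use-native-controls", TRUE, NULL);
}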

GNOME Circle Apps and Libraries

Apostrophe

A distraction free Markdown editor.

Manu (he/they/she) says

I've been spending some time reworking all the regular expressions that Apostrophe uses for its markdown syntax highlighting and document statistics. They're now more accurate and less prone to performance issues. They should be easier to maintain from now on, as I've documented them thoroughly and written new tests. They now adhere more closely to pandoc's markdown flavour, as it's the default for Apostrophe, instead of commonmark.

Third Party Projects

Hari Rana | TheEvilSkeleton reports

Introducing Refine 0.5.0, the GNOME Tweaks alternative leveraging the data-driven and composition paradigms. This version re-adds the Document font option, and renames "Middle Click Paste" to "Middle Click to Paste Text" with an accompanying subtitle.

0.5.0 also adds the capability to rearrange the titlebar's window buttons. This new feature also lets you add the minimize and maximize buttons.

While we thoroughly tested right-to-left (RTL) direction and keyboard navigation with a screen reader, it's worth noting that we're no experts. We welcome feedback from those who use Refine in RTL and/or with a keyboard and screen reader.

You can get Refine 0.5.0 on Flathub

fabrialberio announces

Pins version 2.1 is now available! With this release the app grid will be more complete, thanks to fixes and improvements in loading apps, and more colorful, since Pins can now display app icons from non-standard locations. I have also added an option to show or hide system apps. Check out the app on Flathub: https://flathub.org/apps/io.github.fabrialberio.pinapp

Shell Extensions

Pedro Sader Azevedo reports

Foresight is a new GNOME Shell Extension that automagically opens the activities view on empty workspaces. It uses callbacks to monitor windows and workspaces (instead of actively checking on them on certain time intervals), which makes it very efficient and responsive.

Miscellaneous

Iverson Briones (any) says

Welcome onboard the GNOME l10n fleet, Filipino l10n team! Filipino, while having over 80 million speakers worldwide, had no localization effort up 'til now. But fret not - you can now join the newly baked Filipino l10n team and translate away. #ItsMoreFunInGNOME🇵🇭

The Loupe image viewer app was the first to be completely localized to Filipino this week - set your system language then grab Loupe's latest nightly build and take a peek! Second in the race is Audio Sharing, which also sports a complete localization in the latest nightly build. Last but not least, the GNOME UI for XDG desktop portals has also been fully localized. GNOME Software, Weather, and Amberol are next, with their localizations currently being molded in the oven. More apps and components to be 🇵🇭-ized soon!

P.S. Bisaya speakers, you will be next =)

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

07 Mar 2025 12:00am GMT

05 Mar 2025

feedPlanet GNOME

Michael Meeks: 2025-03-05 Wednesday

05 Mar 2025 11:34am GMT

04 Mar 2025

feedPlanet GNOME

Michael Meeks: 2025-03-04 Tuesday

04 Mar 2025 9:00pm GMT

Andy Wingo: whippet lab notebook: on untagged mallocs

Salutations, populations. Today's note is more of a work-in-progress than usual; I have been finally starting to look at getting Whippet into Guile, and there are some open questions.

inventory

I started by taking a look at how Guile uses the Boehm-Demers-Weiser collector's API, to make sure I had all my bases covered for an eventual switch to something that was not BDW. I think I have a good overview now, and have divided the parts of BDW-GC used by Guile into seven categories.

implicit uses

Firstly there are the ways in which Guile's run-time and compiler depend on BDW-GC's behavior, without actually using BDW-GC's API. By this I mean principally that we assume that any reference to a GC-managed object from any thread's stack will keep that object alive. The same goes for references originating in global variables, or static data segments more generally. Additionally, we rely on GC objects not to move: references to GC-managed objects in registers or stacks are valid across a GC boundary, even if those references are outside the GC-traced graph: all objects are pinned.

Some of these "uses" are internal to Guile's implementation itself, and thus amenable to being changed, albeit with some effort. However some escape into the wild via Guile's API, or, as in this case, as implicit behaviors; these are hard to change or evolve, which is why I am putting my hopes on Whippet's mostly-marking collector, which allows for conservative roots.

defensive uses

Then there are the uses of BDW-GC's API, not to accomplish a task, but to protect the mutator from the collector: GC_call_with_alloc_lock, explicitly enabling or disabling GC, calls to sigmask that take BDW-GC's use of POSIX signals into account, and so on. BDW-GC can stop any thread at any time, between any two instructions; for most users this is anodyne, but if ever you use weak references, things start to get really gnarly.

Of course a new collector would have its own constraints, but switching to cooperative instead of pre-emptive safepoints would be a welcome relief from this mess. On the other hand, we will require client code to explicitly mark their threads as inactive during calls in more cases, to ensure that all threads can promptly reach safepoints at all times. Swings and roundabouts?

precise tracing

Did you know that the Boehm collector allows for precise tracing? It does! It's slow and truly gnarly, but when you need precision, precise tracing is nice to have. (This is the GC_new_kind interface.) Guile uses it to mark Scheme stacks, allowing it to avoid treating unboxed locals as roots. When it loads compiled files, Guile also adds some slices of the mapped files to the root set. These interfaces will need to change a bit in a switch to Whippet but are ultimately internal, so that's fine.

What is not fine is that Guile allows C users to hook into precise tracing, notably via scm_smob_set_mark. This is not only the wrong interface, not allowing for copying collection, but these functions are just truly gnarly. I don't know what to do with them yet; are our external users ready to forgo this interface entirely? We have been working on them over time, but I am not sure.

reachability

Weak references, weak maps of various kinds: the implementation of these in terms of BDW's API is incredibly gnarly and ultimately unsatisfying. We will be able to replace all of these with ephemerons and tables of ephemerons, which are natively supported by Whippet. The same goes with finalizers.

The same goes for constructs built on top of finalizers, such as guardians; we'll get to reimplement these on top of nice Whippet-supplied primitives. Whippet allows for resuscitation of finalized objects, so all is good here.

misc

There is a long list of miscellanea: the interfaces to explicitly trigger GC, to get statistics, to control the number of marker threads, to initialize the GC; these will change, but all uses are internal, making it not a terribly big deal.

I should mention one API concern, which is that BDW's state is all implicit. For example, when you go to allocate, you don't pass the API a handle which you have obtained for your thread, and which might hold some thread-local freelists; BDW will instead load thread-local variables in its API. That's not as efficient as it could be and Whippet goes the explicit route, so there is some additional plumbing to do.

Finally I should mention the true miscellaneous BDW-GC function: GC_free. Guile exposes it via an API, scm_gc_free. It was already vestigial and we should just remove it, as it has no sensible semantics or implementation.

allocation

That brings me to what I wanted to write about today, but am going to have to finish tomorrow: the actual allocation routines. BDW-GC provides two, essentially: GC_malloc and GC_malloc_atomic. The difference is that "atomic" allocations don't refer to other GC-managed objects, and as such are well-suited to raw data. Otherwise you can think of atomic allocations as a pure optimization, given that BDW-GC mostly traces conservatively anyway.
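A minimal sketch of the distinction (the header path may vary between installs; often <gc.h> or <gc/gc.h>):

#include <gc.h>
#include <string.h>

struct pair { void *car, *cdr; };

void example (void) {
  /* May hold references to other GC-managed objects:
     the collector must scan its contents. */
  struct pair *p = GC_malloc (sizeof *p);

  /* Raw data with no pointers inside: the collector can skip
     scanning it, which also avoids false retention. */
  char *buf = GC_malloc_atomic (256);
  strcpy (buf, "no GC references in here");

  p->car = buf;
  p->cdr = NULL;
}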

From the perspective of a user of BDW-GC looking to switch away, there are two broad categories of allocations, tagged and untagged.

Tagged objects have attached metadata bits allowing their type to be inspected by the user later on. This is the happy path! We'll be able to write a gc_trace_object function that takes any object, does a switch on, say, some bits in the first word, dispatching to type-specific tracing code. As long as the object is sufficiently initialized by the time the next safepoint comes around, we're good, and given cooperative safepoints, the compiler should be able to ensure this invariant.
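Here is a minimal sketch of such a dispatch; the tag values, object layouts, and the trace callback's shape are all hypothetical, not Guile's or Whippet's actual definitions:

#include <stddef.h>
#include <stdint.h>

struct object { uintptr_t first_word; };
struct pair   { uintptr_t tag; struct object *car, *cdr; };
struct vector { uintptr_t tag; size_t len; struct object *vals[]; };

#define TAG_MASK 0x7
enum { TAG_PAIR = 1, TAG_VECTOR = 2, TAG_STRING = 3 };

typedef void (*trace_edge_fn)(struct object **edge, void *trace_data);

/* Dispatch on the tag bits of the first word, visiting every field
   that can hold a reference to another GC-managed object. */
static void gc_trace_object(struct object *obj, trace_edge_fn trace,
                            void *trace_data) {
  switch (obj->first_word & TAG_MASK) {
  case TAG_PAIR: {
    struct pair *p = (struct pair *) obj;
    trace(&p->car, trace_data);
    trace(&p->cdr, trace_data);
    break;
  }
  case TAG_VECTOR: {
    struct vector *v = (struct vector *) obj;
    for (size_t i = 0; i < v->len; i++)
      trace(&v->vals[i], trace_data);
    break;
  }
  case TAG_STRING: /* pointerless: no outgoing edges */
    break;
  }
}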

Then there are untagged allocations. Generally speaking, these are of two kinds: temporary and auxiliary. An example of a temporary allocation would be growable storage used by a C run-time routine, perhaps as an unbounded-sized alternative to alloca. Guile uses these a fair amount, as they compose well with non-local control flow as occurring for example in exception handling.

An auxiliary allocation on the other hand might be a data structure only referred to by the internals of a tagged object, but which itself never escapes to Scheme, so you never need to inquire about its type; it's convenient to have the lifetimes of these values managed by the GC, and when desired to have the GC automatically trace their contents. Some of these should just be folded into the allocations of the tagged objects themselves, to avoid pointer-chasing. Others are harder to change, notably for mutable objects. And the trouble is that for external users of scm_gc_malloc, I fear that we won't be able to migrate them over, as we don't know whether they are making tagged mallocs or not.

what is to be done?

One conventional way to handle untagged allocations is to manage to fit your data into other tagged data structures; V8 does this in many places with instances of FixedArray, for example, and Guile should do more of this. Otherwise, you make new tagged data types. In either case, all auxiliary data should be tagged.

I think there may be an alternative, which would be just to support the equivalent of untagged GC_malloc and GC_malloc_atomic; but for that, I am out of time today, so type at y'all tomorrow. Happy hacking!

04 Mar 2025 3:42pm GMT

28 Feb 2025

feedPlanet GNOME

Aryan Kaushik: Create Custom System Call on Linux 6.8

Ever wanted to create a custom system call? Whether it be as an assignment, just for fun or learning more about the kernel, system calls are a cool way to learn more about our system.

Note - crossposted from my article on Medium

Why follow this guide?

There are various guides on this topic, but the problem occurs due to the pace of kernel development. Most guides are now obsolete and throw a bunch of errors, hence I'm writing this post after going through the errors and solving them :)

Set system for kernel compile

On Red Hat / Fedora / Open Suse based systems, you can simply do

sudo dnf builddep kernel
sudo dnf install kernel-devel

On Debian / Ubuntu based

sudo apt-get install build-essential vim git cscope libncurses-dev libssl-dev bison flex

Get the kernel

Clone the kernel source tree. We'll be cloning specifically the v6.8 branch, but the instructions should work on newer ones as well (until the kernel devs change the process again).

git clone --depth=1 --branch v6.8 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Ideally, the cloned version should be equal to or higher than your current kernel version.

You can check the current kernel version through the command

uname -r

Create the new syscall

Perform the following

cd linux
make mrproper
mkdir hello
cd hello
touch hello.c
touch Makefile

This will create a folder called "hello" inside the downloaded kernel source code, and create two files in it - hello.c with the syscall code and Makefile with the rules on compiling the same.

Open hello.c in your favourite text editor and put the following code in it

#include <linux/kernel.h>
#include <linux/syscalls.h>

/* Defines the "hello" syscall; the 0 means it takes no arguments. */
SYSCALL_DEFINE0(hello)
{
        pr_info("Hello World\n");
        return 0;
}

It prints "Hello World" in the kernel log.

As per kernel.org docs

"SYSCALL_DEFINEn() macro rather than explicitly. The 'n' indicates the number of arguments to the system call, and the macro takes the system call name followed by the (type, name) pairs for the parameters as arguments."

As we are just going to print, we use n=0
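For comparison, a hypothetical syscall taking two arguments would use n=2 and list a (type, name) pair for each parameter:

/* Hypothetical example, not part of this guide's hello syscall. */
SYSCALL_DEFINE2(add_numbers, int, a, int, b)
{
        pr_info("sum is %d\n", a + b);
        return a + b;
}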

Now add the following to the Makefile

obj-y := hello.o

Now

cd ..
cd include/linux/

Open the file "syscalls.h" inside this directory, and add

asmlinkage long sys_hello(void);


This is a prototype for the syscall function we created earlier.

Open the file "Kbuild" in the kernel root (cd ../..) and to the bottom of it add

obj-y += hello/


This tells the kernel build system to also compile our newly included folder.

Once done, we then need to also add the syscall entry to the architecture-specific table.

Each CPU architecture could have specific syscalls and we need to let them know for which architecture ours is made.

For x86_64 the file is

arch/x86/entry/syscalls/syscall_64.tbl

Add your syscall entry there, keeping in mind to only use a free number and not use any numbers prohibited in the table comments.

For me 462 was free, so I added the new entry as such

462 common hello sys_hello


Here 462 is mapped to our syscall, the "common" ABI makes it available to both the 64-bit and x32 ABIs, our syscall is named hello, and its entry point is sys_hello.

Compiling and installing the new kernel

Perform the following commands

NOTE: I in no way or form guarantee the safety, security, integrity and stability of your system by following this guide. All instructions listed here reflect my own experience and don't guarantee the outcome on your systems. Proceed with caution and care.

Now that we have the legal stuff done, let's proceed ;)

cp /boot/config-$(uname -r) .config
make olddefconfig
make -j $(nproc)
sudo make -j $(nproc) modules_install
sudo make install

Here we are copying the currently booted kernel's config file, asking the build system to reuse its values and pick defaults for anything new. Then we build the kernel in parallel, using the number of cores reported by nproc. After that we install our custom kernel (at our own risk).

Kernel compilation takes a lot of time, so get a coffee or 10 and enjoy lines of text scrolling by on the terminal.

It can take a few hours based on system speed, so your mileage may vary. Your fan might also scream at this stage to keep temperatures in check (happened to me too).

The fun part, using the new syscall

Now that our syscall is baked into our kernel, reboot the system and make sure to select the new custom kernel from grub while booting


Once booted, let's write a C program to leverage the syscall

Create a file, maybe "test.c" with the following content

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
int main(void) {
  /* 462 is the number we picked in syscall_64.tbl */
  printf("%ld\n", syscall(462));
  return 0;
}

Here replace 462 with the number you chose for your syscall in the table.

Compile the program and then run it

make test
chmod +x test
./test

If all goes right, your terminal will print a "0" and the syscall output will be visible in the kernel logs.

Access the logs by dmesg

sudo dmesg | tail

And voila, you should be able to see your syscall message printed there.

Congratulations if you made it 🎉

Please again remember the following points:

28 Feb 2025 11:08am GMT

Alexandru Băluț: Practical intro to fiber-optic networks

I was looking into how to link a remote point in my mansion to the network, and checked how it could work with a fiber-optic connection, since the router had an SFP+ socket.

TL/DR: I'll go with a SFP+ bidi single-mode connection.

First of all, stay safe. Don't look directly into a fiber with your eye. The light/laser is not powerful, but why do it? You won't see anything anyway, as the light is in the infrared.

28 Feb 2025 7:30am GMT

Thibault Martin: Prosthetics that don't betray

Tech takes a central place in our lives. Banking and administrative tasks are happening more and more online. It's becoming increasingly difficult to get through life without a computer or a smartphone. They have become external organs necessary to live our lives.

Steve Jobs called the computer the bicycle for the mind. I believe computers & smartphones have become prosthetics, extensions of people that should unconditionally and entirely belong to them. We must produce devices and products the general public can trust.

Microsoft, Google and Apple are three American companies that build the operating systems our computers, phones, and servers run on. This American hegemony over ubiquitous devices is dangerous for all citizens worldwide, especially under an unpredictable, authoritarian American administration.

Producing devices and an operating system for them is a gigantic task. Fortunately, it is not necessary to start from zero. In this post I share what I think is the best foundation for a respectful operating system and how to get it into European, and maybe American, hands. In a follow-up post I will talk more about distribution channels for older devices.

[!warning] The rest of the world matters

In this post I take a European-centric view. The rest of the world matters, but I am not familiar with what their needs are nor how to address them.

We're building prosthetics

Prosthetics are extension of ourselves as individuals. They are deeply personal. We must ensure our devices & products are:

I believe that the GNOME project is one of the best placed to answer those challenges, especially when working in coordination with the excellent postmarketOS people who work on resurrecting older devices abandoned by their manufacturers. There is real stagnation in the computing industry that we must see as a social opportunity.

Constraints are good

GNOME is a computing environment aiming for simplicity and efficiency. Its opinionated approach benefits both users and developers:

GNOME is a solid foundation to build respectful tech on. It doesn't betray people by doing things behind their back. It aims for simplicity and stability, although it could use more user research to back design decisions if there were funding to do so, as has successfully been the case for GNOME 40.

Mobile matters

GNOME's Human Interface Guidelines and development tooling make it easy to run GNOME apps on mobile devices. Some volunteers are also working on making GNOME Shell (the "desktop" view) render well on mobile devices.

postmarketOS already offers it as one of the UIs you can install on your phone. With mobile taking over traditional computer usage, it is critical to consider the mobile side of computing too.

Hackability and safety

As an open source project, GNOME remains customizable by advanced users who know they are bringing unsupported changes, can break their system in the process, and deal with it. It doesn't make customization easy for those advanced users, because it doesn't optimize for them.

The project also has its fair share of criticism, some valid, and some not. I agree that sometimes the project can be too opinionated and rigid, optimizing for extreme consistency at the expense of user experience. For example, while I agree that system trays are suboptimal, they're also a pattern people have been used to for decades, and removing them is very frustrating for many.

But some criticism is also coming from people who want to tinker with their system and spend countless hours building a system that's the exact fit for their needs. Those are valid use cases, but GNOME is not built to serve them. GNOME aims to be easy to use for the general public, which includes people who are not tech-experts and don't want to be.

We're actually building prototypes

As mighty as the GNOME volunteers might be, there is still a long way before the general public can realistically use it. GNOME needs to become a fully fledged product shipped on mainstream devices, rather than an alternative system people install. It also needs to involve representatives of the people it intends to serve.

You just need to simply be tech-savvy

GNOME is not (yet) an end-user product. It is a desktop environment that needs to be shipped as part of a Linux distribution. There are many distributions to choose from. They are not shipping the same version of GNOME, and some patch it more or less heavily. This kind of fragmentation is one of the main factors holding the Linux desktop back.

The general public doesn't want to have to pick a distribution and bump into every edge cases that creates. They need a system that works predictably, that lets them install the apps they need, and that gives them safe ways to customize it as a user.

That means they need a system that doesn't let them shoot themselves in the foot in the name of customizability, and that prevents them from doing some things unless they sign with their blood that they know it could make it unusable. I share Adrian Vovk's vision for A Desktop for All and I think it's the best way to productize GNOME and make it usable by the general public.

People don't want to have to install an "alternative" system. They want to buy a computer or a smartphone and use it. For GNOME to become ubiquitous, it needs to be shipped on devices people can buy.

For GNOME to really take off, it needs to become a system people can use both in their personal life and at work. It must become a compelling product in enterprise deployments: to route enough money towards development and maintenance, to make it an attractive platform for vendors to build software for, and to make it an attractive platform for device manufacturers to ship.

What about the non tech-savvy?

GNOME aims to build a computing platform everyone can trust. But it doesn't have a clear, scalable governance model with representatives of those it serves. GNOME has rudimentary governance to define what is part of the project and what is not, thanks to its Release Team, but it is largely a do-ocracy, as highlighted in the Governance page of GNOME's Handbook as well as in GNOME Designer Tobias Bernard's series Community Power.

A do-ocracy is a very efficient way to onboard volunteers and empower people who can give away their free time to get things done fast. It is however not a great way to get work done on areas that matter to a minority who can't afford to give away free time or pay someone to work on it.

The GNOME Foundation is indeed not GNOME's vendor today, and it doesn't contribute the bulk of the design and code of the project. It maintains the infrastructure (technical and organizational) the project builds on. A critical, yet little visible task.

To be a meaningful, fair, inclusive project for more than engineers with spare time and spare computers, the project needs to improve in two areas:

  1. It needs a Product Committee to set a clear product direction so GNOME can meaningfully address the problems of its intended audience. The product needs a clear purpose, a clear audience, and a robust governance to enforce decisions. It needs a committee with representatives of the people it intends to serve, designers, and solution architects. Of course it also critically needs a healthy set of public and private organizations funding it.
  2. It needs a Development Team to implement the direction the committee has set. This means doing user research and design, technical design, implementing the software, doing advocacy work to promote the project to policymakers, manufacturers, private organizations' IT department and much more.

[!warning] Bikeshedding is a real risk

A Product Committee can be a useful structure for people to express their needs, draft a high-level and realistic solution with designers and solution architects, and test it. Designers and technical architects must remain in charge of designing and implementing the solution.

The GNOME Foundation appears as a natural host for these organs, especially since it's already taking care of the assets of the project like its infrastructure and trademark. A separate organization could more easily pull the project in a direction that serves its own interests.

Additionally, the GNOME Foundation taking on this kind of work doesn't conflict with the present do-ocracy, since volunteers and organizations could still work on what matters to them. But it would remain a major shift in the project's organization and would likely upset some volunteers who would feel that they have less control over the project.

I believe this is a necessary step to make the public and private sector invest in the project, generate stable employment for people working on it, and ultimately make GNOME have a systemic, positive impact on society.

[!warning] GNOME needs solution architects

The GNOME community has designers who have a good product vision. It is also full of experts on their module, but has a shortage of people with a good technical overview of the project, who can turn product issues into technical ones at the scale of the whole project.

So what now?

"The year of the Linux desktop" has become a meme now for a reason. The Linux community, if such a nebulous thing exists, is very good at solving technical problems. But building a project bigger than ourselves and putting it in the hands of the millions of people who need it is not just a technical problem.

Here are some critical next steps for the GNOME Community and Foundation to reclaim personal computing from the trifecta of tech behemoths, and fulfill an increasingly important need for democracies.

Learn from experience

Last year, a team of volunteers led by Sonny Piers and Tobias Bernard wrote a grant bid for the Sovereign Tech Fund, and got granted €1M. There are some major takeaways from this adventure.

At risk of stating the obvious, money does solve problems! The team tackled significant technical issues not just for GNOME but for the free desktop in general. I urge organizations and governments that take their digital independence seriously to contribute meaningfully to the project.

Finally and unsurprisingly, one-offs are not sustainable. The Foundation needs to build sustainable revenue streams from a diverse portfolio to grow its team. A €1M grant is extremely generous from a single organization. It was a massive effort for the Sovereign Tech Agency, and a significant part of their 2024 budget. But it is also far from enough to sustain a project like GNOME if every volunteer were paid, let alone paid a fair wage.

Tread carefully, change democratically

Governance and funding are a chicken and egg problem. Funders won't send money to the project if they are not confident that the project will use it wisely, and if they can't weigh in on the project's direction. Without money to support the effort, only volunteers can set up the technical governance processes on their spare time.

Governance changes must be done carefully though. Breaking the status quo without a plan comes with significant risks. It can demotivate current volunteers, make the project lose tractions for newcomers, and die before enough funding makes it to the project to sustain it. A lot of people have invested significant amounts of time and effort into GNOME, and this must be treated with respect.

Build a focused MVP

For the STF project, the GNOME Foundation relied on contractors and consultancies. To be fully operational and efficient, it must get into a position to hire people with the most critical skills. I believe right now the most critical profile is the solution architect one. With more revenue, developers and designers can join the team as it grows.

But for that to happen, the Foundation needs to:

  1. Define who GNOME is for in priority, bearing in mind that "everyone" doesn't exist.
  2. Build a team of representatives of that audience, and a product roadmap: what problems do these people have that GNOME could solve, how could GNOME solve it for them, how could people get to using GNOME, and what tradeoffs would they have to make when using GNOME.
  3. Build the technical roadmap (the steps to make it happen).
  4. Fundraise to implement the roadmap, factoring in the roadmap creation costs.
  5. Implement, and test

The Foundation can then build on this success and start engaging with policymakers, manufacturers, and vendors to extend its reach.

Alternative proposals

The model proposed has a significant benefit: it gives clarity. You can give money to the GNOME Foundation to contribute to the maintenance and evolution of GNOME project, instead of only supporting its infrastructure costs. It unlocks the possibility to fund user research that would also benefit all the downstreams.

It is possible to take the counter-point and argue that GNOME doesn't have to be an end-user product, but should remain an upstream that several organizations use for their own product and contribute to.

The "upstream only" model is status-quo, and the main advantage of this model is that it lets contributing organizations focus on what they need the most. The GNOME Foundation would need to scale down to a minimum to only support the shared assets and infrastructure of the project and minimize its expenses. Another (public?) organization would need to tackle the problem of making GNOME a well integrated end-user product.

In the "upstream only" model, there are two choices:

It's an investment

Building an operating system usable by the masses is a significant effort and requires a lot of expertise. It is tempting to think that since Microsoft, Google and Apple are already shipping several operating systems each, that we don't need one more.

However, let's remember that these are all American companies, building proprietary ecosystems that they have complete control over. In these uncertain times, Europe must not treat the USA as a direct enemy, but the current administration makes it clear that it would be reckless to continue treating it as an ally.

Building an international, transparent operating system that provides an open platform for people to use and for which developers can distribute apps will help secure EU's digital sovereignty and security, at a cost that wouldn't even make a dent in the budget. It's time for policymakers to take their responsibilities and not let America control the digital public space.

28 Feb 2025 7:00am GMT

This Week in GNOME: #189 Global Shortcuts

Update on what happened across the GNOME project in the week from February 21 to February 28.

GNOME Core Apps and Libraries

Emmanuele Bassi announces

Thanks to the work of many people across multiple components, the GNOME desktop portal now supports the Global Shortcuts interface. Applications can register desktop-wide shortcuts, and users can edit and revoke them through the system settings.

Emmanuele Bassi reports

Lukáš Tyrychtr finished working on the keyboard monitoring support in Mutter, Orca and libatspi. This means that Orca shortcuts will just finally work, including Caps lock as the Orca key, under Wayland, closing one of the last major blockers for the full transition away from X11.

Libmanette

Simple GObject game controller library.

Alice (she/her) reports

after a long period of inactivity, libmanette has been ported to gi-docgen. The new docs are available at https://gnome.pages.gitlab.gnome.org/libmanette/doc/main/

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

Matthias Clasen says

Both GTK and mutter support the cursor shape protocol now. This will improve the consistency of cursor themes and sizing, and the interoperability with other compositors.

Third Party Projects

Televido

Access German-language public TV

d-k-bo says

Televido 0.5.0 is available on Flathub.

Televido is an app to access German-language public broadcasting live streams and archives based on APIs provided by the MediathekView project.

As a major change in version 0.5.0, Televido now provides an integrated video player based on Clapper.

Gir.Core

Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

Marcel Tiede announces

GirCore 0.6.3 was released. This release adds some missing bits to GObject-2.0.Integration, adds IDisposable support on interfaces and fixes a bug in several async methods. Check the release notes for details.

Gameeky

Play, create and learn.

Martín Abente Lahaye announces

Gameeky 0.6.5 is out 🚀

This new release brings complete translations for Dutch and Hindi, thanks to Heimen Stoffels and Scrambled777 respectively. Additionally, it has upgraded its GNOME runtime and fixed some rendering issues.

If you're interested in this mix of video games, coding and learning, I invite you to watch Gameeky's GUADEC presentation from last year.

Archives

Create and view web archives

Evangelos "GeopJr" Paterakis announces

Archives 0.4.0 is out with the ability to archive a right-clicked link, all links in a text selection, and all links from a webpage individually. Additionally, it can now open ZIM files through Kiwix. Lastly, a search bar for in-page search was added, progress bars got redesigned, and all third-party tools were updated to their latest versions.

Documentation

Emmanuele Bassi says

gi-docgen, the GIR-based C documentation generator, got a new release. The most important change reflects a change in the GIR data introduced by gobject-introspection that allows "static" virtual functions (functions in the class structure that have no instance parameter). There are also some small QoL improvements to support narrow layouts, as well as cleanups in the generated HTML and styles. As usual, this release is available on both download.gnome.org and on PyPI.

Shell Extensions

Pedro Sader Azevedo announces

A few days ago, I released Blocker, my first GNOME Shell Extension.

It allows users to easily toggle system-wide content blocking. Behind the scenes, it uses a program named hBlock to change the computer's DNS settings, so it does not connect to domains that are known for serving adverts, trackers, and malware. This strategy of content blocking has its limitations, and you can read more about them here.
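For a sense of how this works under the hood (assuming hBlock's default hosts-file backend; the domains below are illustrative), blocking boils down to pointing known advert and tracker hosts at an unroutable address:

# Entries of the kind hBlock appends to /etc/hosts
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net

Anything resolving names through the system resolver then fails to reach those hosts, which is also why this approach can't filter adverts served from first-party domains.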

Give it a go if that sounds interesting to you!

Miscellaneous

Thib says

I published a blog post about how the EU can use GNOME to kickstart an international, transparent, collaborative operating system and reduce its dependency on American corporations, and what role the Foundation and community can play in it.

https://ergaster.org/posts/2025/02/28-prosthetics-that-dont-betray/

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

28 Feb 2025 12:00am GMT

27 Feb 2025

feedPlanet GNOME

Felipe Borges: GNOME is participating in Google Summer of Code 2025!

The Google Summer of Code 2025 mentoring organizations have just been announced and we are happy that GNOME's participation has been accepted!

If you are interested in an internship with GNOME, check gsoc.gnome.org for our project ideas and getting-started information.

27 Feb 2025 6:42pm GMT

Jussi Pakkanen: The price of statelessness is eternal waiting

Most CI systems I have seen have been stateless. That is, they start by getting a fresh Docker container (or building one from scratch), doing a Git checkout, building the thing and then throwing everything away. This is simple and mathematically pure, but really slow. The approach is further driven by cloud economics: CPU time and network transfers are cheap but storage is expensive (or at least it is possible to get almost infinite CI build time for open source projects, but not persistent storage). Storage is probably expensive because the cloud vendor needs to take care of things like backups, can no longer dispatch a task to any machine on the planet but only to the one that already has the required state, and so on.

How much could you reduce resource usage (or, if you prefer, improve CI build speed) by giving up on statelessness? Let's find out by running some tests. To get a reasonably large code base I used LLVM. I did not actually use any cloud or Docker in the tests, but I simulated them on a local media PC. I used 16 cores to compile and 4 to link (any more would saturate the disk). Tests were not run.

Baseline

Creating a Docker container with all the build deps takes a few minutes. Alternatively you can prebuild it, but then you need to download a 1 GB image.
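For reference, such a prebuilt image boils down to something like this (an illustrative Dockerfile; the real dependency list for LLVM is longer):

# Toolchain and build tools needed to configure and compile LLVM
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y \
    git cmake ninja-build clang lld ccache python3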

Doing a full Git checkout would be wasteful. There are basically three different ways of doing a partial checkout: shallow clone, blobless and treeless, each trading download time against disk space differently (rough invocations are sketched below).
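For concreteness, the three variants correspond to these invocations against the upstream repository:

# Shallow clone: only the latest commit, no history
git clone --depth=1 https://github.com/llvm/llvm-project.git
# Blobless clone: full history, file contents fetched on demand
git clone --filter=blob:none https://github.com/llvm/llvm-project.git
# Treeless clone: full commit history, trees and blobs fetched on demand
git clone --filter=tree:0 https://github.com/llvm/llvm-project.git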

Doing a full build from scratch takes 42 minutes.

With CCache

Using CCache in Docker is mostly a question of bind mounting a persistent directory in the container's cache directory. A from-scratch build with an up-to-date CCache takes 9m 30s.
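As a sketch, the bind mount amounts to something like the following (the image name and build script are hypothetical placeholders; note that older CCache versions default to ~/.ccache while newer ones use ~/.cache/ccache):

# Persist the CCache directory across otherwise stateless builds
docker run --rm \
  -v "$HOME/.cache/ccache:/root/.cache/ccache" \
  llvm-builddeps \
  ./ci-build.sh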

With stashed Git repo

Just like the CCache dir, the Git checkout can also be persisted outside the container. Doing a git pull on an existing full checkout takes only a few seconds. You can even mount the repo dir read-only to ensure that no state leaks from one build invocation to another.

With Danger Zone

One main thing a CI build ensures is that the code keeps on building when compiled from scratch. It is quite possible to have a bug in your build setup that manifests itself so that the build succeeds if a build directory has already been set up, but fails if you try to set it up from scratch. This was especially common back in ye olden times when people used to both write Makefiles by hand and to think that doing so was a good idea.

Nowadays build systems are much more reliable and this is not such a common issue (though it can definitely still occur). So what if you were willing to give up full from-scratch checks on merge requests? You could, for example, still have a daily build that validates that use case. For some organizations this would not be acceptable, but for others it might be a reasonable tradeoff. After all, why should a CI build take noticeably longer than an incremental build on the developer's own machine? If anything it should be faster, since servers are a lot beefier than developer laptops. So let's try it.

The implementation is the same as for CCache: you persist the build directory as well. To run the build you do a Git update, mount the repo, build dir and optionally the CCache dir into the container, and go.
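Putting it all together, a single CI invocation could look roughly like this (paths and image name are again placeholders):

# Update the persistent checkout, then build incrementally in the container
git -C /srv/llvm-project pull
docker run --rm \
  -v /srv/llvm-project:/src:ro \
  -v /srv/llvm-build:/build \
  -v /srv/ccache:/root/.cache/ccache \
  llvm-builddeps \
  ninja -C /build

The read-only mount on the source tree gives the isolation mentioned earlier: the build can consume state but can't leak any back into the checkout.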

I tested this by doing a git pull on the repo and then doing a rebuild. There were a couple of new commits, so this should be representative of real-world workloads. An incremental build took 8m 30s, whereas a from-scratch rebuild using a fully up-to-date cache took 10m 30s.

Conclusions

The wall clock times for the three main approaches were:

- Stateless from-scratch build: 42 minutes
- From-scratch build with persistent CCache: 9 minutes 30 seconds
- Incremental build with persistent build directory: 8 minutes 30 seconds

Similarly, the amount of data transferred shrinks dramatically: the stateless baseline downloads a roughly 1 GB container image plus a fresh checkout on every run, while the fully stateful setup only needs a git pull of the new commits.

The differences are quite clear. Just by using CCache the build time drops by almost 80%. Persisting the build dir reduces the time by a further 15%. It turns out that having machines dedicated to a specific task can be a lot more efficient than rebuilding the universe from atoms every time. Fancy that.

The final 2-minute improvement might not seem like much, but on the other hand, do you really want your developers to spend 2 minutes twiddling their thumbs for every merge request they create or update? I sure don't. Waiting for CI to finish is one of the most annoying things in software development.

27 Feb 2025 4:55pm GMT

26 Feb 2025

feedPlanet GNOME

Sebastian Pölsterl: scikit-survival 0.24.0 released

It's my pleasure to announce the release of scikit-survival 0.24.0.

A highlight of this release is the addition of cumulative_incidence_competing_risks(), which implements a non-parametric estimator of the cumulative incidence function in the presence of competing risks. In addition, the release adds support for scikit-learn 1.6, including support for missing values in ExtraSurvivalTrees.

Analysis of Competing Risks

In classical survival analysis, the focus is on the time until a specific event occurs. If no event is observed during the study period, the time of the event is considered censored. A common assumption is that censoring is non-informative, meaning that censored subjects have a similar prognosis to those who were not censored.

Competing risks arise when each subject can experience an event due to one of $K$ ($K \geq 2$) mutually exclusive causes, termed competing risks. Thus, the occurrence of one event prevents the occurrence of other events. For example, after a bone marrow transplant, a patient might relapse or die from transplant-related causes (transplant-related mortality). In this case, death from transplant-related mortality precludes relapse.

The bone marrow transplant data from Scrucca et al., Bone Marrow Transplantation (2007) includes data from 35 patients grouped into two cancer types: Acute Lymphoblastic Leukemia (ALL; coded as 0), and Acute Myeloid Leukemia (AML; coded as 1).

from sksurv.datasets import load_bmt
bmt_features, bmt_outcome = load_bmt()
diseases = bmt_features["dis"].cat.rename_categories(
    {"0": "ALL", "1": "AML"}
)
diseases.value_counts().to_frame()

     count
dis
AML     18
ALL     17

During the follow-up period, some patients might experience a relapse of the original leukemia or die while in remission (transplant related death). The outcome is defined similarly to standard time-to-event data, except that the event indicator specifies the type of event, where 0 always indicates censoring.

import pandas as pd

status_labels = {
    0: "Censored",
    1: "Transplant related mortality",
    2: "Relapse",
}
risks = pd.DataFrame.from_records(bmt_outcome).assign(
    label=lambda x: x["status"].replace(status_labels)
)
risks["label"].value_counts().to_frame()

                              count
label
Relapse                          15
Censored                         11
Transplant related mortality      9

The table above shows the number of observations for each status.

Non-parametric Estimator of the Cumulative Incidence Function

If the goal is to estimate the probability of relapse, transplant-related death is a competing risk event. This means that the occurrence of relapse prevents the occurrence of transplant-related death, and vice versa. We aim to estimate curves that illustrate how the likelihood of these events changes over time.

Let's begin by estimating the probability of relapse using the complement of the Kaplan-Meier estimator. With this approach, we treat deaths as censored observations. One minus the Kaplan-Meier estimator provides an estimate of the probability of relapse before time $t$.
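In symbols (the standard Kaplan-Meier form): with $d_i$ events and $n_i$ subjects at risk at time $t_i$, the estimated probability of relapse before time $t$ is one minus the usual Kaplan-Meier product,

$$1 - \hat{S}(t) = 1 - \prod_{i:\, t_i \leq t} \left(1 - \frac{d_i}{n_i}\right).$$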

import matplotlib.pyplot as plt
from sksurv.nonparametric import kaplan_meier_estimator
times, km_estimate = kaplan_meier_estimator(
    bmt_outcome["status"] == 1, bmt_outcome["ftime"]
)
plt.step(times, 1 - km_estimate, where="post")
plt.xlabel("time $t$")
plt.ylabel("Probability of relapsing before time $t$")
plt.ylim(0, 1)
plt.grid()

However, this approach has a significant drawback: considering death as a censoring event violates the assumption that censoring is non-informative. This is because patients who died from transplant-related mortality have a different prognosis than patients who did not experience any event. Therefore, the estimated probability of relapse is often biased.

The cause-specific cumulative incidence function (CIF) addresses this problem by estimating the cause-specific hazard of each event separately. The cumulative incidence function estimates the probability that the event of interest occurs before time $t$, and that it occurs before any of the competing causes of an event. In the bone marrow transplant dataset, the cumulative incidence function of relapse indicates the probability of relapse before time $t$, given that the patient has not died from other causes before time $t$.
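Concretely, the standard non-parametric estimate (the Aalen-Johansen form) accumulates, at each distinct event time $t_i$, the fraction of cause-$k$ events weighted by the probability of having remained event-free just before $t_i$:

$$\hat{F}_k(t) = \sum_{i:\, t_i \leq t} \hat{S}(t_{i-1}) \frac{d_{k,i}}{n_i},$$

where $d_{k,i}$ is the number of type-$k$ events at $t_i$, $n_i$ is the number at risk, and $\hat{S}$ is the Kaplan-Meier estimate of all-cause event-free survival. Summing $\hat{F}_k$ over all $K$ causes gives the total risk curve computed below.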

from sksurv.nonparametric import cumulative_incidence_competing_risks
times, cif_estimates = cumulative_incidence_competing_risks(
    bmt_outcome["status"], bmt_outcome["ftime"]
)
plt.step(times, cif_estimates[0], where="post", label="Total risk")
for i, cif in enumerate(cif_estimates[1:], start=1):
    plt.step(times, cif, where="post", label=status_labels[i])
plt.legend()
plt.xlabel("time $t$")
plt.ylabel("Probability of event before time $t$")
plt.ylim(0, 1)
plt.grid()

The plot shows the estimated probability of experiencing an event at time $t$ for both the individual risks and for the total risk.

Next, we want to estimate the cumulative incidence curves for the two cancer types - acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) - to examine how the probability of relapse depends on the original disease diagnosis.

_, axs = plt.subplots(2, 2, figsize=(7, 6), sharex=True, sharey=True)
for j, disease in enumerate(diseases.unique()):
    mask = diseases == disease
    event = bmt_outcome["status"][mask]
    time = bmt_outcome["ftime"][mask]
    times, cif_estimates, conf_int = cumulative_incidence_competing_risks(
        event,
        time,
        conf_type="log-log",
    )
    for i, (cif, ci, ax) in enumerate(
        zip(cif_estimates[1:], conf_int[1:], axs[:, j]), start=1
    ):
        ax.step(times, cif, where="post")
        ax.fill_between(times, ci[0], ci[1], alpha=0.25, step="post")
        ax.set_title(f"{disease}: {status_labels[i]}", size="small")
        ax.grid()

for ax in axs[-1, :]:
    ax.set_xlabel("time $t$")
for ax in axs[:, 0]:
    ax.set_ylim(0, 1)
    ax.set_ylabel("Probability of event before time $t$")

The left column shows the estimated cumulative incidence curves (solid lines) for patients diagnosed with ALL, while the right column shows the curves for patients diagnosed with AML, along with their 95% pointwise confidence intervals. The plot indicates that the estimated probability of relapse at $t=40$ days is more than three times higher for patients diagnosed with ALL compared to AML.

If you want to run the examples above yourself, you can execute them interactively in your browser using Binder.

26 Feb 2025 9:26pm GMT

25 Feb 2025

feedPlanet GNOME

Aryan Kaushik: GNOME in GSoC 2025

Hi Everyone!

Google Summer of Code 2025 is here! Interested in being a part of it? Read on!

The GNOME Foundation has been a part of almost every iteration of Google Summer of Code. We have applied for this year as well and are waiting for confirmation!

Our tentative project list is now available on the GNOME GSoC Website.

To make it easier for newcomers, we've built resources to help navigate both GSoC and the GNOME ecosystem.

You can also watch the awesome video on GNOME's impact and history on YouTube - GUADEC 2017 - Jonathan Blandford - The History of GNOME

From my experience, GNOME has been an incredible community filled with inspiring people. If you're looking to make an impact with one of the largest, oldest and most influential free software communities, I'd highly recommend giving GNOME a try.

You might just find a second home here while honing your skills alongside some of the best engineers around.

GNOME was my introduction to the larger FOSS community when I became a GSoC 2022 intern there. It has helped me on countless occasions, and I hope it will be the same for you!

If you have been a part of GNOME and want to contribute as a mentor, let us know as well; GNOME can always use some great mentors!

For any questions, feel free to join the chat :D

Also, you can check out my previous LinkedIn post on the GSoC process for more insights.

Looking forward to seeing you in GSoC 2025!

25 Feb 2025 2:48pm GMT