18 Nov 2025

Planet GNOME

Lennart Poettering: Mastodon Stories for systemd v258

Back on Sep 17 we released systemd v258 into the wild.

In the weeks leading up to that release I posted a series of posts to Mastodon about key new features in this release, under the #systemd258 hashtag. It was my intention to post a link list here on this blog right after completing that series, but I simply forgot! Hence, in case you aren't using Mastodon but would like to read up, here's a list of all 37 posts:

I intend to do a similar series of posts for the next systemd release (v259), hence if you haven't left tech Twitter for Mastodon yet, now is your opportunity.

We intend to shorten the release cycle a bit going forward, and in fact managed to tag v259-rc1 just yesterday, only 2 months after v258. Hence, my series for v259 will begin soon, under the #systemd259 hashtag.

In case you are interested, here is the corresponding blog story for systemd v257, and here for v256.

18 Nov 2025 12:00am GMT

17 Nov 2025


Christian Hergert: Status Week 46

Ptyxis

VTE

Foundry

Builder

CentOS

GtkSourceView

17 Nov 2025 8:24pm GMT

15 Nov 2025


Code of Conduct Committee: Transparency report for May 2025 to October 2025

GNOME's Code of Conduct is our community's shared standard of behavior for participants in GNOME. This is the Code of Conduct Committee's periodic summary report of its activities from May 2025 to October 2025.

The current members of the CoC Committee are:

All the members of the CoC Committee have completed Code of Conduct Incident Response training provided by Otter Tech, and are professionally trained to handle incident reports in GNOME community events.

The committee has an email address that can be used to send reports, conduct@gnome.org, as well as a website for report submission: https://conduct.gnome.org/

Reports

Since May 2025, the committee has received reports on a total of 25 possible incidents. Many of these were not actionable; all the incidents listed here were resolved during the reporting period.

Meetings of the CoC committee

The CoC committee has two meetings each month for general updates, and weekly ad-hoc meetings when they receive reports. There are also in-person meetings during GNOME events.

Ways to contact the CoC committee

15 Nov 2025 6:19pm GMT

14 Nov 2025


Allan Day: GNOME Foundation Update, 2025-11-14

This post is another in my series of GNOME Foundation updates, each of which provides an insight into what's happened at the GNOME Foundation over the past week. If you are new to these posts I would encourage you to look over some of the previous entries - there's a fair amount going on at the Foundation right now, and my previous posts provide some useful background.

Old business

It has been another busy week at the GNOME Foundation. Here's a quick summary:

Most of these items are a continuation of activities that I've described in more detail in previous posts, and I'm a bit light on new news this week, but I think that's to be expected sometimes!

Post

This is the tenth in my series of GNOME Foundation updates, and this seems like a good point to reflect on how they are going. The weekly posting cadence made sense in the beginning, and wrapping up the week on a Friday afternoon is quite enjoyable, but I am unsure whether a weekly post is too much reading for some.

So, I'd love to hear feedback: do you like the weekly updates, or do you find it hard to keep up? Would you prefer a higher-level monthly update? Do you like hearing about background operational details, or are you more interested in programs, events and announcements? Answers to these questions would be extremely welcome! Please let me know what you think, either in the comments or by reaching out on Matrix.

That's it from me for now. Thanks for reading, and have a great day.

14 Nov 2025 6:09pm GMT

Gedit Technology blog: Mid-November News

Misc news about the gedit text editor, mid-November edition!

Website: new design

Probably the highlight this month is the new design of the gedit website.

If it looks familiar to some of you, that's because it's an adaptation of the previous GtkSourceView website, developed in the old gnomeweb-wml repository. gnomeweb-wml (projects.gnome.org) is what predates all the wiki pages for Apps and Projects. The wiki has been retired, so another solution had to be found.

As for the timeline: projects.gnome.org was available until 2013/2014, when all the content was migrated to the wiki. The wiki was then retired in 2024.

Note that there are still rough edges on the gedit website, and more importantly, some effort is still needed to bring the old CSS stylesheet forward into the new(-ish) responsive web design world.

For the most nostalgic of you:

And for the least nostalgic of you:

What we can say is that the gedit project has stood the test of time!

Enter TeX: improved search and replace

Some context: I would like some day to unify the search and replace feature between Enter TeX and gedit. It needs to retain the best of each.

In Enter TeX it's a combined horizontal bar, something I would like in gedit too, to replace the dialog window that occludes part of the text.

In gedit the strengths include search-as-you-type and a history of past searches, both of which are missing in Enter TeX. (These are not the only things that need to be retained; the same workflows, keyboard shortcuts, etc. are also an integral part of the functionality.)

So to work towards that goal, I started in Enter TeX. I have already merged around 50 commits in the git repository for this change, rewriting some parts in C (from Vala) and improving the UI along the way. The code needs to be in C because it will be moved to libgedit-tepl so that it can be consumed by gedit easily.

Here is how it looks:

Screenshot of the search and replace in Enter TeX

Internal refactoring for GeditWindow and its statusbar

GeditWindow is what we can call a god class. It is too big, both in the number of lines and the number of instance variables.

So this month I've continued to refactor it, to extract a GeditWindowStatus class. There was already a GeditStatusbar class, but its features have now been moved to libgedit-tepl as TeplStatusbar.

GeditWindowStatus takes on the responsibility of creating the TeplStatusbar, filling it with the indicators and other buttons, and making the connection between GeditWindow and the current tab/document.

So as a result, GeditWindow is a little less omniscient ;-)

As a conclusion

gedit does not materialize out of empty space; it takes time to develop and maintain. To demonstrate your appreciation of this piece of software and help its future development, remember that you can fund the project. Your support is critical and much appreciated.

14 Nov 2025 10:00am GMT

This Week in GNOME: #225 Volume Levels

Update on what happened across the GNOME project in the week from November 07 to November 14.

GNOME Core Apps and Libraries

Settings

Configure various aspects of your GNOME desktop.

Zoey Ahmed 🏳️‍⚧️ 💙💜🩷 reports

The GNOME Settings volume levels page received a change to fix application inputs and outputs being hard to distinguish. The change separates applications' output and input streams into separate lists, and adds a microphone icon to the input list.

Thank you to Hari Rana and Matthijs Velsink for helping me with my first MR, and Jeff Fortin for nudging me to pursue this change!

volume_levels.png

Files

Providing a simple and integrated way of managing your files and browsing your file system.

Tomasz Hołubowicz says

Nautilus now supports Ctrl+Insert and Shift+Insert for copying and pasting files, matching the behavior of other GTK applications, browsers, and file managers like Dolphin and Thunar. These CUA keybindings were previously only functional in Nautilus's location bar, creating an inconsistency. The addition also benefits users with keyboards that have dedicated copy/paste keys, which typically emit these key combinations. These shortcuts are particularly useful for left-handed users and also allow the same bindings to work across applications, file managers, and terminal emulators, where Ctrl+Shift+C/V are typically required. The Ctrl+V paste shortcut is now also visible in the context menu.

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall announces

In https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4900, Philip Chimento has added a G_GNUC_FLAG_ENUM macro to GLib, which can be used in an enum definition to tell the compiler it's for a flag type (i.e. enum values which can be bitwise combined). This allows for better error reporting, particularly when building with -Wswitch (which everyone should be using!).

So now we can have enums which look like this, for example:

typedef enum {
  G_CONVERTER_NO_FLAGS     = 0,         /*< nick=none >*/
  G_CONVERTER_INPUT_AT_END = (1 << 0),  /*< nick=input-at-end >*/
  G_CONVERTER_FLUSH        = (1 << 1)   /*< nick=flush >*/
} G_GNUC_FLAG_ENUM GConverterFlags;

GNOME Circle Apps and Libraries

Gaphor

A simple UML and SysML modeling tool.

Dan Yeaw announces

Gaphor, the simple modeling tool, version 3.2.0 is now out! Some highlights include:

  • Troubleshooting info can now be found in the About dialog
  • Introduction of CSS classes: .item for all items you put on the diagram
  • Improved updates in Model Browser for attribute/parameter types
  • macOS: native window decorations and window menu

Grab the new version on Flathub.

Third Party Projects

Haydn reports

Typesetter, a minimalist desktop application for creating beautiful documents with Typst, is now available on Flathub.

Features include:

  • Adaptive, user-friendly interface: Focus on writing. Great for papers, reports, slides, books, and any structured writing.
  • Powered by Typst: A modern markup-based typesetting language, combining the simplicity of Markdown with the power of LaTeX.
  • Local-first: Your files stay on your machine. No cloud lock-in.
  • Package support: Works offline, but can fetch and update packages online when needed.
  • Automatic preview: See your rendered document update as you write.
  • Click-to-jump: Click on a part of the preview to jump to the corresponding position in the source file.
  • Centered scrolling: Keeps your writing visually anchored as you type.
  • Syntax highlighting: Makes your documents easier to read and edit.
  • Fast and native: Built in Rust and GTK following the GNOME human interface guidelines.

Get Typesetter on Flathub

typesetter-dark-preview.png

typesetter-light-editor.png

Vladimir Kosolapov announces

Lenspect 1.0.2 has just been released on Flathub

This version features some quality-of-life improvements:

  • Improved drag-and-drop design
  • Increased file size limit to 650MB
  • Added more result items from VirusTotal
  • Added notifications for background scans
  • Added file opener integration
  • Added key storage using secrets provider

Check out the project on GitHub

lenspect.png

GNOME Websites

Sophie (she/her) reports

The API for accessing information about GNOME projects has moved from apps.gnome.org to static.gnome.org/catalog. Everything based on the old API links has to move to the new links. The format of the API has also changed slightly.

Pages like apps.gnome.org, welcome.gnome.org, developer.gnome.org/components/, and others are based on the API data. The separation will help with maintainability of the code.

More information can be found in the catalog's git repository.

Shell Extensions

Dudu Maroja reports

The 2 Wallpapers GNOME extension is a neat tool that changes your wallpaper whenever you open a window. You can choose to set a darker, blurry, desaturated, or completely different image, whatever suits your preference. This extension was designed to help you focus on your active windows while letting your desktop shine when you want it.

The main idea behind this extension is to allow the use of transparent windows without relying on heavy processing or on-the-fly effects like blur, which can consume too much battery or GPU resources.

Grab it here: 2 Wallpapers Extension

dagimg-dot says

I have been working on Veil, a modern successor to the Hide Items extension, which lets you hide all or chosen items on the GNOME panel, with an auto-hide feature and smooth animations. You can check out the demo on GNOME's reddit: https://www.reddit.com/r/gnome/comments/1orr1co/veil_a_cleaner_quieter_gnome_panel_hide_items/

Dmy3k announces

Adaptive Brightness Extension

This week the extension received a big update to preferences UI.

Interactive Brightness Configuration

  • You can now customize how your screen brightness responds to different lighting conditions using an easy-to-use graphical interface
  • Configure brightness levels for 5 different light ranges (from night to bright outdoor)
  • See a visual graph showing your brightness curve

Improved Settings Layout

  • Settings are now organized into 3 clear tabs: Calibration, Preview, and Keyboard
  • Each lighting condition can be expanded to adjust its range and brightness level
  • Live preview shows you exactly how brightness will respond to ambient light

Better Keyboard Backlight Control

  • Choose specific lighting conditions where keyboard backlight turns on (instead of just on/off)

Available at extensions.gnome.org and github.

gnome_extensions_adaptive_brightness_prefs.png

Miscellaneous

GNOME OS

The GNOME operating system, development and testing platform

Ada Magicat ❤️🧡🤍🩷💜 reports

Tulip Blossom from Project Bluefin has been working on building bootc images of different Linux systems, including GNOME OS. To ensure bootc users have the best experience possible with our system, Jordan Petridis and Valentin David from the GNOME OS team are working on building an OCI image that can be directly used by bootc. It is currently a work in progress, but we expect to land it soon. This collaboration is a great opportunity to expand our community and contributor base, and to share our vision for how to build operating systems.

Note that this does not represent a change in our plans for GNOME OS itself: it will continue using the same systemd tools for deploying and updating the system.

gnomeos-bootc.png

Ada Magicat ❤️🧡🤍🩷💜 reports

In Ignacy's update on his Digital Wellbeing work this week, you might have noticed he shared the progress of his work in a complete system image. That image is based on GNOME OS and built on the same infrastructure as our main images.

This shows the power of GNOME OS as a development platform, especially for features that involve changes in many different parts of our stack. It also allows anyone with a machine, virtual or physical, to test these new features more easily than ever before.

We hope to further improve our tools so that they are useful to more developers and make it easier and more convenient to test changes like this.

GNOME Foundation

Allan Day says

Another weekly Foundation update is available this week, with a summary of everything that's been happening at the GNOME Foundation. It's been a mixed week, with a Board meeting, ongoing finance work, GNOME.Asia preparations, and digital wellbeing planning.

Digital Wellbeing Project

Ignacy Kuchciński (ignapk) announces

As part of the Digital Wellbeing project, sponsored by the GNOME Foundation, there is an initiative to redesign the Parental Controls to bring it on par with modern GNOME apps and implement new features such as Screen Time monitoring, Bedtime Schedule and Web Filtering. Recently the child account overview gained screen time usage information, the Screen Time page was added with session limits controls, the wellbeing panel in Settings was integrated with parental controls, and screen limits were introduced in the Shell. There's more to come, see https://blogs.gnome.org/ignapk/2025/11/10/digital-wellbeing-contract-screen-time-limits/ for more information.

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

14 Nov 2025 12:00am GMT

Martin Pitt: Learning about MCP servers and AI-assisted GitHub issue triage

Today is another Red Hat day of learning. I've been hearing about MCP (Model Context Protocol) servers for a while now - the idea of giving AI assistants standardized "eyes and arms" to interact with external tools and data sources. I tried it out, starting with a toy example and then moving on to something actually useful for my day job. First steps: Querying photo EXIF data I started with a local MCP server to query my photo library.

14 Nov 2025 12:00am GMT

13 Nov 2025


Andy Wingo: the last couple years in v8's garbage collector

Let's talk about memory management! Following up on my article about 5 years of developments in V8's garbage collector, today I'd like to bring that up to date with what went down in V8's GC over the last couple years.

methodololology

I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it's mostly a Google affair.

Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8's GC over this time, and I'm going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let's take a deeper look at each of these in turn.

the sandbox

There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory).

But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate "trusted space", which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates, you might want multiple cages, there are "shared" variants of the other spaces, for use in shared-memory multi-threading, executable code spaces with embedded object references, and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox: sandboxed code is prevented by the hardware from writing memory outside the sandbox.

Leaning into the "attacker can write anything in their address space" threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can't trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

The best and most amusing instance of this phenomenon is related to integers. Google's style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!

oilpan

It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but that for that, you'd need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn't work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that was mostly the same as before, but in which any given page can opt out of evacuation-based collection.

What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone "direct handles" in V8 itself.

The funny thing is that I don't think any of this is shipping yet; or, if it is, it's only in a Finch trial to a minority of users or something. I am looking forward with interest to seeing a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

shared-memory multi-threading

JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don't know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here (probably it doesn't come to 20%), but wiring this up has been a whole thing.

I will mention just one patch here that I found to be funny. So with pointer compression, an object's fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about alignment of double-holding objects that it mostly ignores via unaligned loads.

Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.

side quests

Right, we've covered what to me are the main stories of V8's GC over the past couple years. But let me mention a few funny side quests that I saw.

the heuristics two-step

This one I find to be hilariousad. Tragicomical. Anyway I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space can you reasonably reserve, what to do on hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do this all in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that Youtube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

mutex mayhem

Toon Verwaest noticed that V8 was exhibiting many more context switches on MacOS than Safari, and identified V8's use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on MacOS. Then implemented adaptive locking on all platforms. Then... removed it all and switched to abseil.

Personally, I am delighted to see this patch series, I wouldn't have thought that there was juice to squeeze in V8's use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

ta-ta, third-party heap

It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case makes an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

fin

So what's next? I don't know; it's been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker's full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!

13 Nov 2025 3:21pm GMT

Jussi Pakkanen: Creating valid PDF/A-4 with CapyPDF

PDF/A is a specific version of PDF designed for long term archival of electronic data. The idea being that PDF/A files are both self contained and fully specified, so they can be opened in the future without any loss of fidelity.

Implementing PDF/A export is complicated by the fact that the specification is an ISO standard, which is not publicly available. Fortunately, there are PDF/A validators that will tell you if (and sometimes how) your generated PDF/A is invalid. So, given sufficient patience, you can keep throwing PDF files at the validator, fixing the issues reported and repeating this loop over and over until validation passes. Like this:

This will be available in the next release of CapyPDF.

13 Nov 2025 12:49pm GMT

Jiri Eischmann: How We Streamed OpenAlt on Vhsky.cz

The blog post was originally published on my Czech blog.

When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it's a skill we should maintain ourselves.

To be honest, it's bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it's common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt, a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn't quite the stress test needed to prove we could handle broadcasting an entire conference.

For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don't have insight into this part of the process, so I won't focus on it. Michal's job was to get the streams to our server; our job was to get them to the viewers.

OpenAlt's AV background with running streams. Author: Michal Stanke.

Stress Test

We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it's not entirely dedicated to PeerTube. It has to share the server with other OSCloud services (OSCloud is a community hosting of open source web services).

We hadn't been limited by performance until then, but seven 1440p streams were truly at the edge of the server's capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don't change the resolution, you still need to transcode the video to leverage useful distribution features, which I'll cover later. The 480p resolution was intended for mobile devices and slow connections.

Remote Runner

We knew the Vhsky.cz server alone couldn't handle it. Fortunately, PeerTube allows for the use of "remote runners". The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it's not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn't tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we'd better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

Load on the runner server during the stress test. Author: Adam Štrauch.

Smart Video Distribution

Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn't much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn't grow at the same rate.


A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server's bandwidth; for a live stream, it was 75%:

If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
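The arithmetic above can be written out as a quick sanity check. The ~10 Mbit/s per-viewer cost of a 1440p stream is my assumption, inferred from the "1 Gbps could serve at most 100 viewers" figure earlier; the 75% live-stream saving is the ratio from the PeerTube developers' stress test:

```python
# Rough capacity estimate for the OpenAlt live streams.
# STREAM_MBPS is an assumed per-viewer cost for 1440p, inferred from
# "1 Gbps could serve a maximum of 100 viewers" in the text above.

LINK_MBPS = 1000        # total server bandwidth
OVERHEAD_MBPS = 200     # other services, stream ingest, runner traffic
STREAM_MBPS = 10        # assumed cost of one 1440p viewer
P2P_SAVINGS = 0.75      # share of live traffic served peer-to-peer

def max_viewers(p2p_savings):
    """How many 1440p viewers the server itself can still feed."""
    effective_cost = STREAM_MBPS * (1 - p2p_savings)
    return int((LINK_MBPS - OVERHEAD_MBPS) // effective_cost)

print(max_viewers(0.0))          # no P2P help: 80 viewers
print(max_viewers(P2P_SAVINGS))  # with 75% offload: 320 viewers
```

With the stress test's 75% saving, the same link serves four times as many viewers, which is where the "over 300 at 1440p" figure comes from.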

Live Operation

On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.
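The ingest side is a standard RTMP push. Any RTMP-capable tool works (OBS, ffmpeg, etc.); as one hypothetical example, an ffmpeg invocation might look like this — the endpoint path and stream key are placeholders, since PeerTube shows the real RTMP URL and key when you create a live:

```shell
# Push a source to PeerTube's RTMP ingest. The URL and STREAM_KEY are
# placeholders taken from the live's settings page, not real values.
ffmpeg -re -i input.mkv \
  -c:v libx264 -preset veryfast -b:v 8M -g 60 \
  -c:a aac -b:a 160k \
  -f flv "rtmp://vhsky.cz/live/STREAM_KEY"
```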

In practice, the server's bandwidth savings were large even with just 5 peers on a single stream and resolution.

Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don't think is a problem. After all, we're not broadcasting a sports event.

Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

Minor Problems

We did, however, run into minor problems and gained the kind of experience you can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player decided that delivery of stream chunks was falling behind and preemptively switched to a lower resolution. Increasing the player's buffer added slightly to the stream delay, but it significantly reduced the switching to the lower resolution.

Subjectively, even 480p wasn't a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn't even have noticed as a problem if I wasn't focusing on it. I could imagine streaming only in 480p if necessary. But it's clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn't support bulk uploads. However, tools exist for this, and we'd like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn't the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For these, a bar was displayed with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

A small interlude - my talk about PeerTube at this year's OpenAlt. Streamed, of course, via PeerTube:

Thanks and Support

I think that for our very first time doing this, it turned out very well, and I'm glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud's accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google's infrastructure, but it doesn't run for free either.

13 Nov 2025 11:37am GMT

12 Nov 2025


Michael Meeks: 2025-11-12 Wednesday

12 Nov 2025 9:00pm GMT

11 Nov 2025


Michael Meeks: 2025-11-11 Tuesday

11 Nov 2025 9:00pm GMT

10 Nov 2025


Christian Hergert: Status Week 45

Ptyxis

VTE

Libdex

Foundry

Builder

Flathub

Text Editor

Libpanel

Manuals

10 Nov 2025 8:32pm GMT

Ignacy Kuchciński: Digital Wellbeing Contract: Screen Time Limits

It's been four months since my last Digital Wellbeing update. In that previous post I talked about the goals of the Digital Wellbeing project. I also described our progress improving and extending the functionality of the GNOME Parental Controls application, as well as redesigning the application to meet the current design guidelines.

Introducing Screen Time Limits

Following our work on the Parental Controls app, the next major work item was to implement screen time limits, giving parents the ability to check their child's screen time usage, set time limits, and lock the child's account outside of a specified curfew. This feature actually spanned *three* different GNOME projects:

Of the three above, the Parental Controls and Shell changes have already been merged, while the Settings integration has been through informal review during the bi-weekly Settings meeting and adjusted to the feedback, so it's only a matter of time before it reaches the main branch as well. You can find screenshots of the added functionality below, and the reference designs can be found in the app-mockups and os-mockups tickets.

Child screen usage

When viewing a managed account, a summary of screen time is shown, along with actions to access additional settings for restrictions and filtering.

Child account view with added screen time overview and action for more options

The Screen Time view shows an overview of the child's account's screen time as well as controls which mirror those of the Settings panel to control screen limits and downtime for the child.

Screen Time page with detailed screen time records and time limit controls

Settings integration

On the Settings side, a child account will see a banner in the Wellbeing panel that lets them know some settings cannot be changed, with a link to the Parental Controls app.

Wellbeing panel with a banner informing that limits can only be changed in Parental Controls

Screen limits in GNOME Shell

We have implemented the locking mechanism in GNOME Shell. When a Screen Time limit is reached, the session locks, so that the child can't use the computer for the rest of the day.

Following is a screen cast of the Shell functionality:

Preventing children from unlocking has not been implemented yet. Fortunately, the hardest part was implementing the framework for the rest of the code, so hopefully the easier graphical change will take less time to implement and the next update will come much sooner than this one.

GNOME OS images

You don't have to take my word for it, especially since one can notice I've had to cut the recording at one point (forgot that one can't switch users in the lock screen :P) - you can check out all of these features in the very same GNOME OS live image I've used in the recording, that you can either run in GNOME Boxes, or try on your hardware if you know what you're doing 🙂

Malcontent changes

While all of these user-facing changes look cool, none of them would actually be possible without the malcontent backend, which Philip Withnall has been working on. While the daily schedule had already been implemented, the daily session limit had to be added, as well as a malcontent timer daemon API for Shell to use. There have been many other improvements: a web filtering daemon has been added, which I'll use in the future to implement the Web Filtering page in the Parental Controls app.

Conclusion

Our work for the GNOME Foundation is funded by Endless and Dalio Philanthropies, so kudos to them! I want to thank Florian Müllner for his patience during the merge request review, which was very educational for me, and for answering all of my Shell development questions. I also want to thank Matthijs Velsink and Felipe Borges for finding time to review the Settings integration.

Now that this foundation has been laid, we'll focus on finishing the last remaining bit of session limits support in Shell: tweaking the appearance of the lock screen when the limit is reached, implementing the ignore button for extending the screen limit, and adding notifications. After that comes Web Filtering support in Parental Controls. Until the next update!

10 Nov 2025 5:31pm GMT

Luis Villa: Three LLM-assisted projects

Some notes on my first serious coding projects in something like 20 years, possibly longer. If you're curious what these projects mean, more thoughts over on the OpenML.fyi newsletter.

TLDR

A GitHub contribution graph, showing a lot of activity in the past three weeks after virtually none the rest of the year.

News, Fixed

The "Fix The News" newsletter is a pillar of my mental health these days, bringing me news that the world is not entirely going to hell in a handbasket. And my 9yo has repeatedly noted that our family news diet is "broken" in exactly the way Fix The News is supposed to fix: hugely negative, hugely US-centric. So I asked Claude to create a "newspaper" version of FTN: a two-page PDF of some highlights. It was a hit.

So I've now been working with Claude Code to create and gradually improve a four-days-a-week "News, Fixed" newspaper. This has been super-fun for the whole family: my wife has made various suggestions over my shoulder, my son devours it every morning, and it's the first serious coding project I've tackled in ages. It is almost entirely strictly personal (it still has hard-coded Duke Basketball schedules) but nevertheless is public and FOSS. (It is even my first use of reuse.software, and also of SonarQube Server!)

Example newspaper here.

No matter how far removed you are from practical coding experience, I cannot recommend enough finding a simple, fun project like this that scratches a human itch in your life, and using the project to experiment with the new code tools.

Getting Things Done assistant

While working on News, Fixed, a friend pointed out Steve Yegge's "beads", which reimagines software issue tracking as an LLM-centric activity: JSON-centric, tracked in git, etc. At around the same time, I was also pointed at Superpowers: essentially, canned "skills" like "teach the LLM, temporarily, how to brainstorm".

The two of these together in my mind screamed "do this for your overwhelmed todo list". I've long practiced various bastardized versions of Getting Things Done, but one of the hangups has been that I'm inconsistent about doing the daily/weekly/nth-ly reviews that good GTD really relies on. I might skip a step, or not look through all my huge "someday-maybe" list, or… any of many reasons one can be tired and human when faced with a wall of text. Also, while there are many tools out there to do GTD, in my experience they either make some of the hardest parts (like the reviews) your problem, or they don't quite fit with how I want to do GTD, or both. Hacking on my own prompts to manage the reviews seems to fit these needs to a T.

I currently use Amazing Marvin as my main GTD tool. It is funky and weird and I've stuck with it much longer than any other task tracker I've ever used. So what I've done so far:

This is all read-only right now because of limitations in the Marvin API, but for various reasons I'm not yet ready to embark on building my own entire UI. So this will do for now. This code, therefore, is very specific to me. The prompts, on the other hand…

Note that my emphasis is not on "do tasks"; it is on helping me stay on priority. Less "chief of staff", more "executive assistant": both incredibly valuable when done well, but different roles. This is different from some of the use examples for Yegge's Beads, which really are built around agents.

Also note: the results have been outstanding. I'm getting more easily into my doing zone, I think largely because I have less anxiety about staring at the Giant Wall of Tasks that defines the life of any high-level IC. And my projects are better organized and todos feel more accurate than they have been in a long time, possibly ever.

a note on LLMs and issue/TODO tracking

It is worth noting that while LLMs are probabilistic/lossy, so they can't find the "perfect" next TODO to work on, that's OK. Personal TODO and software issue tracking are inherently subjective, probabilistic activities: there is no objectively perfect "next best thing to work on", "most important thing to work on", etc. So the fact that an LLM is only probabilistic in identifying the next task to work on is fine; no human can do substantially better. In fact I'm pretty sure that once an issue list is past a certain point, the LLM is likely to be able to do better, if (and like many things LLM, this is a big if) you can provide it with documented standards explaining how you want to do prioritization. (Literally one of the first things I did at my first job was write standards on how to prioritize bugs, the forerunner of this doc, so I have strong opinions, and experience, here.)

Skills for license "concluded"

While at a recent Linux Foundation event, I was shocked to realize how many very smart people haven't internalized the skills/prompts/context stuff. It's either "you chat with it" or "you train a model". This is not their fault; it is hard to keep up!

Of course this came up most keenly in the context of the age-old problem of "how do I tell what license an open source project is under". In other words, what is the difference between "I have scanned this" and "I have reached the zen state of SPDX's 'concluded' field".

So … yes, I've started playing with scripts and prompts on this. It's much less far along than the other two projects above, but I think it could be very fruitful if structured correctly. Some potentially big benefits above and beyond the traditional scanning and/or throw-a-lawyer-at-it approaches:

ClearlyDefined offers test data on this, by the way - I'm really looking forward to seeing if this can be made actually reliable or not. (And then we can hook up reuse.software on the backend to actually improve the upstream metadata…)

But even then, I may not ever release this. There's a lot of real risks here and I still haven't thought them through enough to be comfortable with them. That's true even though I think the industry has persistently overstated its ability to reach useful conclusions about licensing, since it so persistently insists on doing licensing analysis without ever talking to maintainers.

More to come?

I'm sure there will be more of these. That said, one of the interesting temptations of this is that it is very hard to say "something is done" because it is so easy to add more. (eg, once my personal homebrew News Fixed is done… why not turn it into a webapp? once my GTD scripts are done… why not port the backend? etc. etc.) So we'll see how that goes.

10 Nov 2025 7:24am GMT

07 Nov 2025


Allan Day: GNOME Foundation Update, 2025-11-07

It's Friday, so it's time to provide an update on what's been happening at the GNOME Foundation over the past week. Here's my summary of the main activities and events, covering what both Board and staff members have been up to.

GNOME.Asia

I mentioned GNOME.Asia 2025 in my last post, but I'll mention it again since it's only a month until the event in Tokyo, which is being co-hosted with LibreOffice Asia.

As you'd expect, there is a lot of activity happening as GNOME.Asia 2025 approaches. Kristi has been busy with a plethora of organisational tasks, including scheduling, printing, planning for the day trip, and more.

Travel has also been a focus this week. The Travel Committee has approved sponsorship for a number of attendees, and we have moved on to providing assistance to those who need documentation for visas.

Finally, registration is now open! There are two registration sites: one for in-person attendees, and one for remote attendees. If you plan on attending, please do take the time to register!

Transitions

This week was a big week for us, with the announcement of Rosanna's departure from the organisation. Internally, transition arrangements have been in progress for a little while, with responsibilities being redistributed, accounts being handed over, and infrastructure that was physically managed by Rosanna being replaced (such as our mailing address and phone number). This work continued this week.

I'd like to thank Rosanna for her extremely helpful assistance during this transition. I'd also like to thank everyone who has pitched in this week, particularly around travel (thank you Kristi, Julian, Maria, Asmit!), as well as Cassidy and Arun for picking up tasks as they have arisen.

The Foundation is running smoothly despite our recent staffing change. Payments are being processed quickly and reliably, events and sysadmin work are happening as normal, and accounting tasks are being taken care of. I'm also confident that we'll continue to work reliably and effectively as we move forward. There are improvements planned that will help with this, such as streamlining our financial systems and processes.

Ongoing tasks

It has become a common refrain in my updates that there is lots going on behind the scenes that doesn't make it into these posts. This week I thought that I'd call some of those more routine activities out, so readers can get a sense of what those background tasks are.

It turns out that there is indeed quite a lot of them, so I've broken them down into sections.

Finances and accounting

It's the beginning of the month, which is when most invoices tend to get submitted to us, so this week has involved a fair amount of payments processing. We use a mix of platforms for payments, and have a shared tracker for payments tasks. At the time of writing all invoices received since the beginning of the month have been paid, except for a couple of items where we needed additional information.

As mentioned in previous posts, we are in the process of deploying a set of improvements to our banking arrangements, and this continued this week. The changes are coming in bit by bit, and there are tasks for us to do at each step. It will be a number of weeks before the changes are completed.

Dawn who joined us last week has been doing research as part of her work to improve our finance systems. This has involved doing calls with team members and stakeholders, and is nearly complete.

Meetings!

Kristi booked the room for our regular pre-FOSDEM Advisory Board meeting, and I've invited representatives. Thanks to everyone who has sent an RSVP so far!

Next week we have another regular Board meeting scheduled, so there has been the routine work of preparing the agenda and sending out invitations.

Sysadmin work

Bart has been busy as usual, and it's hard to capture everything he does. Recent activity includes improvements to donate.gnome.org, improvements to Flathub build pipelines, and working through a troublesome issue with the geolocation data used by GNOME apps.

That's it for this week! Thanks for reading, and see you next week.

07 Nov 2025 5:46pm GMT