25 Nov 2025

feedPlanet GNOME

Sam Thursfield: Status update — 23rd November 2025

Good morning.

I am writing this from a high-speed train heading towards Madrid, en route to Manchester. I have a mild hangover and a two-hundred-page printout of "The STPA Handbook"… so I will have no problem sleeping through the journey. I think the only thing keeping me awake is the stunning view.

Sadly I haven't got time to go all the way by train; in Madrid I will transfer to easyJet. It is indeed easy compared to trying to get from Madrid into France by train. Apparently this is mainly the fault of France's SNCF.

On the Spain side, fair play. The ministro de fomento (I think this translates as "guy in charge of trains"?) just announced major works in Barcelona, including a new station in La Sagrera with space for more trains than they have now, more direct access from Madrid, and a speed boost via some new type of railway sleeper, which would raise the top speed from the current 300 km/h to 350 km/h. There are also some changes in Madrid, which would reduce the transfer time when arriving from the west and heading out further east. You can argue with many things about the trains in Spain… perhaps it would be useful if the regional trains here ran more than once per day… but you can't argue with the commitment to fast inter-city travel.

If only we had similar investment to fix the cross border links between Spain and France, which are something of a joke. Engineers around the world will know this story. The problem is social: two different organizations, who speak different languages, have to agree on something. There is already a perfectly usable modern train line across the border. How many trains per day? Uh… two. Hope you planned your trip in advance because they're fully booked next week.

Anyway, this isn't meant to be a post on the status of the railways of western Europe.

Digital Resilience Forum

Last month I hopped on another Madrid-bound train to attend the Digital Resilience Forum. It's a one day conference organized by Bitergia who you might know as world leaders in open source community analysis.

I have mixed feelings about "community metrics" projects. As Nick Wellnhofer said regarding libxml2, when you participate as a volunteer in a project that is being monitored, it's easy to feel like you're being somehow manipulated by the corporations who sponsor these things. How come you guys will spend time and money analyzing my project's development processes and Git history, but you won't spend time actually fixing bugs and making improvements upstream? As the ffmpeg developers said: how come you will pay top-calibre security researchers to read our code and find very specific exploits, but then wait for volunteers to fix them?

The Bitergia team are great people who genuinely care about open source, and I really enjoyed the conference. The main themes were: digital sovereignty, geopolitics, the rise of open source, and that XKCD where all our digital infrastructure depends on a single unpaid volunteer in Nebraska (https://xkcd.com/2347/). (Coincidentally, one of the Bitergia guys actually does live in Nebraska.)

It was a day in a world I am not used to participating in: less engineering, more politics and campaigning. Yes, the Sovereign Tech Agency were around. We played a cool role-play game simulating various hypothetical software crises that might happen in the year 2027 (spoiler: in most cases a vendor-neutral, state-funded organization focused on open source was able to save the day :-). It is amazing what they've done so far with a relatively small investment, but it is a small organization, and they maintain that citizens of every country should be campaigning and organizing to set up an equivalent. Let's not tie the health of open source infrastructure too closely to German politics.

Also present: various campaign groups with "Open" at the start of their name: OpenForum Europe, OpenUK, OpenIreland, OpenRail. When I think about the future of Free Software platforms, such as our beloved GNOME, my mind always goes to funding contributors. There's very little money here, while Apple and Microsoft have nearly all of it, and I feel like GNOME still succeeds largely thanks to the evenings and weekends of a small core of dedicated hackers, including some whose day job involves working on some other part of GNOME. It's a bit depressing sometimes to see things this way, because the global economy gets more unequal every day, and how do you convince people who are already squeezed for cash to pay for something that's freely available online? How do you get students facing a super competitive job market to hack on GTK instead of studying for university exams?

There's another side which I talk about less, and that's education. There are more desktop Linux users than ever - apparently 5% of all desktop users or something - but there's still very little agreement on, or understanding of, what "open source" is. Most computer users couldn't tell you what an "operating system" does, and don't know why "source code" can be an interesting thing to share and modify.

I don't like to espouse any dogmatic rule that the right way to solve any problem is to release software under the GPLv3. I think the problems society has today with technology come from over-complexity and under-study. (See also my rant from last month.) To tackle that, it is important to have software infrastructure like drivers and compilers available under free software licenses. The Free Software movement has spent the last 40 years doing a pretty amazing job of that, and I think it's surprising how widely software engineers accept that as normal and fight to maintain it. Things could easily be worse. But this effort is one part of a larger problem: helping those people who think of themselves as "non-technical" to understand the fundamentals of computing and not see it as a magic box. Most people alive today have learned to read and write one or more languages, to do mathematics, to operate a car, to build spreadsheets, and to operate a smartphone. Most people I know under 45 have learned to prompt a large language model in the last few years.

With a basic grounding in how a computer operates, you can understand what an operating system does. And then you can see that whoever controls your OS has complete control over your digital life. And you will start to think twice about leaving that control to Apple, Google and Microsoft - big piles of cash where the concept of "ethics" barely has a name.

Reading was once a special skill reserved largely for monks. And it was difficult: we only started putting spaces between words later on. Now everyone knows what a capital letter is. We need to teach people how computers work, we need to stop making them so complicated, and then the idea of open development will come into focus for everyone.

(And yes, I realize this sounds a bit like the permacomputing manifesto.)

Codethink work

This is a long rant, isn't it? My train only just left Zamora and I haven't fallen asleep yet, so there's more to come.

I had a nice few months hacking on Endless OS 7, which has progressed from an experiment to a working system, bootable on bare metal, albeit with various open issues that block a stable release for now. The overview docs in the repo tell you how to play with it.

This is now fully in the hands of the team at Endless, and my next move is going to be in some internal research that has been ongoing for a number of years. Not much of it is secret, in fact quite a lot is being developed in the open, and it relates in part to regulatory compliance and safety-critical software systems.

Codethink dedicates more to open source than most companies its size. We never have trouble getting sponsorship for events like GUADEC. But I do wish I could spend more time maintaining open infrastructure that I use every day, like, you know, GNOME.

This project isn't going to solve that tomorrow, but it does occupy an interesting space at the intersection between industry and open source. The education gap I talked about above is very much present in some industries where we work. Back in February a guy in a German car firm told me, "Nobody here wants open source. What they want is somebody to blame when the thing goes wrong."

Open source software comes with a big disclaimer that says, roughly, that if it breaks you get to keep both pieces. You get to blame yourself.

And that's a good thing! The people who understand a final, integrated system are the only people who can really define "correct behaviour". If you've worked in the same industries I have you might recognise a common anti-pattern: teams who spend all their time arguing about ownership of a particular bug, where team A are convinced it's a misbehaviour of component B and team B will try to prove the exact opposite. Meanwhile nobody spends the 15 minutes it would take to actually fix the bug. Another anti-pattern: team A would love to fix the bug in component B, but team B won't let them even look at the source code. This happens muuuuuuuch more than you might think.

So on this project we're not trying to teach the world how computers work, but we are trying to increase adoption and understanding at least in the software industry. There are some interesting ideas: looking at software systems from new angles. This is where STPA comes in, by the way - it's a way of breaking a system down not into components but into one or more control loops. It's going to take a while to make sense of everything in this new space… but you can expect some more 1500-word blog posts on the topic.
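For the curious, here is a minimal sketch of the control-loop view STPA takes. This is purely illustrative and not from the project above: the names (`ControlLoop`, the cruise-control example) are hypothetical, and the guide phrases are the four standard categories STPA uses when hunting for unsafe control actions.

```python
# Illustrative sketch only: modelling a system as a control loop rather
# than a set of components, as STPA does. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ControlLoop:
    controller: str                # entity issuing control actions
    controlled_process: str        # entity being controlled
    control_actions: list = field(default_factory=list)  # commands downward
    feedback: list = field(default_factory=list)         # observations upward

# STPA identifies unsafe control actions (UCAs) by checking each control
# action against four standard guide phrases.
UCA_GUIDE_PHRASES = [
    "not provided when needed",
    "provided when it causes a hazard",
    "provided too early, too late, or out of order",
    "stopped too soon or applied too long",
]

def enumerate_ucas(loop: ControlLoop):
    """Yield every (control action, guide phrase) pair to be assessed."""
    for action in loop.control_actions:
        for phrase in UCA_GUIDE_PHRASES:
            yield f"'{action}' {phrase}"

cruise = ControlLoop(
    controller="cruise control ECU",
    controlled_process="engine throttle",
    control_actions=["increase throttle", "decrease throttle"],
    feedback=["wheel speed", "throttle position"],
)
candidates = list(enumerate_ucas(cruise))
# 2 actions x 4 guide phrases = 8 candidate UCAs to analyse
```

The point of the exercise is that hazards fall out of the *interaction* between controller and process (missing feedback, mistimed commands), not out of any single component failing.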

25 Nov 2025 12:03am GMT

24 Nov 2025

feedPlanet GNOME

Christian Hergert: Status Week 47

Ptyxis

Foundry

Builder

Manuals

Systemd

CentOS

GTK

GNOME Settings Daemon

GLib

Red Hat

This week also had me spending a significant amount of time on Red Hat related things.

24 Nov 2025 7:17pm GMT

Jussi Pakkanen: 3D models in PDF documents

PDF can do a lot of things. One of them is embedding 3D models in the file and displaying them. The user can orient them freely in 3D space and even choose how they should be rendered (wireframe, solid, etc). The main use case for this is engineering applications.

Supporting 3D annotations is, as expected, unexpectedly difficult because:

  1. No open source PDF viewer seems to support 3D models.
  2. Even though the format specification is available, no open source software seems to support generating files in this format (by which I mean Blender does not do it by default). [1]

But, again, given sufficient effort and submitting data to not-at-all-sketchy-looking 3D model conversion web sites, you can get 3D annotations to work. Almost.

As you can probably tell, the picture above is not a screenshot. I had to take it with a cell phone camera, because while Acrobat Reader can open the file and display the result, it hard crashes before you can open the Windows screenshot tool.

[1] Update: apparently KiCad nightly can export U3D files that can be used in PDFs.

24 Nov 2025 6:13pm GMT

21 Nov 2025

feedPlanet GNOME

Jakub Steiner: 12 months instead of 12 minutes

Hey Kids! Other than raving about GNOME.org being static HTML, there's one more aspect I'd like to get back to in this writing exercise called a blog post.

Share card gets updated every release too

I've recently come across an appalling genAI website for a project I hold dearly, so I thought I'd give a glimpse of how we used to do things in the olden days. It is probably not going to be done this way anymore in the enshittified timeline we ended up in. The two options available these days are: a quickly generated slop website, or no website at all, because privately owned social media is where it's at.

The wanna-be-catchy title of this post comes from the fact that the website underwent numerous iterations (iteration is the core principle of good design) spanning over a year before we introduced the redesign.

So how did we end up with a 3D model of a laptop for the hero image on the GNOME website, rather than something generated in a couple of seconds and a small town's worth of drinking water, or a simple SVG illustration?

The hero image is static now, but it used to be a scroll-based animation in the early days. It could have become a simple vector-style illustration, but I really enjoy the light interaction of the screen and the laptop, especially between the light and dark variants. Toggling dark mode has been my favorite fidget spinner.

Creating light/dark variants is a bit tedious to do manually every release, but automating it is still a bit too hard to pull off (the part where screenshots of a nightly OS are taken). There's also the fun of picking a theme for the screenshot rather than doing the same thing over and over. Doing the screenshotting manually meant automating the rest, as a 6-month cycle is enough time to forget how things are done. The process is held together with duct tape, I mean a Python script, that renders the website image assets from the few screenshots captured using GNOME OS running inside Boxes. Two great invisible things made by amazing individuals that could go away in an instant, and that thought gives me a dose of anxiety.

This does take a minute to render on a laptop (CPU only Cycles), but is a matter of a single invocation and a git commit. So far it has survived a couple of Blender releases, so fingers crossed for the future.

Sophie has recently been looking into translations, so we might reconsider that 3D approach if translated screenshots become viable (and have them contained in an SVG similar to how os.gnome.org is done). So far the 3D hero has always been in sync with the release, unlike in our Wordpress days. Fingers crossed.

21 Nov 2025 7:44am GMT

This Week in GNOME: #226 Exporting Events

Update on what happened across the GNOME project in the week from November 14 to November 21.

GNOME Core Apps and Libraries

Calendar

A simple calendar application.

Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ says

Thanks to FineFindus, who previously worked on exporting events as .ics files, GNOME Calendar can now export calendars as .ics files, courtesy of merge request !615! This will be available in GNOME 50.

export-calendar-button-row.png

Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ says

After two long and painful years, several design iterations, and more than 50 rebases, we finally merged the infamous, trauma-inducing merge request !362 on GNOME Calendar. This changes the entire design of the quick-add popover by merging both pages into one and updating the style to conform better with modern GNOME designs. Additionally, it remodels the way the popover retrieves and displays calendars, cutting 120 lines of code.

The calendars list in the quick-add popover has undergone accessibility improvements, providing a better experience for assistive technologies and keyboard users. Specifically: tabbing from outside the list will focus the selected calendar in the list; tabbing from inside the list will skip the entire list; arrow keys automatically select the focused calendar; and finally, assistive technologies now inform the user of the checked/selected state.

Admittedly, the quick-add popover is currently unreachable via keyboard because we lack the resources to implement keyboard focus for month and week cells. We are currently trying to address this issue in merge request !564, and hope to get it merged for GNOME 50, but it's a significant undertaking for a single unpaid developer. If it is not too much trouble, I would really appreciate some donations, to keep me motivated to improve accessibility throughout GNOME and sustain myself: https://tesk.page/#donate

This merge request allowed us to close 4 issues, and will be available in GNOME 50.

new-multi-day-event.png

Files

Providing a simple and integrated way of managing your files and browsing your file system.

Peter Eisenmann says

Files landed two big changes by Khalid Abu Shawarib this week.

The first change adds a bunch of tests, bringing the total coverage of the huge code base close to 30%. This will prevent regressions in previously uncovered areas such as bookmarking or creating files.

The second change is more noticeable, as the way thumbnails are loaded was largely rewritten to finally make full use of GTK4's recycling views. It took a lot of code detangling to get thumbnails to load asynchronously, but the result is a great speedup, making thumbnails show faster than ever before. 🚀

Attached is a comparison of reloading a folder before and after the change.

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

as of today, libadwaita has support for the new reduced motion preference, both supporting the @media (prefers-reduced-motion: reduce) query from CSS, and using simple crossfade transitions where appropriate (e.g. in AdwDialog, AdwNavigationView and AdwTabOverview)

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 reports

libadwaita has deprecated the style-dark.css, style-hc.css and style-hc-dark.css resources that AdwApplication automatically loads. They still work, but will be removed in 2.0. Applications are recommended to switch to style.css and media queries for dark and high contrast styles

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

Matthias Clasen reports

This week's GTK 4.21.2 release includes initial support for the CSS backdrop-filter property. The GSK APIs enabling this are new copy/paste and composite render nodes, which allow flexible reuse of the 'background' at any point in the scene graph. We are looking forward to your experiments with this!

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall says

Luca Bacci has dug into an intermittent output buffering issue with GLib on Windows, which should fix some CI issues and opt various GLib utilities into more modern features on Windows - https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4788

Third Party Projects

Alain announces

Planify 4.16.0 - Natural dates, smoother flows, and smarter task handling

This week, Planify released version 4.16.0, bringing several improvements that make task management faster, more intuitive, and more predictable on GNOME.

The highlight of this release is natural language date parsing, now enabled by default in Quick Add. You can type things like "tomorrow 3pm", "next Monday", "25/12/2024", or "ahora", and Planify will automatically convert it into a proper scheduled date. Spanish support has also been added, including expressions like mañana, pasado mañana, próxima semana, and more.
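To give a feel for how this kind of feature works, here is a minimal sketch of natural-language date parsing. This is not Planify's actual implementation (Planify is written in Vala); the function name `parse_when` and the supported phrases are hypothetical, covering just two of the input shapes mentioned above.

```python
# A minimal, hypothetical illustration of natural-language date parsing,
# handling inputs like "tomorrow 3pm" and explicit dates like "25/12/2024".
from datetime import datetime, timedelta
import re

def parse_when(text, now=None):
    """Return a datetime for a recognised phrase, or None."""
    now = now or datetime.now()
    text = text.strip().lower()
    # "tomorrow", optionally followed by an hour like "3pm"
    m = re.fullmatch(r"tomorrow(?:\s+(\d{1,2})(am|pm))?", text)
    if m:
        day = (now + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0)
        if m.group(1):
            hour = int(m.group(1)) % 12 + (12 if m.group(2) == "pm" else 0)
            day = day.replace(hour=hour)
        return day
    # explicit day/month/year dates
    m = re.fullmatch(r"(\d{1,2})/(\d{1,2})/(\d{4})", text)
    if m:
        day, month, year = map(int, m.groups())
        return datetime(year, month, day)
    return None

base = datetime(2025, 11, 21, 9, 0)
parse_when("tomorrow 3pm", base)   # -> 2025-11-22 15:00
parse_when("25/12/2024", base)     # -> 2024-12-25 00:00
```

A real implementation also needs localised phrases (Planify's new Spanish support handles mañana, próxima semana, and so on), which amounts to extending the pattern table per language.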

Keyboard navigation got a boost too:

  • Ctrl + D now opens the date picker instantly
  • Ctrl + K toggles "Keep adding" mode
  • And several shortcuts were cleaned up for more predictable behavior

Planify also adds label management in the task context menu, making it easier to add or remove labels without opening the full editor.

For calendar users, event items now open a richer details popover, with automatic detection of Google Meet and Microsoft Teams links, making online meetings just one click away.

As always, translations, bug fixes, and general UI refinements round out the update.

Planify 4.16.0 is available now on Flathub

Jan-Willem reports

This week I released Java-GI version 0.13.0, a Java language binding for GNOME and other libraries that support GObject-Introspection, based on OpenJDK's new FFM functionality. Some of the highlights in this release are:

  • Bindings for LibRsvg, GstApp (for GStreamer) and LibSecret have been added
  • The website for Java-GI has its own domain name now: java-gi.org, and this is also used in all module- and package names
  • Thanks to GObject-Introspection's extensive testsuite, I've implemented over 900 testcases to test the Java bindings, and fixed many bugs along the way.

I hope that Java-GI will help Java (or Kotlin, Scala, Clojure, …) developers to create awesome new GNOME apps!

Quadrapassel

Fit falling blocks together.

Will Warner says

Quadrapassel 49.2 is out! Here is what's new:

  • Updated translations: Ukrainian, Russian, Brazilian Portuguese, Chinese (China), Slovenian, Georgian
  • Made the 'P' key pause the game
  • Replaced the user help docs with a 'Game Rules' dialog
  • Stopped the menu button taking focus
  • Fixed a bug where the game's score would not be recorded when the app was quit
  • Added total rows and level information to scores

Phosh

A pure wayland shell for mobile devices.

Guido announces

Phosh 0.51.0 is out:

There's a new quick setting that allows toggling location services on/off, and the ☕ quick setting can now disable itself after a certain amount of time (check here for how to configure the intervals). We also added a toggle to enable automatic brightness from the top panel; when enabled, the brightness slider acts as an offset to the current brightness value.

phosh-brightness.png

The minimum brightness of the 🔦 brightness slider can now be configured via hwdb/udev, allowing one to go to lower values than the former hard-coded 40%. The configuration is maintained in gmobile.

If you're using Phosh on a Google Pixel 3A XL you can now enjoy haptic feedback when typing on the on-screen keyboard (like users on other devices), and creating notch configurations for new devices should now be simpler, as our tooling can take screenshots of the resulting UI element layout in Phosh for you.

There's more, see the full details here

phosh-torch-brightness.png

GNOME Websites

Emmanuele Bassi says

After a long time, the new user help website is now available and up to date with the latest content. The new help website replaces the static snapshot of the old library-web project, but it is still a work in progress, and contributions are welcome. Just like in the past, the content is sourced from each application, as well as from the gnome-user-docs repository. If you want to improve the documentation of GNOME components and core applications, make sure to join the #docs:gnome.org room.

Shell Extensions

Pedro Sader Azevedo announces

Foresight is a GNOME Shell extension that automatically enters the activities view on empty workspaces, making it faster to open apps and start using your computer!

This week, it gained support for GNOME 49, courtesy of gabrielpalassi. This is the second time in a row that Foresight gained support for a newer GNOME Shell version thanks to community contributions, which I'm immensely grateful for. I'm also very grateful to Just Perfection, who single-handedly holds so many responsibilities in the GNOME Shell extensions ecosystem.

The latest version of Foresight is available at EGO: https://extensions.gnome.org/extension/7901/foresight/

Happy foretelling 🔮👣

Miscellaneous

revisto reports

The Persian GNOME community was featured at the Debian 13 Release Party at Sharif University in Iran. The talk introduced GNOME, explained how the Persian community came together, highlighted its contributions (GTK/libadwaita apps, GNOME Circle involvement, translations, and fa.gnome.org), and invited newcomers to participate and contribute.

Recording available (Farsi): https://youtu.be/UPmNNygNQuc

debian-13-gnome-persian-poster.png

GNOME Foundation

ramcq reports

The GNOME Foundation board has shared details about our recently-approved balanced budget for 2024-25, as well as a note to share our thanks to Karen Sandler, as she has decided to step down from the board.

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

21 Nov 2025 12:00am GMT

19 Nov 2025

feedPlanet GNOME

Philip Withnall: Parental controls screen time limits backend

Ignacy blogged recently about all the parts of the user interface for screen time limits in parental controls in GNOME. He's been doing great work pulling that all together, while I have been working on the backend side of things. We're aiming for this screen time limits feature to appear in GNOME 50.

High level design

There's a design document which is the canonical reference for the design of the backend, but to summarise it at a high level: there's a stateless daemon, malcontent-timerd, which receives logs of the child user's time usage of the computer from gnome-shell in the child's session. For example, when the child stops using the computer, gnome-shell will send the start and end times of the most recent period of usage. The daemon deduplicates/merges and stores them. The parent has set a screen time policy for the child, which says how much time they're allowed on the computer per day (for example, 4h at most; or only allowed to use the computer between 15:00 and 17:00). The policy is stored against the child user in accounts-service.

malcontent-timerd applies this policy to the child's usage information to calculate an 'estimated end time' for the child's current session, assuming that they continue to use the computer without taking a break. If they stop or take a break, their usage - and hence the estimated end time - is updated.

The child's gnome-shell is notified of changes to the estimated end time and, once it's reached, locks the child's session (with appropriate advance warning).

Meanwhile, the parent can query the child's computer usage via a separate API to malcontent-timerd. This returns the child's total screen time usage per day, which allows the usage chart to be shown to the parent in the parental controls user interface (malcontent-control). The daemon imposes access controls on which users can query for usage information. Because the daemon can be accessed by the child and by the parent, and needs to be write-only for the child and read-only for the parent, it has to be a system daemon.

There's a third API flow which allows the child to request an extension to their screen time for the day, but that's perhaps a topic for a separate post.

IPC diagram of screen time limits support in malcontent. Screen time limit extensions are shown in dashed arrows.

So, at its core, malcontent-timerd is a time range store with some policy and a couple of D-Bus interfaces built on top.
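That core idea can be sketched in a few lines. This is not the malcontent-timerd code (which is a system daemon, not Python); the names `merge_ranges` and `estimated_end` are hypothetical, and a real implementation also has to handle allowed-hours windows (the 15:00-17:00 style policy), not just a daily cap.

```python
# A hypothetical sketch of the time range store: merge overlapping usage
# periods reported by the shell, then derive an estimated session end time
# from a daily screen time limit.
from datetime import datetime, timedelta

def merge_ranges(ranges):
    """Merge overlapping or touching (start, end) usage periods."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def estimated_end(ranges, daily_limit, now):
    """If the child keeps using the computer without a break, when does
    today's allowance run out?"""
    used = sum((end - start for start, end in merge_ranges(ranges)),
               timedelta())
    return now + max(daily_limit - used, timedelta())

today = datetime(2025, 11, 19)
usage = [
    (today.replace(hour=9), today.replace(hour=10)),
    (today.replace(hour=9, minute=30), today.replace(hour=11)),  # overlap
]
end = estimated_end(usage, timedelta(hours=4), now=today.replace(hour=15))
# overlapping reports merge to 2h used, so a 4h limit ends at 17:00
```

The merge step is what makes the daemon robust to duplicate or overlapping reports from gnome-shell, and recomputing `estimated_end` whenever a new range arrives is what lets the shell be notified of a changing lock time.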

Per-app time limits

Currently it only supports time limits for login sessions, but it is built in such a way that adding support for time limits for specific apps would be straightforward to add to malcontent-timerd in future. The main work required for that would be in gnome-shell - recording usage on a per-app basis (for apps which have limits applied), and enforcing those limits by freezing or blocking access to apps once the time runs out. There are some interesting user experience questions to think about there before anyone can implement it - how do you prevent a user from continuing to use an app without risking data loss (for example, by killing it)? How do you unambiguously remind the user they're running out of time for a specific app? Can we reliably find all the windows associated with a certain app? Can we reliably instruct apps to save their state when they run out of time, to reduce the risk of data loss? There are a number of bits of architecture we'd need to get in place before per-app limits could happen.

Wrapping up

As it stands though, the grant funding for parental controls is coming to an end. Ignacy will be continuing to work on the UI for some more weeks, but my time on it is basically up. With the funding, we've managed to implement digital wellbeing (screen time limits and break reminders for adults) including a whole UI for it in gnome-control-center and a fairly complex state machine for tracking your usage in gnome-shell; a refreshed UI for parental controls; parental controls screen time limits as described above; the backend for web filtering (but more on that in a future post); and everything is structured so that the extra features we want in future should bolt on nicely.

While the features may be simple to describe, the implementation spans four projects, two buses, contains three new system daemons, two new system data stores, and three fairly unique new widgets. It's tackled all sorts of interesting user design questions (and continues to do so). It's fully documented, has some unit tests (but not as many as I'd like), and can be integration tested using sysexts. The new widgets are localisable, accessible, and work in dark and light mode. There are even man pages. I'm quite pleased with how it's all come together.

It's been a team effort from a lot of people! Code, design, input and review (in no particular order): Ignacy, Allan, Sam, Florian, Sebastian, Matthijs, Felipe, Rob. Thank you Endless for the grant and the original work on parental controls. Administratively, thank you to everyone at the GNOME Foundation for handling the grant and paperwork; and thank you to the freedesktop.org admins for providing project hosting for malcontent!

19 Nov 2025 11:39pm GMT

18 Nov 2025

feedPlanet GNOME

Lennart Poettering: Mastodon Stories for systemd v258

Already on Sep 17 we released systemd v258 into the wild.

In the weeks leading up to that release I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd258 hash tag. It was my intention to post a link list here on this blog right after completing that series, but I simply forgot! Hence, in case you aren't using Mastodon, but would like to read up, here's a list of all 37 posts:

I intend to do a similar series of serieses of posts for the next systemd release (v259), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

We intend to shorten the release cycle a bit for the future, and in fact managed to tag v259-rc1 already yesterday, just 2 months after v258. Hence, my series for v259 will begin soon, under the #systemd259 hash tag.

In case you are interested, here is the corresponding blog story for systemd v257, and here for v256.

18 Nov 2025 12:00am GMT

17 Nov 2025

feedPlanet GNOME

Christian Hergert: Status Week 46

Ptyxis

VTE

Foundry

Builder

CentOS

GtkSourceView

17 Nov 2025 8:24pm GMT

15 Nov 2025

feedPlanet GNOME

Code of Conduct Committee: Transparency report for May 2025 to October 2025

GNOME's Code of Conduct is our community's shared standard of behavior for participants in GNOME. This is the Code of Conduct Committee's periodic summary report of its activities from May 2025 to October 2025.

The current members of the CoC Committee are:

All the members of the CoC Committee have completed Code of Conduct Incident Response training provided by Otter Tech, and are professionally trained to handle incident reports in GNOME community events.

The committee has an email address that can be used to send reports: conduct@gnome.org as well as a website for report submission: https://conduct.gnome.org/

Reports

Since May 2025, the committee has received reports on a total of 25 possible incidents. Many of these were not actionable; all the incidents listed here were resolved during the reporting period.

Meetings of the CoC committee

The CoC committee has two meetings each month for general updates, and weekly ad-hoc meetings when they receive reports. There are also in-person meetings during GNOME events.

Ways to contact the CoC committee

15 Nov 2025 6:19pm GMT

14 Nov 2025

feedPlanet GNOME

Allan Day: GNOME Foundation Update, 2025-11-14

This post is another in my series of GNOME Foundation updates, each of which provides an insight into what's happened at the GNOME Foundation over the past week. If you are new to these posts I would encourage you to look over some of the previous entries - there's a fair amount going on at the Foundation right now, and my previous posts provide some useful background.

Old business

It has been another busy week at the GNOME Foundation. Here's a quick summary:

Most of these items are a continuation of activities that I've described in more detail in previous posts, and I'm a bit light on new news this week, but I think that's to be expected sometimes!

Post

This is the tenth in my series of GNOME Foundation updates, and this seems like a good point to reflect on how they are going. The weekly posting cadence made sense in the beginning, and wrapping up the week on a Friday afternoon is quite enjoyable, but I am unsure whether a weekly post is too much reading for some.

So, I'd love to hear feedback: do you like the weekly updates, or do you find it hard to keep up? Would you prefer a higher-level monthly update? Do you like hearing about background operational details, or are you more interested in programs, events and announcements? Answers to these questions would be extremely welcome! Please let me know what you think, either in the comments or by reaching out on Matrix.

That's it from me for now. Thanks for reading, and have a great day.

14 Nov 2025 6:09pm GMT

Gedit Technology blog: Mid-November News

Misc news about the gedit text editor, mid-November edition!

Website: new design

Probably the highlight this month is the new design of the gedit website.

If it looks familiar to some of you, that's because it's an adaptation of the previous GtkSourceView website, developed in the old gnomeweb-wml repository. gnomeweb-wml (projects.gnome.org) predates all the wiki pages for Apps and Projects. The wiki has been retired, so another solution had to be found.

For the timeline: projects.gnome.org was available until 2013/2014, when all the content was migrated to the wiki. The wiki was then retired in 2024.

Note that there are still rough edges on the gedit website and, more importantly, some effort is still needed to bring the old CSS stylesheet forward into the new(-ish) responsive web design world.

For the most nostalgic of you:

And for the least nostalgic of you:

What we can say is that the gedit project has stood the test of time!

Enter TeX: improved search and replace

Some context: I would like, some day, to unify the search and replace feature between Enter TeX and gedit, retaining the best of each.

In Enter TeX it's a combined horizontal bar, something that I would like in gedit too to replace the dialog window that occludes part of the text.

In gedit the strengths include: the search-as-you-type possibility, and a history of past searches. Both are missing in Enter TeX. (These are not the only things that need to be retained; the same workflows, keyboard shortcuts etc. are also an integral part of the functionality).

So to work towards that goal, I started in Enter TeX. I have already merged around 50 commits for this change, rewriting some parts in C (from Vala) and improving the UI along the way. The code needs to be in C because it will be moved to libgedit-tepl so that it can be consumed easily by gedit.

Here is how it looks:

Screenshot of the search and replace in Enter TeX

Internal refactoring for GeditWindow and its statusbar

GeditWindow is what we can call a god class. It is too big, both in the number of lines and the number of instance variables.

So this month I've continued to refactor it, to extract a GeditWindowStatus class. There was already a GeditStatusbar class, but its features have now been moved to libgedit-tepl as TeplStatusbar.

GeditWindowStatus takes up the responsibility to create the TeplStatusbar, to fill it with the indicators and other buttons, and to make the connection with GeditWindow and the current tab/document.

So as a result, GeditWindow is a little less omniscient ;-)

As a conclusion

gedit does not materialize out of empty space; it takes time to develop and maintain. To demonstrate your appreciation of this piece of software and help its future development, remember that you can fund the project. Your support is critical and much appreciated.

14 Nov 2025 10:00am GMT

This Week in GNOME: #225 Volume Levels

Update on what happened across the GNOME project in the week from November 07 to November 14.

GNOME Core Apps and Libraries

Settings

Configure various aspects of your GNOME desktop.

Zoey Ahmed 🏳️‍⚧️ 💙💜🩷 reports

The GNOME Settings volume levels page received a change to fix application inputs and outputs being hard to distinguish. The change separates applications' output and input streams into separate lists, and adds a microphone icon to the inputs list.

Thank you to Hari Rana and Matthijs Velsink for helping me with my first MR, and Jeff Fortin for nudging me to pursue this change!

Files

Providing a simple and integrated way of managing your files and browsing your file system.

Tomasz Hołubowicz says

Nautilus now supports Ctrl+Insert and Shift+Insert for copying and pasting files, matching the behavior of other GTK applications, browsers, and file managers like Dolphin and Thunar. These CUA keybindings were previously only functional in Nautilus's location bar, creating an inconsistency. The addition also benefits users with keyboards that have dedicated copy/paste keys, which typically emit these key combinations. These shortcuts are particularly useful for left-handed users and also allow the same bindings to work across applications, file managers, and terminal emulators, where Ctrl+Shift+C/V are typically required. The Ctrl+V paste shortcut is now also visible in the context menu.

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall announces

In https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4900, Philip Chimento has added a G_GNUC_FLAG_ENUM macro to GLib, which can be used in an enum definition to tell the compiler it's for a flag type (i.e. enum values which can be bitwise combined). This allows for better error reporting, particularly when building with -Wswitch (which everyone should be using!).

So now we can have enums which look like this, for example:

typedef enum {
  G_CONVERTER_NO_FLAGS     = 0,         /*< nick=none >*/
  G_CONVERTER_INPUT_AT_END = (1 << 0),  /*< nick=input-at-end >*/
  G_CONVERTER_FLUSH        = (1 << 1)   /*< nick=flush >*/
} G_GNUC_FLAG_ENUM GConverterFlags;

GNOME Circle Apps and Libraries

Gaphor

A simple UML and SysML modeling tool.

Dan Yeaw announces

Gaphor, the simple modeling tool, version 3.2.0 is now out! Some highlights include:

  • Troubleshooting info can now be found in the About dialog
  • Introduction of CSS classes: .item for all items you put on the diagram
  • Improved updates in Model Browser for attribute/parameter types
  • macOS: native window decorations and window menu

Grab the new version on Flathub.

Third Party Projects

Haydn reports

Typesetter, a minimalist desktop application for creating beautiful documents with Typst, is now available on Flathub.

Features include:

  • Adaptive, user-friendly interface: Focus on writing. Great for papers, reports, slides, books, and any structured writing.
  • Powered by Typst: A modern markup-based typesetting language, combining the simplicity of Markdown with the power of LaTeX.
  • Local-first: Your files stay on your machine. No cloud lock-in.
  • Package support: Works offline, but can fetch and update packages online when needed.
  • Automatic preview: See your rendered document update as you write.
  • Click-to-jump: Click on a part of the preview to jump to the corresponding position in the source file.
  • Centered scrolling: Keeps your writing visually anchored as you type.
  • Syntax highlighting: Makes your documents easier to read and edit.
  • Fast and native: Built in Rust and GTK following the GNOME human interface guidelines.

Get Typesetter on Flathub

Vladimir Kosolapov announces

Lenspect 1.0.2 has just been released on Flathub

This version features some quality-of-life improvements:

  • Improved drag-and-drop design
  • Increased file size limit to 650MB
  • Added more result items from VirusTotal
  • Added notifications for background scans
  • Added file opener integration
  • Added key storage using secrets provider

Check out the project on GitHub

GNOME Websites

Sophie (she/her) reports

The API to access information about GNOME projects has moved from apps.gnome.org to static.gnome.org/catalog. Everything based on the old API links has to move to the new links. The format of the API also slightly changed.

Pages like apps.gnome.org, welcome.gnome.org, developer.gnome.org/components/, and others are based on the API data. The separation will help with maintainability of the code.

More information can be found in the catalog's git repository.

Shell Extensions

Dudu Maroja reports

The 2 Wallpapers GNOME extension is a neat tool that changes your wallpaper whenever you open a window. You can choose to set a darker, blurry, desaturated, or completely different image, whatever suits your preference. This extension was designed to help you focus on your active windows while letting your desktop shine when you want it.

The main idea behind this extension is to allow the use of transparent windows without relying on heavy processing or on-the-fly effects like blur, which can consume too much battery or GPU resources.

Grab it here: 2 Wallpapers Extension

dagimg-dot says

I have been working on Veil, a modern successor to the Hide Items extension, which lets you hide all or chosen items on the GNOME panel, with an auto-hide feature and smooth animations. You can check out the demo on GNOME's reddit: https://www.reddit.com/r/gnome/comments/1orr1co/veil_a_cleaner_quieter_gnome_panel_hide_items/

Dmy3k announces

Adaptive Brightness Extension

This week the extension received a big update to preferences UI.

Interactive Brightness Configuration

  • You can now customize how your screen brightness responds to different lighting conditions using an easy-to-use graphical interface
  • Configure brightness levels for 5 different light ranges (from night to bright outdoor)
  • See a visual graph showing your brightness curve

Improved Settings Layout

  • Settings are now organized into 3 clear tabs: Calibration, Preview, and Keyboard
  • Each lighting condition can be expanded to adjust its range and brightness level
  • Live preview shows you exactly how brightness will respond to ambient light

Better Keyboard Backlight Control

  • Choose specific lighting conditions where keyboard backlight turns on (instead of just on/off)

Available at extensions.gnome.org and github.

Miscellaneous

GNOME OS

The GNOME operating system, development and testing platform

Ada Magicat ❤️🧡🤍🩷💜 reports

Tulip Blossom from Project Bluefin has been working on building bootc images of different Linux systems, including GNOME OS. To ensure bootc users have the best experience possible with our system, Jordan Petridis and Valentin David from the GNOME OS team are working on building an OCI image that can be directly used by bootc. It is currently a work in progress, but we expect to land it soon. This collaboration is a great opportunity to expand our community and contributor base, and to share our vision for how to build operating systems.

Note that this does not represent a change in our plans for GNOME OS itself; it will continue using the same systemd tools for deploying and updating the system.

Ada Magicat ❤️🧡🤍🩷💜 reports

In Ignacy's update on his Digital Wellbeing work this week, you might have noticed he shared the progress of his work in a complete system image. That image is based on GNOME OS and built on the same infrastructure as our main images.

This shows the power of GNOME OS as a development platform, especially for features that involve changes in many different parts of our stack. It also allows anyone with a machine, virtual or physical, to test these new features more easily than ever before.

We hope to further improve our tools so that they are useful to more developers and make it easier and more convenient to test changes like this.

GNOME Foundation

Allan Day says

Another weekly Foundation update is available this week, with a summary of everything that's been happening at the GNOME Foundation. It's been a mixed week, with a Board meeting, ongoing finance work, GNOME.Asia preparations, and digital wellbeing planning.

Digital Wellbeing Project

Ignacy Kuchciński (ignapk) announces

As part of the Digital Wellbeing project, sponsored by the GNOME Foundation, there is an initiative to redesign Parental Controls to bring it on par with modern GNOME apps and implement new features such as Screen Time monitoring, Bedtime Schedule and Web Filtering. Recently the child account overview gained screen time usage information, a Screen Time page was added with session limit controls, the wellbeing panel in Settings was integrated with parental controls, and screen limits were introduced in the Shell. There's more to come; see https://blogs.gnome.org/ignapk/2025/11/10/digital-wellbeing-contract-screen-time-limits/ for more information.

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

14 Nov 2025 12:00am GMT

13 Nov 2025


Andy Wingo: the last couple years in v8's garbage collector

Let's talk about memory management! Following up on my article about 5 years of developments in V8's garbage collector, today I'd like to bring that up to date with what went down in V8's GC over the last couple years.

methodololology

I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it's mostly a Google affair.

Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8's GC over this time, and I'm going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let's take a deeper look at each of these in turn.

the sandbox

There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory).

But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate "trusted space", which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates, you might want multiple cages, there are "shared" variants of the other spaces, for use in shared-memory multi-threading, executable code spaces with embedded object references, and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox: sandboxed code is prevented by the hardware from writing memory outside the sandbox.

Leaning into the "attacker can write anything in their address space" threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can't trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

The best and most amusing instance of this phenomenon is related to integers. Google's style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!

oilpan

It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but that for that, you'd need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn't work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that was mostly the same as before, but in which any given page can opt out of evacuation-based collection.

What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone "direct handles" in V8 itself.

The funny thing is that I don't think any of this is shipping yet; or, if it is, it's only in a Finch trial to a minority of users or something. I am looking forward with interest to seeing a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

shared-memory multi-threading

JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don't know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here (probably it doesn't come to 20%), but wiring this up has been a whole thing.

I will mention just one patch here that I found to be funny. So with pointer compression, an object's fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about alignment of double-holding objects that it mostly ignores via unaligned loads.

Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.

side quests

Right, we've covered what to me are the main stories of V8's GC over the past couple years. But let me mention a few funny side quests that I saw.

the heuristics two-step

This one I find to be hilariousad. Tragicomical. Anyway I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space can you reasonably reserve, what to do on hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do this all in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that Youtube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

mutex mayhem

Toon Verwaest noticed that V8 was exhibiting many more context switches on macOS than Safari, and identified V8's use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on macOS. Then he implemented adaptive locking on all platforms. Then... he removed it all and switched to abseil.

Personally, I am delighted to see this patch series, I wouldn't have thought that there was juice to squeeze in V8's use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

ta-ta, third-party heap

It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case makes an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

fin

So what's next? I don't know; it's been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker's full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!

13 Nov 2025 3:21pm GMT

Jiri Eischmann: How We Streamed OpenAlt on Vhsky.cz

The blog post was originally published on my Czech blog.

When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it's a skill we should maintain ourselves.

To be honest, it's bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it's common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt, a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn't quite the stress test needed to prove we could handle broadcasting an entire conference.

For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don't have insight into this part of the process, so I won't focus on it. Michal's job was to get the streams to our server; our job was to get them to the viewers.

OpenAlt's AV background with running streams. Author: Michal Stanke.

Stress Test

We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it's not entirely dedicated to PeerTube. It has to share the server with other OSCloud services (OSCloud is a community hosting of open source web services).

We hadn't been limited by performance until then, but seven 1440p streams were truly at the edge of the server's capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don't change the resolution, you still need to transcode the video to leverage useful distribution features, which I'll cover later. The 480p resolution was intended for mobile devices and slow connections.

Remote Runner

We knew the Vhsky.cz server alone couldn't handle it. Fortunately, PeerTube allows for the use of "remote runners". The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it's not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn't tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we'd better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

Load on the runner server during the stress test. Author: Adam Štrauch.

Smart Video Distribution

Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn't much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

Thanks to this, every viewer (unless they are on a mobile device and data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn't grow at the same rate.

A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server's bandwidth; for a live stream, it was 75%.

If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.

Live Operation

On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.

In practice, the server data download savings were large even with just 5 peers on a single stream and resolution.

Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don't think is a problem. After all, we're not broadcasting a sports event.

Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

Minor Problems

We did, however, run into minor problems and gained experience that one can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a higher cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

Subjectively, even 480p wasn't a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn't even have noticed as a problem if I wasn't focusing on it. I could imagine streaming only in 480p if necessary. But it's clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn't support bulk uploads. However, tools exist for this, and we'd like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn't the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

On Saturday, to save processing power, we also tried transcoding the first finished talk videos on the external runner. For these, PeerTube displayed a bar with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they could be watched but were not indexed.

A small interlude - my talk about PeerTube at this year's OpenAlt. Streamed, of course, via PeerTube:

Thanks and Support

I think that for our very first time doing this, it turned out very well, and I'm glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud's accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google's infrastructure, but it doesn't run for free either.

13 Nov 2025 11:37am GMT


Ignacy Kuchciński: Digital Wellbeing Contract: Screen Time Limits

It's been four months since my last Digital Wellbeing update. In that previous post I talked about the goals of the Digital Wellbeing project. I also described our progress improving and extending the functionality of the GNOME Parental Controls application, as well as redesigning the application to meet the current design guidelines.

Introducing Screen Time Limits

Following our work on the Parental Controls app, the next major work item was to implement the screen time limits functionality, offering parents the ability to check their child's screen time usage, set time limits, and lock the child's account outside of a specified curfew. This feature actually spanned *three* different GNOME projects: Parental Controls, Settings, and the Shell.

Of the three, the Parental Controls and Shell changes have already been merged, while the Settings integration has been through informal review during the bi-weekly Settings meeting and adjusted to the feedback, so it's only a matter of time before it reaches the main branch as well. You can find screenshots of the added functionality below, and the reference designs can be found in the app-mockups and os-mockups tickets.

Child screen usage

When viewing a managed account, a summary of screen time is shown, along with actions to access additional settings for restrictions and filtering.

Child account view with added screen time overview and action for more options

The Screen Time view shows an overview of the child account's screen time, as well as controls for screen limits and downtime that mirror those in the Settings panel.

Screen Time page with detailed screen time records and time limit controls

Settings integration

On the Settings side, a child account will see a banner in the Wellbeing panel that lets them know some settings cannot be changed, with a link to the Parental Controls app.

Wellbeing panel with a banner informing that limits can only be changed in Parental Controls

Screen limits in GNOME Shell

We have implemented the locking mechanism in GNOME Shell. When a Screen Time limit is reached, the session locks, so that the child can't use the computer for the rest of the day.

Following is a screen cast of the Shell functionality:

Preventing children from unlocking the session has not been implemented yet. Fortunately, however, the hardest part was implementing the framework for the rest of the code, so hopefully the easier graphical change will take less time to implement and the next update will come much sooner than this one.

GNOME OS images

You don't have to take my word for it, especially since you may notice I had to cut the recording at one point (I forgot that one can't switch users from the lock screen :P). You can check out all of these features in the very same GNOME OS live image I used in the recording, which you can either run in GNOME Boxes or try on your hardware if you know what you're doing 🙂

Malcontent changes

While all of these user-facing changes look cool, none of them would actually be possible without the malcontent backend, which Philip Withnall has been working on. While the daily schedule had already been implemented, the daily session limit had to be added, as well as a malcontent timer daemon API for the Shell to use. There have been many other improvements: a web filtering daemon has been added, which I'll use in the future to implement the Web Filtering page in the Parental Controls app.

Conclusion

Our work for the GNOME Foundation is funded by Endless and Dalio Philanthropies, so kudos to them! I also want to thank Florian Müllner for his patience during the merge request review, which was very educational for me, and for answering all of my Shell development questions, as well as Matthijs Velsink and Felipe Borges for finding time to review the Settings integration.

Now that this foundation has been laid, we'll focus on finishing the last remaining bits of session limits support in the Shell: tweaking the appearance of the lock screen when the limit is reached, implementing the ignore button for extending screen time, and adding notifications. After that comes Web Filtering support in Parental Controls. Until next update!

10 Nov 2025 5:31pm GMT

Luis Villa: Three LLM-assisted projects

Some notes on my first serious coding projects in something like 20 years, possibly longer. If you're curious what these projects mean, more thoughts over on the OpenML.fyi newsletter.

TLDR

A GitHub contribution graph, showing a lot of activity in the past three weeks after virtually none the rest of the year.

News, Fixed

The "Fix The News" newsletter is a pillar of my mental health these days, bringing me news that the world is not entirely going to hell in a handbasket. And my 9yo has repeatedly noted that our family news diet is "broken" in exactly the way Fix The News is supposed to fix: hugely negative, hugely US-centric. So I asked Claude to create a "newspaper" version of FTN, a two-page PDF of some highlights. It was a hit.

So I've now been working with Claude Code to create and gradually improve a four-days-a-week "News, Fixed" newspaper. This has been super fun for the whole family: my wife has made various suggestions over my shoulder, my son devours it every morning, and it's the first serious coding project I've tackled in ages. It is almost entirely personal (it still has hard-coded Duke Basketball schedules), but it is nevertheless public and FOSS. (It is even my first use of reuse.software, and also of SonarQube Server!)

Example newspaper here.

No matter how far removed you are from practical coding experience, I cannot recommend enough finding a simple, fun project like this that scratches a human itch in your life, and using the project to experiment with the new code tools.

Getting Things Done assistant

While working on News, Fixed, a friend pointed out Steve Yegge's "beads", which reimagines software issue tracking as an LLM-centric activity: JSON-centric, tracked in git, and so on. At around the same time, I was also pointed at Superpowers, essentially canned "skills" like "teach the LLM, temporarily, how to brainstorm".

The two of these together in my mind screamed "do this for your overwhelmed todo list". I've long practiced various bastardized versions of Getting Things Done, but one of the hangups has been that I'm inconsistent about doing the daily/weekly/nth-ly reviews that good GTD really relies on. I might skip a step, or not look through all my huge "someday-maybe" list, or… any of many reasons one can be tired and human when faced with a wall of text. Also, while there are many tools out there to do GTD, in my experience they either make some of the hardest parts (like the reviews) your problem, or they don't quite fit with how I want to do GTD, or both. Hacking on my own prompts to manage the reviews seems to fit these needs to a T.

I currently use Amazing Marvin as my main GTD tool. It is funky and weird and I've stuck with it much longer than any other task tracker I've ever used. So what I've done so far:

This is all read-only right now because of limitations in the Marvin API, and for various reasons I'm not yet ready to embark on building my own entire UI, so this will do for now. This code, therefore, is very specific to me. The prompts, on the other hand…
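As a sketch of the idea, the read-only flow boils down to pulling tasks and folding them into a review prompt for the LLM. To be clear, the endpoint name, base URL, and auth header below are placeholders standing in for the real Amazing Marvin API, and the prompt wording is just one way to phrase a GTD-style review:

```python
import json
import urllib.request

API = "https://example.invalid/api"  # placeholder, not the real Marvin URL

def fetch_tasks(endpoint: str, token: str) -> list[dict]:
    """Read-only pull of tasks; the header name here is an assumption."""
    req = urllib.request.Request(
        f"{API}/{endpoint}", headers={"X-API-Token": token}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def weekly_review_prompt(tasks: list[dict]) -> str:
    """Format a task dump into a prompt guiding an LLM-led GTD weekly review."""
    lines = [f"- {t['title']} (project: {t.get('project', 'inbox')})" for t in tasks]
    return (
        "You are guiding me through a GTD weekly review.\n"
        "Walk me through these open items one at a time, asking whether each\n"
        "is still relevant, what the next action is, and whether it belongs\n"
        "in someday-maybe:\n" + "\n".join(lines)
    )
```

The point of the prompt-building step is that the LLM asks the review questions one item at a time, instead of me facing the whole wall of text at once.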

Note that my emphasis is not on "doing tasks"; it is on helping me stay on priority. Less "chief of staff", more "executive assistant": both incredibly valuable when done well, but different roles. This is different from some of the use examples for Yegge's beads, which really are centered around agents.

Also note: the results have been outstanding. I'm getting into my doing zone more easily, I think largely because I have less anxiety about staring at the Giant Wall of Tasks that defines the life of any high-level IC. And my projects are better organized, and my todos feel more accurate, than they have in a long time, possibly ever.

a note on LLMs and issue/TODO tracking

It is worth noting that while LLMs are probabilistic and lossy, so they can't find the "perfect" next TODO to work on, that's OK. Personal TODO and software issue tracking are inherently subjective, probabilistic activities: there is no objectively perfect "next best thing to work on" or "most important thing to work on". So the fact that an LLM is only probabilistic in identifying the next task is fine; no human can do substantially better. In fact, I'm pretty sure that once an issue list is past a certain size, an LLM is likely to do better, if (and like many things LLM, this is a big if) you can provide it with documented standards explaining how you want to do prioritization. (Literally one of the first things I did at my first job was write standards on how to prioritize bugs, the forerunner of this doc, so I have strong opinions, and experience, here.)

Skills for license "concluded"

While at a recent Linux Foundation event, I was shocked to realize how many very smart people haven't internalized the skills/prompts/context stuff. It's either "you chat with it" or "you train a model". This is not their fault; it is hard to keep up!

Of course this came up most keenly in the context of the age-old problem of "how do I tell what license an open source project is under?" In other words, what is the difference between "I have scanned this" and "I have reached the zen state of SPDX's 'concluded' field"?
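A minimal sketch of that distinction: the "scanned" half is just tallying declared SPDX identifiers, while the "concluded" half is where judgment comes in, and is the part handed to an LLM here. The prompt wording below is mine, not any SPDX tooling, and real scanners handle far more evidence than per-file tags:

```python
import re
from collections import Counter
from pathlib import Path

# Matches the conventional per-file SPDX tag, e.g.
# "// SPDX-License-Identifier: GPL-3.0-or-later"
SPDX_TAG = re.compile(r"SPDX-License-Identifier:\s*(.+)")

def extract_spdx(text: str) -> list[str]:
    """Pull declared SPDX license expressions out of one file's text."""
    return [m.group(1).strip() for m in SPDX_TAG.finditer(text)]

def scan_declared(root: str) -> Counter:
    """The 'scanned' baseline: tally declared SPDX tags across a source tree."""
    tally: Counter = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            tally.update(extract_spdx(path.read_text(errors="ignore")[:4096]))
    return tally

def conclusion_prompt(tally: Counter) -> str:
    """Hand the scan results to an LLM to argue toward a 'concluded' license,
    flagging conflicts explicitly instead of papering over them."""
    summary = "\n".join(f"{lic}: {n} file(s)" for lic, n in tally.most_common())
    return (
        "Given these declared SPDX identifiers found in a project:\n"
        f"{summary}\n"
        "Propose a concluded license for the project as a whole. List every\n"
        "assumption you make, and say 'inconclusive' if the evidence conflicts."
    )
```

The interesting part is the last function: making the model show its assumptions is what separates "concluded" from "guessed".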

So… yes, I've started playing with scripts and prompts for this. It's much less far along than the other two projects above, but I think it could be very fruitful if structured correctly. There are some potentially big benefits above and beyond the traditional approaches of scanning and/or throwing a lawyer at it:

ClearlyDefined offers test data on this, by the way; I'm really looking forward to seeing whether this can be made actually reliable or not. (And then we can hook up reuse.software on the backend to actually improve the upstream metadata…)

But even then, I may not ever release this. There's a lot of real risks here and I still haven't thought them through enough to be comfortable with them. That's true even though I think the industry has persistently overstated its ability to reach useful conclusions about licensing, since it so persistently insists on doing licensing analysis without ever talking to maintainers.

More to come?

I'm sure there will be more of these. That said, one of the interesting temptations of this work is that it is very hard to say something is "done", because it is so easy to add more. (E.g., once my personal homebrew News, Fixed is done… why not turn it into a webapp? Once my GTD scripts are done… why not port the backend? Etc.) So we'll see how that goes.

10 Nov 2025 7:24am GMT