23 Nov 2025

Planet Grep

Lionel Dricot: Our Virtual Bar Counters

Our Virtual Bar Counters

The façade of a large Parisian café. Zoom in on the slightly decrepit sign, a white thumbs-up on a background of peeling paint: "Le Facebook".

Packed interior. Average age: 55-60. The walls are covered in advertising. The customers are clearly all regulars, alternating between glasses of red wine and beers.

- Ever since you can't smoke anymore, it's just not the same.
- It's all the fault of them sissgender commies!
- The what?
- Sissgenders. It's a word they say to legalize pedophilia.
- I thought it was called transse?
- Same thing. Well, I think. Some queer thing.
- Anyway, you can't even roll a cigarette in peace anymore!

A voice rings out from a neighboring table:
- My grandson came first in his high school's poetry contest.

The whole room shouts "Bravo!" and applauds for three seconds before resuming their conversations as if nothing had happened.

Fade

A cafeteria with white walls covered in motivational posters whose images are very obviously AI-generated. The customers all wear suits and ties or slightly cheap business outfits that pass muster from a distance. They all drink coffee from plastic cups, stirring it lightly with wooden stirrers. A small pot holds used stirrers under a sign: "To save the planet, recycle your stirrers!"

Close-up on Armand: clean-shaven face, glasses, prominent cheekbones. He looks stressed but tries to project authority with his nervous smile.

- Since I started frequenting "Le Linkedin", my client conversion rate has gone up by 3% and I've been officially appointed Marketing Story Customers Deputy Manager. It's a fine achievement that I owe to my network.

The camera pulls back. We see that, like all the other customers, he is alone at his table, talking to a robot that nods along mechanically.

Fade

The trendy spot, with flashing colored lights and music so loud you can't order except by shouting. Ultra-designer neon spells out the bar's name: "Instagram".

The cocktails cost a month's salary and are made with canned fruit juice. Every now and then a customer has an epileptic seizure, but everyone finds that normal. And the walls are covered with giant posters of gorgeous landscapes.

His three-day stubble carefully groomed, Youri-Maxime points out a poster to his girlfriend.
- This photo is magnificent, we absolutely have to go there!

Estelle is not yet 30, but her face is puffy from cosmetic surgery. Without looking at her boyfriend, she replies:
- Excellent idea, we'll take a photo there. I'll wear my yellow MachinBazar(TM) bikini and do my BrolTruc(TM) makeup.
- So awesome, the guy replies without taking his eyes off his smartphone. I can't wait to share the photo!

Fade

A former warehouse converted into a luxury loft. Bare brick, exposed pipework. But it's intentional. Still, you can tell the place is no longer really maintained. There's dust. Trash piles up in a corner. The toilets are backing up. The whole bar reeks of shit.

On the wall, a large blue sign is crossed out with a big black X. Below it, half-erased, you can read: "Twitter".

In threadbare sweaters and corduroy trousers, a group of regulars sits at a table. Each of them types frantically on the folding keyboard of their tablet.

A gang of thugs approaches. They have tattoos of swastikas, eagles, vaguely Nordic symbols. They call out to the regulars.

- Hey, guys! What are you up to?
- We're journalists, we write articles. We've been coming here to work for 15 years.
- And what do you write about?
- About democracy, trans rights…

A nazi violently slams a baseball bat down on the table, shattering the journalists' glasses.

- Er, another journalist immediately follows up, we mostly write about the Great Replacement and the dangers of wokeness.

The nazi sniffs.

- You guys are cool, carry on!

Fade

Exactly the same warehouse, except this time everything is clean. The sign, brand new, reads "Bluesky". When you get close to the walls, you realize they are actually made of cardboard. It's a movie set!

There are no regulars; the bar has just opened.

- Welcome, the owner calls out to the crowd streaming in. I know you don't want to stay next door, because it's become filthy and full of nazis. Here, there's no risk. Everything is the same, but decentralized.

The crowd sighs with satisfaction. One customer frowns.

- How is this decentralized? It's the same as…

He doesn't get to finish his sentence. The owner snaps his fingers, and two bouncers appear out of nowhere and shove him outside.

- It's decentralized, the owner continues, and I'm the one taking the orders.
- Great, a customer murmurs. We'll get to feel like we're doing something new without changing anything.
- Besides, we can trust him, replies another. He owned the old bar. He sold it to a nazi and used part of the money to open this one.
- Well then, that's clearly a guarantee of trustworthiness!

Fade

An old barn with straw on the floor. There are chickens; you can hear a sheep bleating.

A guy in a threadbare plaid shirt presses on an old thermos to draw out some sludge, which he hands to his customers.

- It's organic and fair trade, he says. From Honduras. Or Nicaragua? I'll have to check…
- Thank you, replies a tall woman whose hair is purple on one side of her head and shaved on the other.

She has a huge nose piercing, a skirt of sheer netting, torn fishnet stockings, and knee-high socks in the colors of the trans flag. She goes to sit at an old trestle table where a bearded guy in a "FOSDEM 2004" t-shirt is typing feverishly on the keyboard of an Atari computer that takes up half the table. Cables stick out everywhere.

An old lady with sparkling eyes arrives. She leans on a cane with one hand and pulls a wheeled shopping trolley with the other.

- Hello, everyone! How are you all today?

Everyone answers different things at the same time; a chicken panics and flies up onto the table, clucking. The old lady opens her trolley, spilling out a stack of Pléiade volumes, a Guillaume Musso, and a bunch of leeks.

- Look what I made for us! A sign to put out by the gate.

She unfolds a piece of crochet work several meters long. Spelled out in yarn letters in more than questionable colors, you can vaguely make out "Mastodon". If you tilt your head and squint.

- Bravo! It's magnificent! a customer exclaims.
- You should have written "Fediverse", says another.
- Doesn't it make the place a bit too commercial? We wouldn't want to become like the neon bar across the street.
- Yeah, sure, that's the risk. We'd need the customers from across the street to come here, but without it being commercial.
- It's organic wool, the old lady continues.

In the stable, a cow moos.

I'm Ploum and I've just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookstore)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

23 Nov 2025 2:40am GMT

Frederic Descamps: Deploying on OCI with the starter kit – part 3 (applications)

We saw in part 1 how to deploy our starter kit in OCI, and in part 2 how to connect to the compute instance. We will now check which development languages are available on the compute instance acting as the application server. After that, we will see how easy it is to install a new […]

23 Nov 2025 2:40am GMT

Dries Buytaert: DrupalCon Nara keynote Q&A

DrupalCon Nara just wrapped up, and it left me feeling energized.

During the opening ceremony, Nara City Mayor Gen Nakagawa shared his ambition to make Nara the most Drupal-friendly city in the world. I've attended many conferences over the years, but I've never seen a mayor talk about open source as part of his city's long-term strategy. It was surprising, encouraging, and even a bit surreal.

Because Nara came only five weeks after DrupalCon Vienna, I didn't prepare a traditional keynote. Instead, Pam Barone, CTO of Technocrat and a member of the Drupal CMS leadership team, led a Q&A.

I like the Q&A format because it makes space for more natural questions and more candid answers than a prepared keynote allows.

We covered a lot: the momentum behind Drupal CMS, the upcoming Drupal Canvas launch, our work on a site template marketplace, how AI is reshaping digital agencies, why governments are leaning into open source for digital sovereignty, and more.

If you want more background, my DrupalCon Vienna keynote offers helpful context and includes a video recording with product demos.

The event also featured excellent sessions with deep dives into these topics. All session recordings are available on the DrupalCon Nara YouTube playlist.

Having much of the Drupal CMS leadership team together in Japan also turned the week into a working session. We met daily to align on our priorities for the next six months.

On top of that, I spent most of my time in back-to-back meetings with Drupal agencies and end-users. Hearing about their ambitions and where they need help gave me a clearer sense of where Drupal should go next.

Thank you to the organizers and to everyone who took the time to meet. The commitment and care of the community in Japan really stood out.

23 Nov 2025 2:40am GMT

LXer Linux News

KDE Plasma: Set Transparency for Specific Apps | Easy Guide

A brief guide on configuring transparency for selected applications in KDE Plasma using Window Rules, without affecting other windows.

23 Nov 2025 2:11am GMT

Self-Hosters Confirm It Again: Linux Dominates the Homelab OS Space

According to the 2025 Self-Host survey from selfh.st, Linux dominates self-hosting setups and homelab operating systems.

23 Nov 2025 12:06am GMT

22 Nov 2025

LXer Linux News

Linux 6.18 To Enable Both Touchscreens On The AYANEO Flip DS Dual-Screen Handheld

Sent out today were a set of input subsystem fixes for the near-final Linux 6.18 kernel. A bit of a notable addition via this "fixes" pull is getting both touchscreens working on the AYANEO Flip DS, a dual-screen gaming handheld device that can be loaded up with Linux...

22 Nov 2025 10:34pm GMT

Fedora People

Kevin Fenzi: infra weekly recap: Late November 2025

Scrye into the crystal ball

Another busy week in fedora infrastructure. Here's my attempt at a recap of the more interesting items.

Inscrutable vHMC

We have a vHMC VM. This is a virtual Hardware Management Console for our power10 servers. You need one of these to do anything reasonably complex on the servers. I had initially set it up on one of our virthosts just as a qemu raw image, since that's the way the appliance is shipped. But that was bringing the root filesystem on that server close to full, so I moved it to a logical volume like all our other VMs. However, after I did that, it started getting high packet loss talking to the servers. Nothing at all should have changed network-wise, and indeed, it was the only thing seeing this problem. The virthost and all the other VMs on it were fine. I rebooted it a bunch and tried changing things, with no luck.

Then, we had our mass update/reboot outage on Thursday. After rebooting that virthost, everything was back to normal with the vHMC. Very strange. I hate problems that just go away without you ever knowing what actually caused them, but at least for now the vHMC is back to normal.

Mass update/reboot cycle

We did a mass update/reboot cycle this last week. We wanted to:

  • Update all the RHEL9 instances to 9.7 which just came out

  • Update all the RHEL10 instances to 10.1 which just came out.

  • Update all the fedora builders from f42 to f43

  • Update all our proxies from f42 to f43

  • Update a few other fedora instances from f42 to f43

This overall went pretty smoothly and everything should be updated and working now. Please do file an issue if you see anything amiss (as always).

AI Scrapers / DDoSers

The new Anubis is, I think, working quite well at keeping the AI scrapers at bay now. It is causing some problems for some clients, however: it's more likely to decide that a client sending no User-Agent or Accept header is a bot. So, if you are running some client that hits our infra and are seeing Anubis challenges, adjust your client to send a User-Agent and an Accept header and see if that gets you working again.
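As a minimal sketch of the adjustment described above (the URL and agent string are illustrative placeholders, not real Fedora endpoints), a client would explicitly set both headers on each request:

```python
import urllib.request

# Build a request that sends explicit User-Agent and Accept headers,
# so a challenge system like Anubis is less likely to flag the client
# as a bot. URL and agent string below are hypothetical placeholders.
req = urllib.request.Request(
    "https://example.org/some/resource",
    headers={
        "User-Agent": "my-sync-script/1.0 (admin@example.org)",
        "Accept": "application/json",
    },
)
# The request now carries both headers; pass it to urlopen() as usual.
```

Most full-featured HTTP libraries send some default agent string, but minimal or hand-rolled clients often send none at all; setting one explicitly, as above, is usually enough.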

The last thing we are seeing that's still annoying is something I thought was AI scraping, but now I am not sure of its motivation. Here's what I am seeing:

  • LOTS of requests from a large number of IPs

  • fetching the same files

  • all under forks/$someuser/$popularpackage/ (so forks/kevin/kernel or the like)

  • passing anubis challenges

My guess is that these may be some browser add-on or botnet where they don't care about the challenge, but why fetch the same commit 400 times? Why hit the same forked project with millions of hits over 8 or so hours?

If this is a scraper, it's a very unfit one, gathering the same content over and over and never moving on. Perhaps it's just broken and looping?

In any case, the current fix seems to be simply blocking requests to those forks, but of course that means the user whose forks they are cannot access them. ;( I will try to come up with a better solution.

RDU2-CC to RDU3 move

This datacenter move is still planned to happen. :) I was waiting for a new machine to migrate things to, but it's stuck in process, so for now I just repurposed an older server that we still had around. I've set up a new stg.pagure.io on it and copied all the staging data to it; it seems to be working as expected, but I haven't moved it in DNS yet.

I then set up a new pagure.io there and am copying data to it now.

The current plan, if all goes well, is to have an outage and move pagure.io over on December 3rd.

Then, on December 8th, the rest of our RDU2-CC hardware will be powered off and moved. The rest of the items we have there shouldn't be very impactful to users and contributors. download-cc-rdu01 will be down, but we have a bunch of other download servers. Some proxies will be down, but we have a bunch of other proxy servers. After things come back up on the 8th or 9th, we will bring everything back online.

US Thanksgiving

Next week is the US Thanksgiving holiday (on thursday). We get thursday and friday as holidays at Red Hat, and I am taking the rest of the week off too. So, I might be around some in community spaces, but will not be attending any meetings or doing things I don't want to.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115595437083693195

22 Nov 2025 8:48pm GMT

Planet Debian

Dirk Eddelbuettel: RcppArmadillo 15.2.2-1 on CRAN: Upstream Update, OpenMP Updates

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1286 other packages on CRAN, downloaded 42.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 659 times according to Google Scholar.

This version updates to the 15.2.2 upstream Armadillo release made two days ago. It brings a few changes over the RcppArmadillo 15.2.0 release made only to GitHub (and described in this post), and of course even more changes relative to the last CRAN release described in this earlier post. As described previously, and due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 'legacy' Armadillo yet offering the current version as the default. During the transition we did not make any releases to CRAN, allowing both the upload cadence to settle back to the desired 'about six in six months' that the CRAN Policy asks for, and for packages to adjust to any potential changes. Most affected packages have done so (as can be seen in the GitHub issues #489 and #491), which is good to see. We appreciate all the work done by the respective package maintainers. A number of packages are still under a (now formally expired) deadline at CRAN and may get removed. Our offer to help where we can still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the transition.

With respect to changes in the package, we once more overhauled the OpenMP detection and setup, following the approach taken by the data.table package but sticking with an autoconf-based configure. The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.2.2-1 (2025-11-21)

  • Upgraded to Armadillo release 15.2.2 (Medium Roast Deluxe)

    • Improved reproducibility of random number generation when using OpenMP
  • Skip a unit test file under macOS as complex algebra seems to fail under newer macOS LAPACK setting

  • Further OpenMP detection rework for macOS (Dirk in #497, #499)

  • Define ARMA_CRIPPLED_LAPACK on Windows only if 'LEGACY' Armadillo selected

Changes in RcppArmadillo version 15.2.1-0 (2025-10-28) (GitHub Only)

  • Upgraded to Armadillo release 15.2.1 (Medium Roast Deluxe)

    • Faster handling of submatrices with one row
  • Improve OpenMP detection (Dirk in #495 fixing #493)

Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)

  • Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)

    • Added rande() for generating matrices with elements from exponential distributions

    • shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave

    • Reworked detection of aliasing, leading to more efficient compiled code

  • OpenMP detection in configure has been simplified

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

22 Nov 2025 3:44pm GMT

Linuxiac

KDE Plasma 6.6 Will Introduce Per-Window Screen-Recording Exclusions

KDE Plasma 6.6 desktop environment will introduce per-window screen-recording exclusions, richer blur effects for dark themes, and more.

22 Nov 2025 3:28pm GMT

Bottles 60.0 Launches with Native Wayland Support

Bottles 60.0, a Wine prefix manager for running Windows apps on Linux, adds native Wayland support, a refreshed UI, and more.

22 Nov 2025 1:20pm GMT

Self-Hosters Confirm It Again: Linux Dominates the Homelab OS Space

According to the 2025 Self-Host survey from selfh.st, Linux dominates self-hosting setups and homelab operating systems.

22 Nov 2025 12:04pm GMT

Planet KDE | English

This Week in Plasma: UI and performance improvements

Welcome to a new issue of This Week in Plasma!

This week there were many user interface and performance improvements - some quite consequential. So let's get right into it!

Notable New Features

Plasma 6.6.0

Windows can now be selectively excluded from screen recording! This can be invoked from the titlebar context menu, Task Manager context menu, and window rules. (Stanislav Aleksandrov, link)

Notable UI Improvements

Plasma 6.6.0

With a dark color scheme, the blur effect now produces a blur that's darker (ideally back to the level seen in Plasma 6.4) and also more vibrant in cases where there are bright colors behind it. People seemed to like this! But for those who don't, the saturation value of the blur effect is now user-configurable, so you can dial it in to your preferred level. (Vlad Zahorodnii, link 1, link 2, and link 3)

Blur saturation settings

When clicking on grouped Task Manager icons to cycle through their windows, full-screen windows will no longer always be raised first. Now, windows will be raised in the order of their last use. (Grégori Mignerot, link)

Did a round of UI polishing on the portal remote control dialog to make it look better and read more naturally. (Nate Graham and Joshua Goins, link 1 link 2, link 3 and link 4)

Portal remote control request dialog
Portal remote control tray icon

When you open the Kickoff Application Launcher and your pointer happens to end up right on top of one of the items in the Favorites view, it won't be selected automatically. (Christoph Wolk, link)

The Kickoff Application Launcher widget now tries very hard to keep the first item of the search results view selected - at least until the point where you focus the list and start navigating to another item. (Christoph Wolk, link)

Discover now uses more user-friendly language when it's being used to find apps that can open a certain file type. (Taras Oleksy, link)

You're now far less likely to accidentally raise an unintended app when a notification happens to appear right underneath something you're dragging-and-dropping. (Kai Uwe Broulik, link)

KMenuEdit now lets you select multiple items at a time for faster deletion. (Alexander Wilms, link)

The QR code dialog invokable from the clipboard has been removed, and instead the QR code is shown inline in the widget. This makes it large enough to actually use and also reduces unnecessary code. (Fushan Wen, link)

Notable Bug Fixes

Plasma 6.5.3

Fixed a rare case where KWin could crash when the system wakes from sleep. (Xaver Hugl, link)

Worked around a QML compiler bug in Qt that made the power and session buttons in the Application Launcher widget overlap with the tab bar if you resized its popup. (Christoph Wolk, link)

Plasma 6.5.4

Fixed a regression in menu sizing that got accidentally backported to Plasma 6.5.3. All should be well in 6.5.4, and some distros have backported the fix already. (Akseli Lahtinen and Nate Graham, link)

Fixed a Plasma 6 regression that broke the ability to activate the System Tray's expanded items popup with a keyboard shortcut. (Cursor AI, operated by Mikhail Sidorenko, link)

Fixed a regression caused by a Qt change that broke the clipboard's Actions menu from being able to appear when the configuration dialog wasn't open. (Fushan Wen, link)

Fixed a bug that could make the Plasma panel's custom size chooser appear on the wrong screen. (Vlad Zahorodnii, link)

Fixed a bug that could make the clipboard contents get sent many times when it's being set programmatically in a portal-using app. (David Redondo, link)

Fixed a memory leak in Plasma's desktop. (Vlad Zahorodnii, link)

Fixed a memory leak in the clipboard Actions menu. (Fushan Wen, link)

KWin's zoom effect now saves its current zoom level a little bit after you change it, rather than at logout. This prevents a situation where the system is inappropriately zoomed in (or not zoomed in) after a KWin crash or power loss. (Ritchie Frodomar, link)

Fixed a bug that made the optional Textual List representation of multiple windows in the Task Manager widget fail to get focus when using medium focus stealing prevention. (David Redondo, link)

Plasma 6.6.0

Worked around a bug in some XWayland-using games that made it impossible to type text into certain popups. (Xaver Hugl, link)

Clearing KRunner's search history now takes effect immediately, rather than only after KRunner was restarted. (Nate Graham, link)

With a very narrow display and a high scale factor, the buttons on the login, lock, and logout screens can no longer get cut off; now they wrap onto the next line. (Nate Graham, link)

Frameworks 6.21

Fixed a bug that could confuse KWallet - when being used as a Secret Service proxy for KeePassXC - into becoming convinced that it needed to create a new wallet. (Marco Martin, link)

Fixed two memory leaks affecting QML-based System Settings pages. (Vlad Zahorodnii, link 1 and link 2)

Other bug information of note:

Notable in Performance & Technical

Plasma 6.5.3

Apps that use the Keyboard Shortcuts Portal to set shortcuts can now remove them in the same way. (David Redondo, link)

You can now use Spectacle's Active Window mode to take a screenshot of WINE windows. (Xaver Hugl, link)

Plasma 6.6.0

Made a major improvement to the smoothness of animations throughout Plasma and KWin for people using screens with a refresh rate higher than 60 Hz! (David Edmundson, link)

Reduced the amount of unnecessary work KWin does during its compositing pipeline. (Xaver Hugl, link)

When you delete a whole category's worth of shortcuts on System Settings' Shortcuts page, all the shortcuts get grayed out and cease to be interactive, and a warning message tells you they'll soon be deleted, giving you a chance to undo that before it happens. (Nate Graham, link)

Frameworks 6.21

KConfig now parses config files in a stream rather than opening them all at once, which allows it to notice early when a file is corrupted or improperly formatted. This prevents freezes in several places. (Méven Car, link 1, link 2, and link 3)

When using the Systemd integration functionality (which is on by default if Systemd is present), programs will no longer fail to launch when any environment variable name begins with a digit, as such names are something Systemd doesn't support. (Christoph Cullmann, link)
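The restriction above can be sketched as a simple name check (an illustration mirroring the stated rule that names must not begin with a digit, not Systemd's actual validation code; the function name is hypothetical):

```python
import re

# Environment variable names accepted here start with a letter or
# underscore, followed by letters, digits, or underscores. Names
# beginning with a digit are rejected, matching the restriction
# described above. This is an approximation for illustration only.
_VALID_ENV_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def systemd_accepts(name: str) -> bool:
    """Return True if the variable name passes the digit-first check."""
    return bool(_VALID_ENV_NAME.match(name))
```

A launcher that filters its environment with such a check before handing it off avoids the launch failure described above.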

How You Can Help

Donate to KDE's 2025 fundraiser! It really makes a big difference. Believe it or not, we've already hit our €75k stretch goal and are €5k towards the final one. I'm just in awe of the generosity of the KDE community and userbase. Thank you all for helping KDE to grow and prosper!

If money is tight, you can help KDE by directly getting involved. Donating time is actually more impactful than donating money. Each contributor makes a huge difference in KDE - you are not a number or a cog in a machine! You don't have to be a programmer, either; many other opportunities exist.

To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.

22 Nov 2025 12:01am GMT

21 Nov 2025

OMG! Ubuntu

The Raspberry Pi 500+ Works as a Standalone Keyboard (Well, Kinda)

Can the Raspberry Pi 500+ work as a standalone Bluetooth keyboard? Yes, using the open-source btferret project - but not without limitations, as I report.

21 Nov 2025 11:12pm GMT

Fedora People

Fedora Badges: New badge: Let's have a party (Fedora 43) !

21 Nov 2025 3:38pm GMT

Kernel Planet

Brendan Gregg: Intel is listening, don't waste your shot

Intel's new CEO, Lip-Bu Tan, has made listening to customers a top priority, saying at Intel Vision earlier this year: "Please be brutally honest with us. This is what I expect of you this week, and I believe harsh feedback is most valuable."

I'd been in regular meetings with Intel for several years before I joined, and I had been giving them technical direction on various projects, including at times some brutal feedback. When I finally interviewed for a role at Intel I was told something unexpected: that I had already accomplished so much within Intel that I qualified to be an Intel Fellow candidate. I then had to pass several extra interviews to actually become a fellow (and was told I may only be the third person in Intel's history to be hired as a Fellow) but what stuck with me was that I had already accomplished so much at a company I'd never worked for.

If you are in regular meetings with a hardware vendor as a customer (or potential customer) you can accomplish a lot by providing firm and tough feedback, particularly with Intel today. This is easier said than done, however.

Now that I've seen it from the other side I realize I could have accomplished more, and you can too. I regret the meetings where I wasn't really able to have my feedback land as the staff weren't really getting it, so I eventually gave up. After the meeting I'd crack jokes with my colleagues about how the product would likely fail. (Come on, at least I tried to tell them!)

Here's what I wish I had done in any hardware vendor meeting:

I'm now in meetings from the other side, where we'd really appreciate brutal feedback, but some customers aren't comfortable giving it, even when prompted. It isn't easy to tell someone their project is doomed, or that their reasons for not doing something are BS. It isn't easy dealing with peer pressure and a room of warm and friendly staff begging you to say something, anything, nice about their terrible product for fear of losing their jobs, and realizing you must be brutal to their faces, otherwise you're not helping the vendor or your own company. And it's extra effort to check meeting minutes and to push for meetings with the ELT or the CEO. Giving brutal feedback takes brutal effort.

21 Nov 2025 1:00pm GMT

Planet KDE | English

Web Review, Week 2025-47

Let's go for my web review for the week 2025-47.


In 1982, a physics joke gone wrong sparked the invention of the emoticon - Ars Technica

Tags: tech, history, culture

If you're wondering where emoticons and emojis are coming from, this is a nice little piece about that.

https://arstechnica.com/gadgets/2025/11/in-1982-a-physics-joke-gone-wrong-sparked-the-invention-of-the-emoticon/


Screw it, I'm installing Linux

Tags: tech, linux, foss, gaming

Clearly something is brewing right now. We're seeing more and more people successfully switching.

https://www.theverge.com/tech/823337/switching-linux-gaming-desktop-cachyos


Lawmakers Want to Ban VPNs-And They Have No Idea What They're Doing

Tags: tech, vpn, privacy, law

This is totally misguided… Let's hope no one succeeds in passing such dangerously stupid bills.

https://www.eff.org/deeplinks/2025/11/lawmakers-want-ban-vpns-and-they-have-no-idea-what-theyre-doing


Learning with AI falls short compared to old-fashioned web search

Tags: tech, ai, machine-learning, gpt, learning, teaching

If there's one area where people should steer clear of LLMs, it's definitely when they want to learn a topic. That's one more study showing that the knowledge you retain from LLM briefs is shallower. The friction and the struggle to get to the information is a feature; our brain needs it to remember properly.

https://theconversation.com/learning-with-ai-falls-short-compared-to-old-fashioned-web-search-269760


The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models

Tags: tech, ai, machine-learning, gpt, psychology, safety

The findings in this paper are chilling… especially considering what fragile people are doing with those chat bots.

https://arxiv.org/abs/2509.10970v1


Feeds, Feelings, and Focus: A Systematic Review and Meta-Analysis Examining the Cognitive and Mental Health Correlates of Short-Form Video Use

Tags: tech, social-media, cognition, psychology

Unsurprisingly the news ain't good on the front of social media and short form videos. Better stay clear of those.

https://psycnet.apa.org/fulltext/2026-89350-001.html


Do Not Put Your Site Behind Cloudflare if You Don't Need To

Tags: tech, cloud, decentralized, web

Friendly reminder following the Cloudflare downtime earlier this week.

https://huijzer.xyz/posts/123/do-not-put-your-site-behind-cloudflare-if-you-dont


Cloudflare outage on November 18, 2025

Tags: tech, cloud, complexity, safety, rust

Wondering what happened at Cloudflare? Here is their postmortem; it's an interesting read. Now for Rust developers… this is a good illustration of why you should steer clear of unwrap() in production code.

https://blog.cloudflare.com/18-november-2025-outage/


Needy Programs

Tags: tech, ux, notifications

It kind of ignores the security impact of needed upgrades, but apart from that I largely agree. Most applications try to push more features in your face nowadays, unneeded notifications and all… it's frankly exhausting for users.

https://tonsky.me/blog/needy-programs/


I think nobody wants AI in Firefox, Mozilla

Tags: tech, browser, ai, machine-learning, gpt, mozilla

Looks like Mozilla is doing everything it can to alienate the current Firefox user base and to push forward its forks.

https://manualdousuario.net/en/mozilla-firefox-window-ai/


DeepMind's latest: An AI for handling mathematical proofs

Tags: tech, ai, machine-learning, mathematics, google

That's an interesting approach. It's early days and clearly requires further work, but it seems like the proper path for math-related problems.

https://arstechnica.com/ai/2025/11/deepminds-latest-an-ai-for-handling-mathematical-proofs/


Production-Grade Container Deployment with Podman Quadlets

Tags: tech, systemd, containers, linux, system, podman

Podman is really a nice option for deploying containers nowadays.

https://blog.hofstede.it/production-grade-container-deployment-with-podman-quadlets/
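Quadlets let systemd generate container units from small declarative files. As a rough sketch (the file path, image, and port are placeholders of mine, not taken from the article), a minimal quadlet for a user service might look like:

```ini
# ~/.config/containers/systemd/web.container -- hypothetical example
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Assuming a Podman recent enough to ship quadlet support, after a `systemctl --user daemon-reload` the generated `web.service` can be started and enabled like any other unit.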


Match it again Sam

Tags: tech, regex, rust

Nice alternative syntax to the good old regular expressions. Gives nice structure to it all. There's a Rust crate to try it out.

https://www.sminez.dev/match-it-again-sam/


10 Smart Performance Hacks For Faster Python Code

Tags: tech, python, performance

Some of this might sound obvious, I guess. Still, there are interesting lesser-known nuggets proposed here.

https://blog.jetbrains.com/pycharm/2025/11/10-smart-performance-hacks-for-faster-python-code/
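One example of the genre (my own illustration, not necessarily one of the article's ten): membership tests against a set are O(1) on average, versus O(n) against a list, which matters a lot in hot loops.

```python
import timeit

# Looking up the worst-case element: last in the list, hashed in the set.
haystack_list = list(range(10_000))
haystack_set = set(haystack_list)

slow = timeit.timeit(lambda: 9_999 in haystack_list, number=1_000)
fast = timeit.timeit(lambda: 9_999 in haystack_set, number=1_000)

print(f"list lookup: {slow:.4f}s, set lookup: {fast:.4f}s")
```

The absolute numbers depend on the machine, but the set lookup should win by orders of magnitude.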


Floodfill algorithm in Python

Tags: tech, python, algorithm, graphics

This is a nice little algorithm, and the post shows how to approach it in Python while keeping it efficient in terms of operations.

https://mathspp.com/blog/floodfill-algorithm-in-python
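The general shape of the technique (a generic sketch of flood fill, not the article's exact code) is a breadth-first traversal from a seed cell, repainting every connected cell that shares the seed's value:

```python
from collections import deque

def flood_fill(grid, row, col, new_value):
    """Iteratively repaint the region connected to (row, col)."""
    old_value = grid[row][col]
    if old_value == new_value:  # nothing to do, and avoids an infinite loop
        return grid
    queue = deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        # Only repaint in-bounds cells that still hold the original value.
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == old_value:
            grid[r][c] = new_value
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

grid = [[0, 0, 1],
        [0, 1, 1],
        [1, 1, 1]]
flood_fill(grid, 0, 0, 2)
print(grid)  # [[2, 2, 1], [2, 1, 1], [1, 1, 1]]
```

Using an explicit queue instead of recursion avoids hitting Python's recursion limit on large regions.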


AMD vs. Intel: a Unicode benchmark

Tags: tech, amd, intel, hardware, simd, performance

Clearly AMD is now well above Intel in AVX-512 performance. This is somewhat unexpected.

https://lemire.me/blog/2025/11/16/amd-vs-intel-a-unicode-benchmark/


Memory is slow, Disk is fast

Tags: tech, memory, storage, performance, system

No, don't go assuming you can use disks instead of RAM; that's not what this is about. It shows ways to get more out of your disks though. It's not something you always need, but sometimes it can be a worthwhile endeavor.

https://www.bitflux.ai/blog/memory-is-slow-part2/


Compiler Options Hardening Guide for C and C++

Tags: tech, c++, security

Good list of hardening options indeed. That's a lot to deal with, of course; let's hope this spreads and some defaults are changed to make it easier.

https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html


The problem with inferring from a function call operator is that there may be more than one

Tags: tech, c++, type-systems, safety

Type inference in C++ can indeed lead to this kind of trap. You need to be careful, as usual.

https://devblogs.microsoft.com/oldnewthing/20251002-00/?p=111647


There's always going to be a way to not code error handling

Tags: tech, programming, safety, failure

Depending on the ecosystem it's more or less easy indeed. Let's remember that error handling is one of the hard problems to solve.

https://utcc.utoronto.ca/~cks/space/blog/programming/AlwaysUncodedErrorHandling
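In Python, the "way to not code error handling" is typically a blanket except clause. A small sketch of the contrast (the function names are mine, for illustration):

```python
import json

def parse_config_sloppy(text):
    # The escape hatch: swallow everything and hide real failures
    # behind a default value.
    try:
        return json.loads(text)
    except Exception:
        return {}

def parse_config(text):
    # Actual error handling: let the caller distinguish
    # "invalid input" from "legitimately empty config".
    try:
        return json.loads(text)
    except json.JSONDecodeError as err:
        raise ValueError(f"invalid config: {err}") from err

print(parse_config_sloppy("not json"))  # {} -- the failure is invisible
```

The sloppy version is less code today, which is exactly why it keeps getting written.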


Disallow code usage with a custom clippy.toml

Tags: tech, rust, tools, quality

Didn't know about that clippy feature. This is neat: it lets you precisely target some of your project's rules.

https://www.schneems.com/2025/11/19/find-accidental-code-usage-with-a-custom-clippytoml/
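The clippy.toml knob involved here is `disallowed-methods`; a tiny sketch (the banned method and the reason string are just an illustration, pick whatever your project wants to outlaw):

```toml
# clippy.toml -- illustrative example
disallowed-methods = [
    { path = "std::option::Option::unwrap", reason = "return an error instead of panicking" },
]
```

With this in place, `cargo clippy` flags every call site of the listed methods, turning a team convention into an enforced check.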


The Geometry Behind Normal Maps

Tags: tech, 3d, graphics, shader

Struggling to understand tangent space and normal maps? This post does a good job of explaining where it all comes from.

https://www.shlom.dev/articles/geometry-behind-normal-maps/


Know why you don't like OOP

Tags: tech, object-oriented

I don't get why object-oriented programming gets so much flak these days… It brings interesting tools and less interesting ones. Just pick and choose wisely, as with any other paradigm.

https://zylinski.se/posts/know-why-you-dont-like-oop/


Ditch your (mut)ex, you deserve better

Tags: tech, multithreading, safety

If you're dealing with multithreading, you should indeed not turn to mutexes by default. Consider higher-level primitives and patterns first.

https://chrispenner.ca/posts/mutexes
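In Python terms (my own sketch, not from the post), reaching for a higher-level primitive often means passing messages through a `queue.Queue`, which is internally synchronized, instead of guarding shared state with a `Lock`:

```python
import queue
import threading

def worker(tasks: queue.Queue, results: queue.Queue) -> None:
    # No explicit locking: the queues do all the synchronization.
    while True:
        item = tasks.get()
        if item is None:  # sentinel value: shut down
            break
        results.put(item * item)

tasks: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

for n in range(5):
    tasks.put(n)
tasks.put(None)
t.join()

squares = sorted(results.get() for _ in range(5))
print(squares)  # [0, 1, 4, 9, 16]
```

The design benefit is that no thread ever touches another thread's data directly, so whole classes of race conditions and deadlocks simply can't occur.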


Brownouts reveal system boundaries

Tags: tech, infrastructure, reliability, failure, resilience

Interesting point of view. Indeed, you probably want things to not be available 100% of the time: it forces you to see how resilient things really are.

https://jyn.dev/brownouts-reveal-system-boundaries/


Tech Leads in Scrum

Tags: tech, agile, scrum, tech-lead, leadership

Interesting move in the Scrum definitions from roles to accountabilities. The article does a good job explaining it, but then somehow falls back into talking about roles. Regarding tech leads: indeed, they can work in Scrum teams. Scrum doesn't talk about them simply because Scrum doesn't talk about technical skills.

https://www.patkua.com/blog/tech-leads-in-scrum/


How to Avoid Solo Product Leadership Failure with a Product Value Team

Tags: tech, agile, product-management

I wonder what the whole series will bring. Anyway, I very much agree with this first post. Too often projects have a single product manager, and that's a problem.

https://www.jrothman.com/mpd/2025/11/how-to-avoid-solo-product-leadership-failure-with-a-product-value-team-part-1/



Bye for now!

21 Nov 2025 10:46am GMT

Fedora People

Fedora Community Blog: Community Update – Week 47


This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.

Week: 17 November - 21 November 2025

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It's responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It's responsible for releases, retirement process of packages and package builds.
Ticket tracker

RISC-V

Forgejo

Updates of the team responsible for Fedora Forge deployment and customization.
Ticket tracker

List of new releases of apps maintained by I&R Team

Minor update of FMN from 3.3.0 to 3.4.0
Minor update of FASJSON from 1.6.0 to 1.7.0
Minor update of Noggin from 1.10.0 to 1.11.0

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update - Week 47 appeared first on Fedora Community Blog.

21 Nov 2025 10:00am GMT

Planet Ubuntu

Ubuntu Blog: Open design: the opportunity design students didn’t know they were missing

What if you could work on real-world projects, shape cutting-edge technology, collaborate with developers across the world, make a meaningful impact with your design skills, and grow your portfolio… all without applying for an internship or waiting for graduation?

That's what we aim to do with open design: an opportunity for universities and students of any design discipline.

What is open design, and why does it matter?

Before we go further, let's talk about what open design is. Many open source tools are built by developers, for developers, without design in mind. When open source software powers 90% of the digital world (PDF), it leaves everyday users feeling overwhelmed or left out. Open design wants to bridge that gap.

We aim to introduce human-centred thinking into open source development, enhancing these tools to be more intuitive, inclusive, and user-friendly. Most open source projects focus on code contributions, neglecting design contributions. That leaves a vast number of projects without a design system, accessibility audits, or onboarding documentation. That's where designers come in, helping shape better user experiences and more welcoming communities.

Open design is about more than just aesthetics. Open design helps to make technology work for people; that's exactly what open source needs. Learn more about open design on our webpage.

We want to raise awareness for the projects, the problems that currently exist, and how we can fix them together, and encourage universities and students to become advocates of open design.

We want universities to connect their students to real-world, meaningful design opportunities in a field that is currently lacking the creativity of designers. Our goal is to help and motivate students to bring their design skills into open source projects and become advocates, to make open design accessible, practical, and empowering!

How Canonical helps universities access open design

We want to help universities help students to access:

We have provided universities with talks and project briefs, enabling them to prepare students to utilise their expertise and design a brighter future for open source. If you're a department leader, instructor, or coordinator, exploring open source and open design will help you to give your students unique access to industry-aligned experiences, while embedding values of collaboration, open contribution, and inclusive design.

Why should students care?

If you're a student in UX, UI, interaction, service, visual, HCI design, or any other field with design influence, you've been told how important it is to build your portfolio, gain hands-on experience, and collaborate with cross-functional teams. Open design is your opportunity to do so.

The best part is, you don't have to write a single line of code to make a difference! Open source projects are looking for:

If you're in a design course, you already have, or are developing, the skills that open-source projects need.

Open design is an opportunity to develop by collaborating across disciplines, navigating ambiguity, and advocating for users: skills employers value. With open design, you'll gain confidence in presenting ideas, working with international teams, and handling feedback in a real-world setting, growing in ways that classroom projects and internships often don't offer.

If you're aiming for a tech-focused design career, open design is one of the most impactful and distinctive ways to stand out!

How can you start?

Getting started is easier than you think, even if GitHub looks scary at first. Here's how:

  1. Learn the basics of GitHub

We've made a video guide to understanding GitHub, and curated a list of other videos to get to grips with GitHub.

  2. Find a project on contribute.design

It's like a job board for design contributions. These projects are waiting for you.

  3. Understand the project's needs

Most projects on contribute.design list what they're looking for in a .design file or DESIGN.md guidelines.

  4. Pick an issue, or propose your own

Navigate to the Issues tab of the project repo, where you can filter for issues labelled for design. You can also use this tab to propose any issues you discover in the project.

  5. Contribute, collaborate, grow

Start adding your ideas, questions, and solutions to issues. You'll be collaborating, communicating, and making meaningful contributions.

You can explore more projects through the GitHub Explore page, but not every project will have a design process in place; that's where your skills are especially valuable. If you don't see design issues, treat the project as a blank canvas. Suggest checklists, organise a design system, or improve documentation. The power is in your hands!

Reach out to maintainers, join community discussions, and don't hesitate to introduce design-focused thinking. Your initiative can spark meaningful change and help open source become more user-friendly, one project at a time.

View every project as an opportunity; you don't need an invitation to contribute, just curiosity, creativity, and the willingness to collaborate.

Interested?

We're looking for universities and departments interested in introducing open design to their students. Whether that's through a talk, module project briefs, or anything else you'd like to see, we're excited to find ways to work together and bring open design to campus.

Are you a program director, a design department, a student group, or an interested student? Let's talk!

Reach out at opendesign@canonical.com

21 Nov 2025 9:39am GMT

Ubuntu Blog: Anbox Cloud 1.28.0 is now available!

Enhanced Android device simulation, smarter diagnostics, and OIDC-enforced authentication

The Anbox Cloud team has been working around the clock to release Anbox Cloud 1.28.0! We're very proud of this release that adds robust authentication, improved diagnostic tools, and expanded simulation options, making Anbox Cloud even more secure, flexible, and developer-friendly for running large-scale Android workloads.

Let's go over the most significant changes in this new version.

Strengthened authentication and authorization

Our OpenID Connect (OIDC)-based authentication and authorization framework is now stable with Anbox Cloud 1.28.0. This new framework provides a standardized approach for controlling access across web and command-line clients. Operators can now assign permissions through entitlements with fine-grained control, define authorization groups, and create and manage identities.

Configuring user permissions, understanding the idea of identities and groups, and looking through the entire list of available entitlements are all thoroughly covered in the new guides that come with this release. This represents a significant advancement in the direction of a more uniform and standards-based access model for all Anbox Cloud deployments.

Simulated SMS support

This is one of our most exciting new features: developers testing telephony-enabled applications in Anbox Cloud can now simulate incoming SMS messages using the Anbox runtime HTTP API.

This new functionality allows messages to trigger notifications the same way they would on a physical device, generating more realistic end-to-end scenarios. A new how-to guide in our documentation provides detailed instructions on how to enable and use this feature.

Protection against accidental deletions

Because we know accidents happen (especially in production environments…), in order to reduce operational risk, this release introduces the ability to protect instances from accidental deletion. This option can be enabled directly in the dashboard either when creating a new instance or later from the Instance details page under the Security section.

Once this protection option is turned on, the instance cannot be deleted, even during bulk delete operations, until the configuration is reset. This simple safeguard helps operators preserve important data and prevents costly mistakes in busy environments.

Improved ADB share management

Working with ADB (the Android Debug Bridge) has also become more flexible. Anbox Cloud now allows up to five ADB shares to be managed directly from the dashboard. For those who prefer the command line, the new amc connect command provides an alternative to the existing anbox-connect tool. Together, these improvements make it easier for developers to manage and maintain multiple debugging or testing sessions at once.

New diagnostic facility for troubleshooting

With version 1.28.0, we're introducing a new diagnostic facility in the dashboard. This tool is designed to simplify troubleshooting for both the instances and the streaming sessions themselves.

This feature helps collect relevant diagnostic data automatically, thereby reducing the work needed to identify and resolve issues. It also makes collaboration with our Canonical support teams more efficient, as users can now provide consistent and accurate diagnostic information in a structured, standard format.

Sensor support in the Streaming SDK

Here's another hotly anticipated feature: the Anbox Streaming SDK gains expanded sensor support in this release. Our SDK now includes gyroscope, accelerometer and orientation sensors, allowing developers to test applications more interactively.

Sensor support is disabled by default but can be easily enabled in the streaming client configuration. This addition opens up new possibilities for interactive use cases, such as gaming.

Upgrade now and stay tuned!

We think that Anbox Cloud 1.28.0 is our best release to date, and we are pleased to keep providing a feature-rich, scalable, and safe solution for managing Android workloads on a large scale.

This latest version makes it easier than ever for developers and operators to create and test Android apps by introducing more precise device simulation, improved troubleshooting tools, and stricter access controls, as we've explained above.

Try it now and stay tuned for further developments in our upcoming releases. For detailed instructions on how to upgrade your existing deployment, please refer to the official documentation.

Further reading

Official documentation
Anbox Cloud Appliance
Learn more about Anbox Cloud or contact our team to discuss your use case


Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

21 Nov 2025 8:00am GMT

Planet GNOME

Jakub Steiner: 12 months instead of 12 minutes

Hey Kids! Other than raving about GNOME.org being a static HTML, there's one more aspect I'd like to get back to in this writing exercise called a blog post.

Share card gets updated every release too

I've recently come across an appalling genAI website for a project I hold dearly, so I thought I'd give a glimpse of how we used to do things in the olden days. It is probably not going to be done this way anymore in the enshittified timeline we ended up in. The two options available these days are: a quickly generated slop website, or no website at all, because privately owned social media is where it's at.

The wanna-be-catchy title of this post comes from the fact that the website underwent numerous iterations (iteration is the core principle of good design) spanning over a year before we introduced the redesign.

So how did we end up with a 3D model of a laptop for the hero image on the GNOME website, rather than something generated in a couple of seconds (and a small town's worth of drinking water), or a simple SVG illustration?

The hero image is static now, but it used to be a scroll-based animation in the early days. It could have become a simple vector-style illustration, but I really enjoy the light interaction of the screen and the laptop, especially between the light and dark variants. Toggling dark mode has been my favorite fidget spinner.

Creating light/dark variants is a bit tedious to do manually every release, but automating it is still a bit too hard to pull off (the taking-screenshots-of-a-nightly-OS bit). There's also the fun of picking a theme for the screenshot rather than doing the same thing over and over. Doing the screenshotting manually meant automating the rest, as a 6 month cycle is enough time to forget how things are done. The process is held together with duct tape, I mean a python script, that renders the website image assets from the few screenshots captured using GNOME OS running inside Boxes. Two great invisible things made by amazing individuals that could go away in an instant, and that thought gives me a dose of anxiety.

This does take a minute to render on a laptop (CPU only Cycles), but is a matter of a single invocation and a git commit. So far it has survived a couple of Blender releases, so fingers crossed for the future.

Sophie has recently been looking into translations, so we might reconsider that 3D approach if translated screenshots become viable (and have them contained in an SVG similar to how os.gnome.org is done). So far the 3D hero has always been in sync with the release, unlike in our Wordpress days. Fingers crossed.

21 Nov 2025 7:44am GMT

Planet Debian

Daniel Kahn Gillmor: Transferring Signal on Android

Transferring a Signal account between two Android devices

I spent far too much time recently trying to get a Signal Private Messenger account to transfer from one device to another.

What I eventually found worked was a very finicky path to enable functioning "Wi-Fi Direct", which I go into below.

I also offer some troubleshooting and recovery-from-failure guidance.

All of this blogpost uses "original device" to refer to the Android pocket supercomputer that already has Signal installed and set up, and "new device" to mean the Android device that doesn't yet have Signal on it.

Why Transfer?

Signal Private Messenger is designed with the expectation that the user has a "primary device", which is either an iPhone or an Android pocket supercomputer.

If you have an existing Signal account, and try to change your primary device by backing up and restoring from backup, it looks to me like Signal will cause your long-term identity keys to be changed. This in turn causes your peers to see a message like "Your safety number with Alice has changed."

These warning messages are the same messages that they would get if an adversary were to take over your account. So it's a good idea to minimize them when there isn't an account takeover - false alarms train people to ignore real alarms.

You can avoid "safety number changed" warnings by using signal's "account transfer" process during setup, at least if you're transferring between two Android devices.

However, my experience was that the transfer between two Android devices was very difficult to get to happen at all. I ran into many errors trying to do this, until I finally found a path that worked.

Dealing with Failure

After each failed attempt at a transfer, my original device's Signal installation would need to be re-registered. Having set a PIN meant that I could re-register the device without needing to receive a text message or phone call.

Set a PIN before you transfer!

Also, after a failure, you need to re-link any "linked device" (i.e. any Signal Desktop or iPad installation). If any message came in during the aborted transfer, the linked device won't get a copy of that message.

Finally, after a failed transfer, I recommend completely uninstalling Signal from the new device and starting over with a fresh install.

Permissions

My understanding is that Signal on Android uses Wi-Fi Direct to accomplish the transfer. But to use Wi-Fi Direct, Signal needs to have the right permissions.

On each device:

Preparing for Wi-Fi Direct

The transfer process depends on "Wi-Fi Direct", which is a bit of a disaster on its own.

I found that if I couldn't get Wi-Fi Direct to work between the two devices, then the Signal transfer was guaranteed to fail.

So, for clearer debugging, I first tried to establish a Wi-Fi Direct link on Android, without Signal being involved at all.

Setting up a Wi-Fi Direct connection directly failed, multiple times, until I found the following combination of steps, to be done on each device:

I found that this configuration is the most likely to enable a successful Wi-Fi Direct connection, where clicking "invite" on one device would pop up an alert on the other asking to accept the connection, and result in a "Connected" state between the two devices.

Actually Transferring

Start with both devices fully powered up and physically close to one another (on the same desk should be fine).

On the new device:

On the original device:

Now tap the "continue" choices on both devices until they both display a message that they are searching for each other. You might see the location indicator (a green dot) turn on during this process.

If you see an immediate warning of failure on either device, you probably don't have the permissions set up right.

You might see an alert (a "toast") on one of the devices that the other one is trying to connect. You should click OK on that alert.

In my experience, both devices are likely to get stuck "searching" for each other. Wait for both devices to show Signal's warning that the search has timed out.

At this point, leave Signal open on both devices, and go through all the steps described above to prepare for Wi-Fi Direct. Your Internet access will be disabled.

Now, tap "Try again" in Signal on both devices, pressing the buttons within a few seconds of each other. You should see another alert that one device is trying to connect to the other. Press OK there.

At this point, the transfer should start happening! The old device will indicate what percentage has been transferred, and the new device will indicate how many messages have been transferred.

When this is all done, re-connect to Wi-Fi on the new device.

Temporal gap for Linked Devices

Note that during this process, if new messages are arriving, they will be queuing up for you.

When you reconnect to wi-fi, the queued messages will flow to your new device. But the process of transferring automatically unlinks any linked devices. So if you want to keep your instance of Signal Desktop with as short a gap as possible, you should re-link that installation promptly after the transfer completes.

Clean-up

After all this is done successfully, you probably want to go into the Permissions settings and turn off the Location and Nearby Devices permissions for Signal on both devices.

I recommend also going into Wi-Fi Direct and removing any connected devices and forgetting any existing connections.

Conclusion

This is an abysmally clunky user experience, and I'm glad I don't have to do it often. It would have been much simpler to make a backup and restore from it, but I didn't want to freak out my contacts with a safety number change.

By contrast, when I wanted to extend a DeltaChat account across two devices, the transfer was prompt and entirely painless -- I just had to make sure the devices were on the same network, and then scanned a QR code from one to the other. And there was no temporal gap for any other devices. And I could use Delta on both devices simultaneously until I was convinced that it would work on the new device -- Delta doesn't have the concept of a primary account.

I wish Signal made it that easy! Until it is, I hope the processes described here are useful to someone.

21 Nov 2025 5:00am GMT

Planet GNOME

This Week in GNOME: #226 Exporting Events

Update on what happened across the GNOME project in the week from November 14 to November 21.

GNOME Core Apps and Libraries

Calendar

A simple calendar application.

Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ says

Thanks to FineFindus, who previously worked on exporting events as .ics files, GNOME Calendar can now export calendars as .ics files, courtesy of merge request !615! This will be available in GNOME 50.

export-calendar-button-row.png

Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ says

Two long and painful years, several design iterations, and more than 50 rebases later, we finally merged the infamous, trauma-inducing merge request !362 on GNOME Calendar. This changes the entire design of the quick-add popover by merging both pages into one and updating the style to conform better with modern GNOME designs. Additionally, it remodels the way the popover retrieves and displays calendars, reducing the code by 120 lines.

The calendars list in the quick-add popover has undergone accessibility improvements, providing a better experience for assistive technologies and keyboard users. Specifically: tabbing from outside the list will focus the selected calendar in the list; tabbing from inside the list will skip the entire list; arrow keys automatically select the focused calendar; and finally, assistive technologies now inform the user of the checked/selected state.

Admittedly, the quick-add popover is currently unreachable via keyboard because we lack the resources to implement keyboard focus for month and week cells. We are currently trying to address this issue in merge request !564, and hope to get it merged for GNOME 50, but it's a significant undertaking for a single unpaid developer. If it is not too much trouble, I would really appreciate some donations, to keep me motivated to improve accessibility throughout GNOME and sustain myself: https://tesk.page/#donate

This merge request allowed us to close 4 issues, and will be available in GNOME 50.

new-multi-day-event.png

Files

Providing a simple and integrated way of managing your files and browsing your file system.

Peter Eisenmann says

Files landed two big changes by Khalid Abu Shawarib this week.

The first change adds a bunch of tests, bringing the total coverage of the huge code base close to 30%. This will prevent regressions in previously uncovered areas such as bookmarking or creating files.

The second change is more noticeable, as the way thumbnails are loaded was largely rewritten to finally make full use of GTK4's recycling views. It took a lot of code detangling to get thumbnails to load asynchronously, but the result is a great speedup, making thumbnails show faster than ever before. 🚀

Attached is a comparison of reloading a folder before and after the change

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

as of today, libadwaita has support for the new reduced motion preference, both supporting the @media (prefers-reduced-motion: reduce) query from CSS, and using simple crossfade transitions where appropriate (e.g. in AdwDialog, AdwNavigationView and AdwTabOverview)

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 reports

libadwaita has deprecated the style-dark.css, style-hc.css and style-hc-dark.css resources that AdwApplication automatically loads. They still work, but will be removed in 2.0. Applications are recommended to switch to style.css and media queries for dark and high contrast styles

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

Matthias Clasen reports

This week's GTK 4.21.2 release includes initial support for the CSS backdrop-filter property. The GSK APIs enabling this are new copy/paste and composite render nodes, which allow flexible reuse of the 'background' at any point in the scene graph. We are looking forward to your experiments with this!

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall says

Luca Bacci has dug into an intermittent output buffering issue with GLib on Windows, which should fix some CI issues and opt various GLib utilities into more modern features on Windows - https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4788

Third Party Projects

Alain announces

Planify 4.16.0 - Natural dates, smoother flows, and smarter task handling

This week, Planify released version 4.16.0, bringing several improvements that make task management faster, more intuitive, and more predictable on GNOME.

The highlight of this release is natural language date parsing, now enabled by default in Quick Add. You can type things like "tomorrow 3pm", "next Monday", "25/12/2024", or "ahora", and Planify will automatically convert it into a proper scheduled date. Spanish support has also been added, including expressions like mañana, pasado mañana, próxima semana, and more.

Keyboard navigation got a boost too:

  • Ctrl + D now opens the date picker instantly
  • Ctrl + K toggles "Keep adding" mode
  • And several shortcuts were cleaned up for more predictable behavior

Planify also adds label management in the task context menu, making it easier to add or remove labels without opening the full editor.

For calendar users, event items now open a richer details popover, with automatic detection of Google Meet and Microsoft Teams links, making online meetings just one click away.

As always, translations, bug fixes, and general UI refinements round out the update.

Planify 4.16.0 is available now on Flathub

Jan-Willem reports

This week I released Java-GI version 0.13.0, a Java language binding for GNOME and other libraries that support GObject-Introspection, based on OpenJDK's new FFM functionality. Some of the highlights in this release are:

  • Bindings for LibRsvg, GstApp (for GStreamer) and LibSecret have been added
  • The website for Java-GI has its own domain name now: java-gi.org, and this is also used in all module- and package names
  • Thanks to GObject-Introspection's extensive testsuite, I've implemented over 900 testcases to test the Java bindings, and fixed many bugs along the way.

I hope that Java-GI will help Java (or Kotlin, Scala, Clojure, …) developers to create awesome new GNOME apps!

Quadrapassel

Fit falling blocks together.

Will Warner says

Quadrapassel 49.2 is out! Here is what's new:

  • Updated translations: Ukrainian, Russian, Brazilian Portuguese, Chinese (China), Slovenian, Georgian
  • Made the 'P' key pause the game
  • Replaced the user help docs with a 'Game Rules' dialog
  • Stopped the menu button taking focus
  • Fixed a bug where the game's score would not be recorded when the app was quit
  • Added total rows and level information to scores

Phosh

A pure wayland shell for mobile devices.

Guido announces

Phosh 0.51.0 is out:

There's a new quick setting that allows toggling location services on/off, and the ☕ quick setting can now disable itself after a certain amount of time (check here on how to configure the intervals). We also added a toggle to enable automatic brightness from the top panel; when enabled, the brightness slider acts as an offset to the current brightness value.

phosh-brightness.png

The minimum brightness of the 🔦 brightness slider can now be configured via hwdb/udev, allowing one to go to lower values than the former hard-coded 40%. The configuration is maintained in gmobile.

If you're using Phosh on a Google Pixel 3A XL you can now enjoy haptic feedback when typing on the on screen keyboard (like users on other devices) and creating notch configurations for new devices should now be simpler as our tooling can take screen shots of the resulting UI element layout in Phosh for you.

There's more, see the full details here

phosh-torch-brightness.png

GNOME Websites

Emmanuele Bassi says

After a long time, the new user help website is now available and up to date with the latest content. The new help website replaces the static snapshot of the old library-web project, but it is still a work in progress, and contributions are welcome. Just like in the past, the content is sourced from each application, as well as from the gnome-user-docs repository. If you want to improve the documentation of GNOME components and core applications, make sure to join the #docs:gnome.org room.

Shell Extensions

Pedro Sader Azevedo announces

Foresight is a GNOME Shell extension that automatically enters the activities view on empty workspaces, making it faster to open apps and start using your computer!

This week, it gained support for GNOME 49, courtesy of gabrielpalassi. This is the second time in a row that Foresight gained support for a newer GNOME Shell version thanks to community contributions, which I'm immensely grateful for. I'm also very grateful to Just Perfection, who single-handedly holds so many responsibilities in the GNOME Shell extensions ecosystem.

The latest version of Foresight is available at EGO: https://extensions.gnome.org/extension/7901/foresight/

Happy foretelling 🔮👣

Miscellaneous

revisto reports

The Persian GNOME community was featured at the Debian 13 Release Party at Sharif University in Iran. The talk introduced GNOME, explained how the Persian community came together, highlighted its contributions (GTK/libadwaita apps, GNOME Circle involvement, translations, and fa.gnome.org), and invited newcomers to participate and contribute.

Recording available (Farsi): https://youtu.be/UPmNNygNQuc

debian-13-gnome-persian-poster.png

GNOME Foundation

ramcq reports

The GNOME Foundation board has shared details about our recently-approved balanced budget for 2024-25, as well as a note to share our thanks to Karen Sandler, as she has decided to step down from the board.

That's all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

21 Nov 2025 12:00am GMT

feedPlanet KDE | English

FAQs


21 Nov 2025 12:00am GMT

20 Nov 2025

feedOMG! Ubuntu

Use AirPods Pro Features on Linux with LibrePods

Linux mascot holds AirPods Pro against a bright purple and yellow backdrop

LibrePods brings AirPods Pro features to Linux desktops, including active noise cancellation, transparency mode, ear detection and accurate battery levels.

You're reading Use AirPods Pro Features on Linux with LibrePods, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

20 Nov 2025 11:58pm GMT

feedKernel Planet

Linux Plumbers Conference: Slides templates available

Dear speakers,

You can find the LPC 2025 slides templates in different formats in the following link:

https://drive.google.com/drive/folders/1oGQz6MXtq7fjRJS0Q7Q_oBI91g38VFOC

They were created by our designer, Zohar Nir-Amitin. Zohar has been working with LPC since 2015, and has created all our wonderful t-shirts, badges and signage designs.

20 Nov 2025 10:32pm GMT

feedPlanet Debian

Bálint Réczey: Think you can’t interpose static binaries with LD_PRELOAD? Think again!

Well, you are right, you can't. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.

But what if we could write something that would take a static binary, replace at least the direct syscalls with ones going through libc and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc's syscall - well, at least sometimes. Sometimes syscalls just bypass libc.

The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and going directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!

$ faketime '2008-12-24 08:15:42'  qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime 
test_static_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...

With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.

There is one more problem though. Why would the static binaries deep in the build be run by QEMU? Firebuild also intercepts the `exec()` calls and now it rewrites them on the fly whenever the executed binary would be statically linked!

$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileO
FD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_o
n_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=fal
se cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
    "[FBBCOMM_TAG]": "exec",
    "file": "test_static",
    "// fd": null,
    "// dirfd": null,
    "arg": [
        "./test_static"
    ],
    "env": [
        "SHELL=/bin/bash",
 ...
        "FB_SOCKET=/tmp/firebuild.cpMn75/socket",
        "_=./test_static"
    ],
    "with_p": false,
    "// path": null,
    "utime_u": 0,
    "stime_u": 1017
}
FIREBUILD: -> proc_ic_msg()  (message_processor.cc:782)  proc={ExecedProcess 161077.1, running, "bash -c 
./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOF
D #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=F
D_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD:   -> send_fbb()  (utils.cc:292)  conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
    "[FBBCOMM_TAG]": "rewritten_args",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn()  (firebuild.cc:139)  listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
    "[FBBCOMM_TAG]": "scproc_query",
    "pid": 161077,
    "ppid": 161073,
    "cwd": "/home/rbalint/projects/firebuild/test",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "env_var": [
        "CCACHE_DISABLE=1",
...
        "SHELL=/bin/bash",
        "SHLVL=0",
        "_=./test_static"
    ],
    "umask": "0002",
    "jobserver_fds": [],
    "// jobserver_fifo": null,
    "executable": "/usr/bin/qemu-user-interposable",
    "// executed_path": null,
    "// original_executed_path": null,
    "libs": [
        "/lib/x86_64-linux-gnu/libatomic.so.1",
        "/lib/x86_64-linux-gnu/libc.so.6",
        "/lib/x86_64-linux-gnu/libglib-2.0.so.0",
        "/lib/x86_64-linux-gnu/libm.so.6",
        "/lib/x86_64-linux-gnu/libpcre2-8.so.0",
        "/lib64/ld-linux-x86-64.so.2"
    ],
    "version": "0.8.5.1"
}

The QEMU patch is forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit - not just Firebuild.

For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously "invisible" steps in your builds? All now fair game for caching.

Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you're using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.

Static binaries, welcome to the party!

20 Nov 2025 8:56pm GMT


19 Nov 2025

feedPlanet GNOME

Philip Withnall: Parental controls screen time limits backend

Ignacy blogged recently about all the parts of the user interface for screen time limits in parental controls in GNOME. He's been doing great work pulling that all together, while I have been working on the backend side of things. We're aiming for this screen time limits feature to appear in GNOME 50.

High level design

There's a design document which is the canonical reference for the design of the backend, but to summarise it at a high level: there's a stateless daemon, malcontent-timerd, which receives logs of the child user's time usage of the computer from gnome-shell in the child's session. For example, when the child stops using the computer, gnome-shell will send the start and end times of the most recent period of usage. The daemon deduplicates/merges and stores them. The parent has set a screen time policy for the child, which says how much time they're allowed on the computer per day (for example, 4h at most; or only allowed to use the computer between 15:00 and 17:00). The policy is stored against the child user in accounts-service.

malcontent-timerd applies this policy to the child's usage information to calculate an 'estimated end time' for the child's current session, assuming that they continue to use the computer without taking a break. If they stop or take a break, their usage - and hence the estimated end time - is updated.

The child's gnome-shell is notified of changes to the estimated end time and, once it's reached, locks the child's session (with appropriate advance warning).

Meanwhile, the parent can query the child's computer usage via a separate API to malcontent-timerd. This returns the child's total screen time usage per day, which allows the usage chart to be shown to the parent in the parental controls user interface (malcontent-control). The daemon imposes access controls on which users can query for usage information. Because the daemon can be accessed by the child and by the parent, and needs to be write-only for the child and read-only for the parent, it has to be a system daemon.

There's a third API flow which allows the child to request an extension to their screen time for the day, but that's perhaps a topic for a separate post.

IPC diagram of screen time limits support in malcontent. Screen time limit extensions are shown in dashed arrows.

So, at its core, malcontent-timerd is a time range store with some policy and a couple of D-Bus interfaces built on top.

Per-app time limits

Currently it only supports time limits for login sessions, but it is built in such a way that adding support for time limits for specific apps would be straightforward to add to malcontent-timerd in future. The main work required for that would be in gnome-shell - recording usage on a per-app basis (for apps which have limits applied), and enforcing those limits by freezing or blocking access to apps once the time runs out. There are some interesting user experience questions to think about there before anyone can implement it - how do you prevent a user from continuing to use an app without risking data loss (for example, by killing it)? How do you unambiguously remind the user they're running out of time for a specific app? Can we reliably find all the windows associated with a certain app? Can we reliably instruct apps to save their state when they run out of time, to reduce the risk of data loss? There are a number of bits of architecture we'd need to get in place before per-app limits could happen.

Wrapping up

As it stands though, the grant funding for parental controls is coming to an end. Ignacy will be continuing to work on the UI for some more weeks, but my time on it is basically up. With the funding, we've managed to implement digital wellbeing (screen time limits and break reminders for adults) including a whole UI for it in gnome-control-center and a fairly complex state machine for tracking your usage in gnome-shell; a refreshed UI for parental controls; parental controls screen time limits as described above; the backend for web filtering (but more on that in a future post); and everything is structured so that the extra features we want in future should bolt on nicely.

While the features may be simple to describe, the implementation spans four projects, two buses, contains three new system daemons, two new system data stores, and three fairly unique new widgets. It's tackled all sorts of interesting user design questions (and continues to do so). It's fully documented, has some unit tests (but not as many as I'd like), and can be integration tested using sysexts. The new widgets are localisable, accessible, and work in dark and light mode. There are even man pages. I'm quite pleased with how it's all come together.

It's been a team effort from a lot of people! Code, design, input and review (in no particular order): Ignacy, Allan, Sam, Florian, Sebastian, Matthijs, Felipe, Rob. Thank you Endless for the grant and the original work on parental controls. Administratively, thank you to everyone at the GNOME Foundation for handling the grant and paperwork; and thank you to the freedesktop.org admins for providing project hosting for malcontent!

19 Nov 2025 11:39pm GMT

feedOMG! Ubuntu

TABS API is Mozilla’s Latest Bet on the Agentic Web

Robotic skeleton pointing at a CAPTCHA box with the text “I’m not a robot” while surrounded by an abstract globe and multiple blank documents

Mozilla's new TABS API helps developers build AI agents to automate web tasks, as the company continues to bet on AI as its future. Details, pricing, and links inside.

You're reading TABS API is Mozilla's Latest Bet on the Agentic Web, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

19 Nov 2025 4:11pm GMT

18 Nov 2025

feedPlanet Arch Linux

Self-hosting DNS for no fun, but a little profit!

After Gandi was bought up and started charging extortion-level prices for their domains, I've been looking for an excuse to migrate registrars. Last week I decided to bite the bullet and move to Porkbun as I have another domain renewal coming up. However, after setting up an account and paying for the transfer of 4 domains, I realized their DNS services are provided by Cloudflare! I personally do not use Cloudflare, and stay far away from all of their products for various reasons.

18 Nov 2025 12:00am GMT

16 Nov 2025

feedKernel Planet

Brendan Gregg: Third Stage Engineering

The real performance of any computer hardware in production is the result of the hardware, software, and tuning; the investment and sequence of these efforts can be pictured as a three-stage rocket:

I recently presented this embarrassingly simple diagram to Intel's executive leadership, and at the time realized the value of sharing it publicly. The Internet is awash with comparisons about Intel (and other vendors') product performance based on hardware performance alone, but the performance of software and then tuning can make a huge difference for your particular workload. You need all three stages to reach the highest, and most competitive, performance.

It's obvious why this is important for HW vendors to understand internally - they, like the Internet, can get overly focused on HW alone. But customers need to understand it as well. If a benchmark is comparing TensorFlow performance between HW vendors, was the Intel hardware tested using the Intel Extension for TensorFlow software, and was it then tuned? The most accurate and realistic evaluation for HW involves selecting the best software and then tuning it, and doing this for all HW options.

I spend a lot of time on the final stage, tuning - what I call third-stage engineering. It's composed of roughly four parts: People, training, tools, and capabilities. You need staff, you need them trained to understand performance methodologies and SW and HW internals, they need tools to analyze the system (both observational and experimental), and finally they need capabilities to tune (tunable parameters, settings, config, code changes, etc.).

I see too many HW evaluations that are trying to understand customer performance but are considering HW alone, which is like only testing the first stage of a rocket. This doesn't help vendors or customers. I hope that's what my simple diagram makes obvious: We need all three stages to reach the highest altitude.

16 Nov 2025 1:00pm GMT

06 Nov 2025

feedPlanet Arch Linux

waydroid >= 1.5.4-3 update may require manual intervention

The waydroid package prior to version 1.5.4-2 (including aur/waydroid) creates Python byte-code files (.pyc) at runtime which were untracked by pacman. This issue has been fixed in 1.5.4-3, where byte-compiling these files is now done during the packaging process. As a result, the upgrade may conflict with the unowned files created in previous versions. If you encounter errors like the following during the update:

error: failed to commit transaction (conflicting files)
waydroid: /usr/lib/waydroid/tools/__pycache__/__init__.cpython-313.pyc exists in filesystem
waydroid: /usr/lib/waydroid/tools/actions/__pycache__/__init__.cpython-313.pyc exists in filesystem
waydroid: /usr/lib/waydroid/tools/actions/__pycache__/app_manager.cpython-313.pyc exists in filesystem

You can safely overwrite these files by running the following command:

pacman -Syu --overwrite /usr/lib/waydroid/tools/\*__pycache__/\*

06 Nov 2025 12:00am GMT

31 Oct 2025

feedPlanet Arch Linux

dovecot >= 2.4 requires manual intervention

The dovecot 2.4 release branch has made breaking changes which render it incompatible with any <= 2.3 configuration file. Thus, the dovecot service will no longer be able to start until the configuration file has been migrated, requiring manual intervention. For guidance on the 2.3-to-2.4 migration, please refer to the following upstream documentation: Upgrading Dovecot CE from 2.3 to 2.4. Furthermore, the dovecot 2.4 branch no longer supports the replication feature; it was removed. For users relying on the replication feature, or who are unable to perform the 2.4 migration right now, we provide alternative packages available in [extra]:

The dovecot 2.3 release branch is going to receive critical security fixes from upstream until stated otherwise.

31 Oct 2025 12:00am GMT

15 Oct 2025

feedPlanet Maemo

Dzzee 1.9.0 for N800/N810/N900/N9/Leste

I was playing around with Xlib this summer, and one thing led to another, and here we are with four fresh ports to retro mobile X11 platforms. There is even a Maemo Leste port, but due to some SGX driver woes on the N900, I opted for using XSHM and software rendering, which works well and has the nice, crisp pixel look (on Fremantle, it's using EGL+GLESv2). Even the N8x0 port has very fluid motion by utilizing Xv for blitting software-rendered pixels to the screen. The game is available over at itch.io.






15 Oct 2025 11:31am GMT

12 Oct 2025

feedPlanet Gentoo

How we incidentally uncovered a 7-year old bug in gentoo-ci

"Gentoo CI" is the service providing periodic linting for the Gentoo repository. It is a part of the Repository mirror and CI project that I've started in 2015. Of course, it all started as a temporary third-party solution, but it persisted, was integrated into Gentoo Infrastructure and grew organically into quite a monstrosity.

It's imperfect in many ways. In particular, it has only some degree of error recovery and when things go wrong beyond that, it requires a manual fix. Often the "fix" is to stop mirroring a problematic repository. Over time, I've started having serious doubts about the project, and proposed sunsetting most of it.

Lately, things have been getting worse. What started as a minor change in behavior of Git triggered a whole cascade of failures, leading to me finally announcing the deadline for sunsetting the mirroring of third-party repositories, and starting ripping non-critical bits out of it. Interestingly enough, this whole process led me to finally discover the root cause of most of these failures - a bug that had existed since the very early version of the code, but happened to be hidden by the hacky error recovery code. Here's the story of it.


Repository mirror and CI is basically a bunch of shell scripts with Python helpers run via a cronjob (repo-mirror-ci code). The scripts are responsible for syncing the lot of public Gentoo repositories, generating caches for them, publishing them onto our mirror repositories, and finally running pkgcheck on the Gentoo repository. Most of the "unexpected" error handling is set -e -x, with dumb logging to a file, and mailing on a cronjob failure. Some common errors are handled gracefully though - sync errors, pkgcheck failures and so on.

The whole cascade started when Git was upgraded on the server. The upgrade involved a change in behavior where git checkout -- ${branch} stopped working; you could only specify files after the --. The fix was trivial enough.
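
The breakage is easy to reproduce in a scratch repository. The snippet below is a minimal sketch (the repository and branch names are made up, and are not from the actual repo-mirror-ci scripts) of the newer Git behavior, where everything after -- is taken as a pathspec, alongside the trivial fix of naming the branch without the separator:

```shell
#!/bin/sh
# Minimal sketch (made-up repo/branch names): with newer Git, arguments
# after "--" are taken as pathspecs, so "git checkout -- <branch>" no
# longer switches branches.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m init
git branch stable

# Old invocation: "stable" after "--" is interpreted as a file name and
# the command fails, since no such file is tracked.
git checkout -- stable 2>/dev/null || echo "checkout with -- failed"

# Fix: name the branch directly (or use "git switch").
git checkout -q stable
git symbolic-ref --short HEAD    # prints: stable
```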

However, once the issue was fixed I've started periodically seeing sync failures from the Gentoo repository. The scripts had a very dumb way of handling sync failures: if syncing failed, they removed the local copy entirely and tried again. This generally made sense - say, if upstream renamed the main branch, git pull would fail but a fresh clone would be a cheap fix. However, the Gentoo repository is quite big and when it gets removed due to sync failure, cloning it afresh from the Gentoo infrastructure failed.

So when it failed, I did a quick hack - I cloned the repository manually from GitHub, replaced the remote and put it in place. Problem solved. Except a while later, the same issue surfaced. This time I kept an additional local clone, so I wouldn't have to fetch it from the server, and added it again. But then, it got removed once more, and this was really getting tedious.

What I assumed then was that the repository was failing to sync due to some temporary problems, either network- or Infrastructure-related. If that were the case, it really made no sense to remove it and clone afresh. On top of that, since we are sunsetting support for third-party repositories anyway, there is no need for automatic recovery from issues such as branch name changes. So I removed that logic, to have sync fail immediately, without removing the local copy.

Now, this had important consequences. Previously, any failed sync would result in the repository being removed and cloned again, leaving no trace of the original error. On top of that, a logic stopping the script early when the Gentoo repository failed meant that the actual error wasn't even saved, leaving me only with the subsequent clone failures.

When the sync failed again (and of course it did), I was able to actually investigate what was wrong. What had actually happened was that the repository wasn't on a branch - the checkout was detached at some commit. Initially, I assumed this was some fluke, perhaps also related to the Git upgrade. I switched manually back to master, and that fixed it. Then it broke again. And again.

So far I had mostly been dealing with the failures asynchronously - I wasn't around at the time of the initial failure, and only started working on it after a few failed runs. However, the issue finally resurfaced fast enough that I was able to connect the dots: the problem appeared immediately after gentoo-ci hit a bad commit and bisected it! So I started suspecting that there was another issue in the scripts, perhaps another case of a missed --, but I couldn't find anything relevant.

Finally, I started looking at the post-bisect code. What we were doing was calling git rev-parse HEAD prior to the bisect, and then using that result in git checkout afterwards. Since rev-parse yields a commit hash rather than a branch name, checking it out obviously left us with a detached HEAD after every bisect - precisely the issue I was seeing. So why didn't I notice this before?

Of course, because of the sync error handling. Once a bisect broke the repository, the next sync failed and the repository got cloned again, and we never noticed anything was wrong. We only started noticing once cloning started failing. So after a few days of confusion and false leads, I finally fixed a bug that had been present in production code for over 7 years, and that caused the Gentoo repository to be cloned over and over again whenever a bad commit happened.
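The bug is easy to reproduce in a throwaway repository: restoring the output of git rev-parse HEAD leaves a detached HEAD, while restoring the branch name (here obtained via git symbolic-ref) does not. A sketch, not the actual repo-mirror-ci code:

```shell
# Demonstrate why restoring a saved SHA detaches HEAD, while restoring
# a saved branch name does not.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master .
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m one
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m two

saved_sha=$(git rev-parse HEAD)                 # what the buggy code saved
saved_branch=$(git symbolic-ref --short HEAD)   # what it should have saved

git checkout -q "$saved_sha"                    # simulate the post-bisect restore
buggy_state=$(git symbolic-ref -q --short HEAD || echo detached)

git checkout -q "$saved_branch"
fixed_state=$(git symbolic-ref -q --short HEAD || echo detached)
echo "restore by sha: $buggy_state; restore by branch: $fixed_state"
```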

12 Oct 2025 9:14am GMT

26 Jul 2025

feedPlanet Gentoo

EPYTEST_PLUGINS and other goodies now in Gentoo

If you are following the gentoo-dev mailing list, you may have noticed that there's been a fair number of patches sent for the Python eclasses recently. Most of them have been centered on pytest support. Long story short, I came up with what I believe to be a reasonably good design, and decided it was time to stop manually repeating all the good practices in every ebuild separately.

In this post, I am going to briefly summarize all the recently added options. As always, they are all also documented in the Gentoo Python Guide.

The unceasing fight against plugin autoloading

The pytest test loader defaults to automatically loading all the plugins installed to the system. While this is usually quite convenient, especially when you're testing in a virtual environment, it can get quite messy when you're testing against system packages and end up with lots of different plugins installed. The results can range from slowing tests down to completely breaking the test suite.

Our initial attempts to contain the situation were based on maintaining a list of known-bad plugins and explicitly disabling their autoloading. The list of disabled plugins has gotten quite long by now. It includes both plugins that were known to frequently break tests, and those that frequently resulted in automagic dependencies.

While the opt-out approach allowed us to resolve the worst issues, it only worked when we knew about a particular issue. So naturally we'd miss some rarer issues, and learn about them only when arch testing workflows failed or users reported them. And of course, we would still be loading loads of unnecessary plugins at the cost of performance.

So, we started disabling autoloading entirely, using the PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable. At first we only used it when we needed to, but over time we started using it almost everywhere - after all, we don't want test suites to suddenly start failing because of a newly installed pytest plugin.

For a long time, I have been hesitant to disable autoloading by default. My main concern was that it's easy to miss a missing plugin. Say, if you ended up failing to load pytest-asyncio or a similar plugin, all the asynchronous tests would simply be skipped (verbosely, but it's still easy to miss among the flood of warnings). However, eventually we started treating this warning as an error (and then pytest started doing the same upstream), and I have decided that going opt-in is worth the risk. After all, we were already disabling it all over the place anyway.

EPYTEST_PLUGINS

Disabling plugin autoloading is only the first part of the solution. Once you've disabled autoloading, you need to load the plugins explicitly - it's no longer sufficient to add them as test dependencies; you also need to add a bunch of -p switches. And then you need to keep the dependencies and the pytest switches in sync. So you'd end up with bits like:

BDEPEND="
  test? (
    dev-python/flaky[${PYTHON_USEDEP}]
    dev-python/pytest-asyncio[${PYTHON_USEDEP}]
    dev-python/pytest-timeout[${PYTHON_USEDEP}]
  )
"

distutils_enable_tests pytest

python_test() {
  local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
  epytest -p asyncio -p flaky -p timeout
}

Not very efficient, right? The idea then is to replace all that with a single EPYTEST_PLUGINS variable:

EPYTEST_PLUGINS=( flaky pytest-{asyncio,timeout} )
distutils_enable_tests pytest

And that's it! EPYTEST_PLUGINS takes a bunch of Gentoo package names (without category - almost all of them reside in dev-python/, and we can special-case the few that do not), distutils_enable_tests adds the dependencies and epytest (in the default python_test() implementation) disables autoloading and passes the necessary flags.

Now, what's really cool is that the function will automatically determine the correct argument values! This can be especially important if entry point names change between package versions - and upstreams generally don't consider this an issue, since autoloading isn't affected.
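For illustration, here is a rough sketch of what the default python_test() effectively ends up doing with the variable above; the plugin_entry map is a hypothetical stand-in for the eclass logic that determines the real entry-point names from the installed packages:

```shell
# Sketch: translate EPYTEST_PLUGINS package names into explicit -p flags
# while autoloading is disabled.  The name map below is made up for the
# example; the eclass resolves the real entry-point names itself.
declare -A plugin_entry=(
  [flaky]=flaky
  [pytest-asyncio]=asyncio
  [pytest-timeout]=timeout
)
EPYTEST_PLUGINS=( flaky pytest-{asyncio,timeout} )

export PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
args=()
for pkg in "${EPYTEST_PLUGINS[@]}"; do
  args+=( -p "${plugin_entry[$pkg]}" )
done
cmd="epytest ${args[*]}"
echo "$cmd"
```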

Going towards no autoloading by default

Okay, that gives us a nice way of specifying which plugins to load. However, weren't we talking of disabling autoloading by default?

Well, yes - and the intent is that it's going to be disabled by default in EAPI 9. However, until then there's a simple solution we encourage everyone to use: set an empty EPYTEST_PLUGINS. So:

EPYTEST_PLUGINS=()
distutils_enable_tests pytest

…and that's it. When it's set to an empty list, autoloading is disabled. When it's unset, it is enabled for backwards compatibility. And the next pkgcheck release is going to suggest it:

dev-python/a2wsgi
  EPyTestPluginsSuggestion: version 1.10.10: EPYTEST_PLUGINS can be used to control pytest plugins loaded
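The unset-versus-empty distinction relies on bash being able to tell an unset array from an empty one; a minimal sketch of that check (assuming bash ≥ 4.4; not the actual eclass code):

```shell
# Distinguish "unset" (keep autoloading for backwards compatibility)
# from "set but empty" (disable autoloading).
autoload_state() {
  if ! declare -p EPYTEST_PLUGINS &>/dev/null; then
    echo enabled     # variable unset: backwards-compatible default
  else
    echo disabled    # variable set, even to an empty list
  fi
}

unset EPYTEST_PLUGINS
state_unset=$(autoload_state)
EPYTEST_PLUGINS=()
state_empty=$(autoload_state)
echo "unset: $state_unset, empty: $state_empty"
```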

EPYTEST_PLUGIN* to deal with special cases

While the basic feature is neat, it is not a silver bullet. The approach used is insufficient for some packages, most notably pytest plugins that run pytest subprocesses without the appropriate -p options and expect plugins to be autoloaded there. However, after some more fiddling, we arrived at three helpful features:

  1. EPYTEST_PLUGIN_LOAD_VIA_ENV that switches explicit plugin loading from -p arguments to PYTEST_PLUGINS environment variable. This greatly increases the chance that subprocesses will load the specified plugins as well, though it is more likely to cause issues such as plugins being loaded twice (and therefore is not the default). And as a nicety, the eclass takes care of finding out the correct values, again.
  2. EPYTEST_PLUGIN_AUTOLOAD to reenable autoloading, effectively making EPYTEST_PLUGINS responsible only for adding dependencies. It's really intended to be used as a last resort, and mostly for future EAPIs when autoloading will be disabled by default.
  3. Additionally, EPYTEST_PLUGINS can accept the name of the package itself (i.e. ${PN}) - in which case it will not add a dependency, but load the just-built plugin.

How useful is that? Compare:

BDEPEND="
  test? (
    dev-python/pytest-datadir[${PYTHON_USEDEP}]
  )
"

distutils_enable_tests pytest

python_test() {
  local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
  local -x PYTEST_PLUGINS=pytest_datadir.plugin,pytest_regressions.plugin
  epytest
}

…and:

EPYTEST_PLUGINS=( "${PN}" pytest-datadir )
EPYTEST_PLUGIN_LOAD_VIA_ENV=1
distutils_enable_tests pytest

Old and new bits: common plugins

The eclass already had some bits related to enabling common plugins. Given that EPYTEST_PLUGINS only takes care of loading plugins, but not passing specific arguments to them, they are still meaningful. Furthermore, we've added EPYTEST_RERUNS.

The current list is:

  1. EPYTEST_RERUNS=... that takes a number of reruns and uses pytest-rerunfailures to retry failing tests the specified number of times.
  2. EPYTEST_TIMEOUT=... that takes a number of seconds and uses pytest-timeout to force a timeout if a single test does not complete within the specified time.
  3. EPYTEST_XDIST=1 that enables parallel testing using pytest-xdist, if the user allows multiple test jobs. The number of test jobs can be controlled (by the user) by setting EPYTEST_JOBS with a fallback to inferring from MAKEOPTS (setting to 1 disables the plugin entirely).

The variables automatically add the needed plugin, so they do not need to be repeated in EPYTEST_PLUGINS.
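The EPYTEST_JOBS fallback can be sketched like this (hypothetical helper name; not the eclass code):

```shell
# Prefer an explicit EPYTEST_JOBS, otherwise pull a -j value out of
# MAKEOPTS, defaulting to 1 (which disables pytest-xdist entirely).
test_jobs() {
  if [[ -n ${EPYTEST_JOBS} ]]; then
    echo "${EPYTEST_JOBS}"
  elif [[ ${MAKEOPTS} =~ -j[[:space:]]*([0-9]+) ]]; then
    echo "${BASH_REMATCH[1]}"
  else
    echo 1
  fi
}

unset EPYTEST_JOBS
MAKEOPTS="-j8 -l9"
jobs_from_makeopts=$(test_jobs)
EPYTEST_JOBS=4
jobs_explicit=$(test_jobs)
echo "from MAKEOPTS: $jobs_from_makeopts, explicit: $jobs_explicit"
```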

JUnit XML output and gpy-junit2deselect

As an extra treat, we ask pytest to generate JUnit-style XML output for each test run, which can be used for machine processing of test results. gpyutils now supplies a gpy-junit2deselect tool that can parse this XML and output a handy EPYTEST_DESELECT for the failing tests:

$ gpy-junit2deselect /tmp/portage/dev-python/aiohttp-3.12.14/temp/pytest-xml/python3.13-QFr.xml
EPYTEST_DESELECT=(
  tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_nonzero_passed
  tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_passed_to_create_connection
  tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_zero_not_passed
)

While it doesn't replace due diligence, it can help you update long lists of deselects. As a bonus, it automatically collapses deselects to test functions, classes and files when all matching tests fail.
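As a rough illustration (not the actual gpy-junit2deselect implementation), failing testcases can be pulled out of a minimal JUnit XML like this; the real tool additionally maps classnames back to file paths and collapses deselects to classes and files:

```shell
# Extract failing <testcase> entries from a (made-up, minimal) pytest
# JUnit XML, printing them as classname::test deselect entries.
xml='<testsuite>
<testcase classname="tests.test_connector" name="test_ok"/>
<testcase classname="tests.test_connector" name="test_bad"><failure/></testcase>
</testsuite>'
deselects=$(printf '%s\n' "$xml" |
  sed -n 's/.*classname="\([^"]*\)" name="\([^"]*\)".*<failure.*/\1::\2/p')
echo "$deselects"
```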

hypothesis-gentoo to deal with health check nightmare

Hypothesis is a popular Python fuzz testing library. Unfortunately, it has one feature that, while useful upstream, is pretty annoying to downstream testers: health checks.

The idea behind health checks is to make sure that fuzz testing remains efficient. For example, Hypothesis is going to fail if the routine used to generate examples is too slow. And as you can guess, "too slow" is more likely to happen on a busy Gentoo system than on dedicated upstream CI. Not to mention some upstreams plain ignore health check failures if they happen rarely.

Given how often this broke for us, we have requested an option to disable Hypothesis health checks long ago. Unfortunately, upstream's answer can be summarized as: "it's up to packages using Hypothesis to provide such an option, and you should not be running fuzz testing downstream anyway". Easy to say.

Well, obviously we are not going to pursue every single package using Hypothesis to add a profile with health checks disabled. We did report health check failures sometimes, and sometimes got no response at all. And skipping these tests is not really an option, given that often there are no other tests for a given function - and even if there are, it's just going to be a maintenance nightmare.

I've finally figured out that we can create a Hypothesis plugin - now hypothesis-gentoo - that provides a dedicated "gentoo" profile with all health checks disabled, and then we can simply use this profile in epytest. And how do we know that Hypothesis is used? Of course we look at EPYTEST_PLUGINS! All pieces fall into place. It's not 100% foolproof, but health check problems aren't that common either.

Summary

I have to say that I really like what we achieved here. Over the years, we learned a lot about pytest, and used that knowledge to improve testing in Gentoo. And after repeating the same patterns for years, we have finally replaced them with eclass functions that can largely work out of the box. This is a major step forward.

26 Jul 2025 1:29pm GMT

05 Jun 2025

feedPlanet Maemo

Mobile blogging, the past and the future

This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms and with different ways to publish, but what's common is that at almost every point there was a mechanism to publish while on the move.

Psion, documents over FTP

In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blog posts while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much ahead of its time.

Psion S5, also known as the Ancestor

The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of use. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

If we wanted to include photos in the stories, we'd have to find an Internet cafe.

SMS and MMS

For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.

As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.

As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

Photos over email

A much easier setup than MMS was to return somewhat to the old Psion approach, but instead of word-processor documents over FTP, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.

And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

My blog from that era

Pause

Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

In the meanwhile the blog also got migrated to a Jekyll-based system hosted on AWS. That means the old Midgard-based integrations were off the table.

And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.

But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

Starlink, Internet from Outer Space

Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it; the dishy itself, or the way we power it, may also fail.

But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

Inreach, texting with the cloud

Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.

One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.

Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.

With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.


05 Jun 2025 12:00am GMT

30 Apr 2025

feedPlanet Gentoo

Urgent - OSU Open Source Lab needs your help

Oregon State University's Open Source Lab (OSL) has been a major supporter of Gentoo Linux and many other software projects for years. It is currently hosting several of our infrastructure servers as well as development machines for exotic architectures, and is critical for Gentoo operation.

Due to drops in sponsor contributions, OSL has been operating at a loss for a while, with the OSU College of Engineering picking up the rest of the bill. Now that university funding has been cut, this is no longer possible, and unless US$ 250,000 can be provided within the next two weeks, OSL will have to shut down. The details can be found in a blog post by Lance Albertson, the director of OSL.

Please, if you value and use Gentoo Linux or any of the other projects that OSL has been supporting, and if you are in a position to make funds available - or if this is true for the company you work for - contact the address in the blog post. Obviously, long-term corporate sponsorships would serve best here; for what it's worth, OSL alumni have ended up at almost every big US tech corporation by now. Right now, though, probably everything helps.

30 Apr 2025 5:00am GMT

16 Oct 2024

feedPlanet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes:

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on certain Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate, because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private's buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add those to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I asked it for its level as well. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.
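The smoothing idea can be sketched with a trailing moving average (window size and sample values below are made up; the WebKit code is C++, this is just the technique):

```shell
# Smooth a noisy buffering-level series with a trailing moving average
# of window 3: isolated 0/100 spikes get pulled toward the middle.
smoothed=$(awk 'BEGIN {
  split("0 100 0 100 50 60 55", v, " ")
  n = 3                                   # window size
  out = ""
  for (i = 1; i <= 7; i++) {
    sum = 0; cnt = 0
    for (j = i - n + 1; j <= i; j++)
      if (j >= 1) { sum += v[j]; cnt++ }  # shorter window at the start
    out = out (i > 1 ? " " : "") sprintf("%.0f", sum / cnt)
  }
  print out
}')
echo "$smoothed"
```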

All these tweaks only made sense on Broadcom platforms, so they were guarded by ifdefs in the first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that had previously been part of MediaPlayerPrivateGStreamer. They still had to be somehow linked to the player private, but accessible only by the platform-specific code of the quirks. A special HashMap attribute stores those quirk attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility for creating the unique_ptr that stores the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening, and a careful analysis of the causes showed that the buffering level was being monitored from different places (at different moments), so that sometimes the level was regarded as "enough" and, a moment later, as "insufficient". This was because the buffering level threshold was a single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical change to "low" only when it goes under the low watermark.
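The hysteresis logic itself is tiny; a language-agnostic sketch with made-up watermark values (the WebKit code is C++, this just shows the state machine):

```shell
# Two-watermark hysteresis: the logical state only flips when the level
# crosses the high watermark upwards or the low watermark downwards.
LOW=20 HIGH=80
state=low
states=()
update_level() {
  if [[ $state == low && $1 -ge $HIGH ]]; then
    state=full
  elif [[ $state == full && $1 -lt $LOW ]]; then
    state=low
  fi
}
for level in 10 50 85 70 30 15; do
  update_level "$level"
  states+=( "$state" )
done
echo "${states[*]}"   # mid-range levels (50, 70, 30) no longer flip the state
```

Note how a single-threshold check at, say, 50 would have flipped the state four times over the same series.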

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, and now WebKit GStreamer has much more robust buffering code than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices, interconnected by a new, efficient but still transparent TDMoIP protocol, in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. Today was the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (one pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup certainly has the capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces, but I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-bus) connectivity will become available.

Acknowledgements

I'd like to thank everyone helping this effort, specifically:

  * Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  * noris.net for sponsoring the co-location
  * sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k-based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow), received by one TE820 card, and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus is fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
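As a back-of-the-envelope check (my arithmetic, not from the post): at 12.5 ppb against the 2.048 Mbit/s E1 bit rate, the slave slips by a whole bit roughly every 39 seconds:

```shell
# How long until a 12.5 ppb drift accumulates one full bit of slip
# at the E1 bit rate of 2.048 Mbit/s?
slip=$(awk 'BEGIN {
  drift_ppb = 12.5
  bitrate   = 2048000                  # E1 bit rate, bits per second
  printf "%.0f", 1 / (drift_ppb * 1e-9 * bitrate)
}')
echo "one bit of slip roughly every $slip seconds"
```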

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT