05 Aug 2025

Fedora People

Fedora Infrastructure Status: Updates and Reboots

05 Aug 2025 9:00pm GMT

31 Jul 2025

Linuxiac

DuckStation PS1 Emulator Dev May Drop Linux Support After AUR Frustrations

After repeated complaints from Arch users, the DuckStation PS1 emulator dev removed the PKGBUILD and is considering dropping Linux support altogether.

31 Jul 2025 10:46am GMT

LXer Linux News

Archinstall 3.0.9 Rolls Out with U2F and Bluetooth Support

Archinstall 3.0.9, a guided installer for Arch Linux, adds U2F authentication, LUKS iteration tweaks, and Bluetooth support.

31 Jul 2025 10:05am GMT

Linux Begins Preparing For The Lenovo Legion Go 2 Handheld

In recent days there has been an increasing flow of leaks surrounding the Legion Go 2, the next-generation handheld from Lenovo. The Lenovo Legion Go 2 is reported to be launching later this year with an AMD Ryzen Z2 Extreme SoC, a 144Hz OLED display, and a variety of other hardware upgrades over the original Lenovo Legion Go. Linux driver activity around the Legion Go 2 has begun...

31 Jul 2025 8:33am GMT

Planet GNOME

Thibault Martin: TIL that Micro habits can bring you down

You don't need trauma to be depressed. A lot of people don't know why they are depressed and think they have no reason to be. Emma McAdam believes this is due to micro habits that build up negativity, outweigh the positive in people's lives, and make them feel depressed.

According to her, our nervous system has a shutdown mode: past a tipping point, it reinforces negativity and makes us flip from "It's okay, I can make it" to "This is pointless, I give up."

McAdam lists these habits:

  1. Dismissing the positive, like rejecting compliments or discounting the positive, e.g. "I did it but anyone can do that" or "It's a beautiful sunset, that must be pollution."
  2. Self-punishment when you make a mistake, e.g. "I'm such a loser, why can't I do that." Many people believe that self-criticism motivates improvement but research shows it fuels shame and inaction.
  3. Blaming yourself for having emotions, e.g. "I feel guilty for being depressed," "I'm weak for having anxiety." It is normal to feel emotions, including negative ones.
  4. Withdrawing from life, either avoiding opportunities or canceling plans, to avoid facing difficult situations. This leads to isolation, which makes you feel worse, which in turn leads to more isolation.
  5. Numbing behaviors. Past generations raised their kids to bottle up their emotions, and as a result people often distract themselves from their emotions by eating something, looking at their phone, or even becoming workaholics to escape a poor self-image. These are strategies for not feeling bad, instead of accepting the feeling and working through it.
  6. Rumination, thinking about the past, everything you screwed up or could have done better. The brain mistakes this for problem solving but it keeps you stuck in negativity.
  7. Self-neglect, e.g. too little sleep, too little exercise, or unhealthy eating, fuels depression.
  8. Waiting for motivation, e.g. "I'll wait until I feel like it before getting out of bed." We often think motivation leads to action, but the opposite is true: motivation follows action. We need to take even micro steps towards a goal to be motivated.
  9. Emotional reasoning, e.g. "If I feel anxious I must be an awkward person." Feelings aren't facts, and separating them is key to solving depression.
  10. All-or-nothing thinking, e.g. "if I can't do a full workout, why bother at all?" or "If I mess up at work, I should quit." This fuels depression. Usually any small action is better than no action.
  11. Victim mindset, e.g. "life is unfair, I can't do anything about it." It comes up when you're asked to take action: "I can't, because..." The opposite is a growth mindset: "What can I do about it?"
  12. A "nothing will change" mindset, which fuels depression too.

Those micro habits add up and make our nervous system flip from resilience to depression. The good news is that those habits can be unlearned by paying attention to them, which according to McAdam makes the nervous system flip back from depression to resilience.

I believe this list is useful to bear in mind, and taking action against those biases is likely to help people stay positive. I also believe depression is a mental illness, and people who experience it need to be supported on their way to healing. If you feel depressed, please go see a doctor, even if it feels insurmountable, and ask for help.

31 Jul 2025 7:15am GMT

LXer Linux News

Zed code editor hears your prayers, rolls out AI-free mode

Can we have this as a global feature in all software? Please? Zed, a fast new Rust-based text editor aimed at programmers, now lets you totally disable LLM bot integration. We're sure some users will rejoice - but how many?…

31 Jul 2025 7:02am GMT

Planet Ubuntu

Podcast Ubuntu Portugal: E358 Arroz De Pato Liquidificado

Do you like duck rice? So do we - in this episode we have that and much more to cause indigestion: Diogo went to Porto to teach LXD and badmouth the local cuisine; he visited the LOAD Museum of the Timex ZX Spectrum in Cantanhede and brought us a nostalgic account of the good old days of childhood; Microsoft released Edit to be used as a Snap; Canonical got into the "fast food" business and decided to invest in brand-new kernels and Ubuntu as a "rolling release"; on Linux we will see more and more TPM. We also discussed how using AI agents to churn out code can end badly: duck goes in, poop comes out. And a first: Diogo used coarse swear words.

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel, and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open-source code is licensed under the MIT License (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality interstitials were played live and without a safety net by Miguel, so we apologize for any inconvenience caused. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator, and comics author. You can get to know Shizamura better on Ciberlândia and on her website.

31 Jul 2025 12:00am GMT

30 Jul 2025

Linuxiac

Alma-Based HeliumOS 10 Is Out — Here’s What I Think

Alma-based HeliumOS 10 is out now with Linux kernel 6.12, Zsh as default, Btrfs with optional encryption, and Docker preinstalled. Here's my take.

30 Jul 2025 8:21pm GMT

OMG! Ubuntu

Ubuntu 25.10 Offers Improved Disk Encryption Using TPM

Ubuntu 25.10 improves experimental TPM-backed full-disk encryption, which ties security to hardware integrity. New options and checks will be in place.

30 Jul 2025 7:00pm GMT

Linux Today

Rescuezilla 2.6.1 Released with Ubuntu 25.04 Base

Discover the latest Rescuezilla 2.6.1 release, now based on Ubuntu 25.04. Explore new features and enhancements for efficient system recovery.

30 Jul 2025 1:46pm GMT

Best Free and Open Source Alternatives to Autodesk Fusion

Discover the best free and open source alternatives to Autodesk Fusion. Explore powerful tools that enhance your design capabilities without the cost.

30 Jul 2025 1:38pm GMT

GStreamer 1.26.4 Rolls Out with Bug Fixes and Performance Tweaks

Discover the latest GStreamer 1.26.4 update featuring essential bug fixes and performance enhancements. Improve your multimedia experience today!

30 Jul 2025 1:32pm GMT

Planet Debian

Bits from Debian: New Debian Developers and Maintainers (May and June 2025)

The following contributors got their Debian Developer accounts in the last two months:

The following contributors were added as Debian Maintainers in the last two months:

Congratulations!

30 Jul 2025 12:00pm GMT

Fedora People

Ben Cotton: It’s okay to stop doing things

One important rule in running a project that people depend on is: don't start something you're not willing to keep doing indefinitely. Another important rule is: it's okay to stop doing things. These two rules are seemingly at odds with each other, but together they form a key principle: reliably do what makes sense to do.

The reason you shouldn't start doing something that you're not willing (or able) to continue doing is that people will start to rely on it. If you start doing things and then stop them after a time or two, you'll lose credibility. This harms your project's reputation over the long run, and it can be hard to regain that lost trust. Your processes and practices must be sustainable.

On the other hand, there is no virtue in continuing to do things that aren't valuable. When conditions change - because of contributor availability, technology evolution, etc. - what used to be a good use of time may no longer be. Contributor time is the most valuable resource a project has, so if something is not a good use of time, stop doing it.

Recent departures from the Red Hat team that does Fedora QA have prompted the team to reevaluate some of the work they do. Fedora is a project with a long and mature history, so it has built up a lot of cruft over the decades. Some of that is in the form of the release criteria. (Chapter 11 of Program Management for Open Source Projects goes into depth on how to manage your project's release criteria.) Booting from optical media (CD or DVD) used to be a critical function of the installer, so it made sense to block the release if that didn't work. These days, a lot of hardware doesn't include an optical drive; the hardware that does almost always supports booting from USB or network (e.g. PXE), so optical boot may not be worth blocking for. This reduces the testing load.

When you think it's time to stop doing something that you had been doing, the first step is understanding what you want to stop doing. Do the conditions that led you to start still exist? Do you have the time and resources you need to continue? Are there other things that are more valuable that you could do instead? With your answers to those questions, you can work out a final answer with the rest of the community. If you decide to stop doing something, make sure it's communicated to the people who need to know. If it involves something no longer working, try to give as long of an off-ramp as you can reasonably provide.

This post's featured photo by Dim Hou on Unsplash.

30 Jul 2025 12:00pm GMT

Avi Alkalay: PDFs must die

Important and well written article by Sydney Butler on How-To Geek: PDFs Must Die

❝PDFs were created as a way to give a document an absolute, invariable design suitable for PRINT. It was never meant to be how we consumed documents on a screen.❞

And I must add:

We, the data professionals, hate PDFs. They might look good and structured to your human eyes, but the data inside them is a mess: unstructured and not suitable for processing by computer programs.

Although we still haven't reached an agreement on ubiquitous formats, here are some better options:

Also on my LinkedIn.

30 Jul 2025 11:38am GMT

Planet Ubuntu

Ubuntu Blog: How to enable Real-time Ubuntu on your machine

If you're here, you likely already know about preemption, determinism, and real-time capable operating systems. If that's the case, and you want to learn how to get up and running with Real-time Ubuntu, skip ahead now to find out how to enable the kernel on your workstation.

If you'd like a short refresher, we have a three-part blog series on real-time Linux for beginners. On the other hand, if you're more interested in the business impact a real-time OS can have across industries, this whitepaper explains the benefits of real-time compute for enterprises, explores industry use-cases, and will show you how to unlock the potential of real-time compute in your business. And, finally, if you're more technically savvy, this CTO's guide to real-time Linux is probably what you're looking for.

Real-time Ubuntu is Ubuntu with a real-time kernel, which includes the PREEMPT_RT patchset. It changes Linux's default scheduler to a priority-based one, increasing predictability by modifying the existing kernel code. As a consequence, Real-time Ubuntu is more pre-emptive than mainline, delivering determinism and lower latency.

Available Versions and Long-Term Support

Real-time Ubuntu is available across both LTS and interim Ubuntu releases, offering flexibility to individual developers and enterprises alike to experiment with the latest software and features, while having the peace of mind of support.

Open access to Real-time Ubuntu

The deb packages of various Real-time Ubuntu releases, from 22.04 LTS to the interim 25.04, have been openly released and are now freely available for anyone to access. Anyone can install them via the Ubuntu Universe repository, an archive which makes it easy to install new software tested and built specifically for each version of Ubuntu.

Version          | Code name       | Real-time kernel | Access
Ubuntu 25.04     | Plucky Puffin   | 6.14             | Source package
Ubuntu 24.10     | Oracular Oriole | 6.11             | Source package
Ubuntu 24.04 LTS | Noble Numbat    | 6.8              | Source package
Ubuntu 22.04 LTS | Jammy Jellyfish | 5.15             | Source package
Table 1: Releases of Real-time Ubuntu available in the Universe repository

For open access to the real-time kernels listed above, install them from Universe:

sudo add-apt-repository universe

sudo apt update

sudo apt install ubuntu-realtime

Note that the Universe repository contains community-maintained free and open-source software, and only a teaser of the real-time kernel is available in Universe for LTS releases. To receive updates for the LTS samples, an Ubuntu Pro subscription is necessary.

While the real-time kernel packages are openly available and provide a great way to test real-time capabilities ahead of the next LTS, developers and enterprises deploying to production can receive security patching for all the software in the repository for 10+ years with Ubuntu Pro.

Ubuntu Pro is a subscription for open-source software security by Canonical, providing security and compliance on top of Ubuntu LTS, with 10+ years of coverage for over 25,000 packages.

Ubuntu Pro is always free for personal use, and anyone can use it for free on up to 5 machines, or 50 if you are an official Ubuntu Community member. Furthermore, enterprises receive a 30-day trial.

LTS Releases of Real-time Ubuntu Server with Ubuntu Pro

Canonical provides up to 12 years of security maintenance and a continuous stream of critical updates to the real-time kernel variants for Ubuntu LTS versions under the Ubuntu Pro subscription. The generic release of Real-time Ubuntu is available on AMD64 and ARM64, whereas silicon-optimised variants are available on Intel hardware (22.04 LTS, with support for Intel Time Coordinated Computing and IEEE 802.1 Time Sensitive Networking) and Raspberry Pi.

Version          | Code name       | Real-time kernel | Variants
Ubuntu 22.04 LTS | Jammy Jellyfish | 5.15             | generic, Intel-optimised
Ubuntu 24.04 LTS | Noble Numbat    | 6.8              | generic, Raspberry Pi-optimised
Table 2: LTS Releases of Real-time Ubuntu Server

If you're using an LTS release of Ubuntu and have Ubuntu Pro enabled, enabling the real-time kernel is straightforward. First, if you have not yet attached your machine to an Ubuntu Pro subscription, you will need to do so in order to enable Real-time Ubuntu. You can do so by running the following command:

sudo pro attach

Otherwise, select the correct version for your OS and processor, and use the corresponding commands below to enable the appropriate kernel variant:

Generic on Ubuntu 22.04 LTS or Ubuntu 24.04 LTS:

sudo pro enable realtime-kernel

Raspberry Pi 4 or 5 on Ubuntu 24.04 LTS:

sudo pro enable realtime-kernel --variant=raspi

Optimized Real-time Ubuntu is production-ready on Intel Atom® X6000E Series Processors, as well as 11th, 12th and 13th Gen Intel® Core™ processors:

Intel Atom and Intel Core processors on Ubuntu 22.04 LTS:

sudo pro enable realtime-kernel --variant=intel-iotg

Reboot your system, and you're ready to run Real-time Ubuntu.
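After the reboot, a quick way to confirm that the real-time kernel is actually running is to check the kernel build string for the PREEMPT_RT marker (a minimal check; the exact output will differ on your machine):

uname -v

# the output should include something like: #1 SMP PREEMPT_RT ...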

LTS Releases of Real-time Ubuntu Core

The real-time kernel is also available for Ubuntu Core, Canonical's embedded version of the Ubuntu OS. Ubuntu Core is Ubuntu for IoT and embedded devices, with a strong focus on robust security, a streamlined update mechanism, and a simplified developer experience. While Ubuntu Core is similar to standard Ubuntu - open source, binary-compatible and backed by a strong developer community - it is specifically designed for the world of embedded and cybersecurity compliance.

Ubuntu Core version | Real-time kernel
Core 22             | 5.15
Core 24             | 6.8

In order to run Ubuntu Core with the real-time kernel, we first need to create and build a Real-time Ubuntu Core image.
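As a rough sketch of that flow (the model assertion file name here is hypothetical; creating and signing one is covered in the Ubuntu Core documentation), the image is then built with the ubuntu-image tool:

# build a bootable Ubuntu Core image from a signed model assertion
ubuntu-image snap my-model.model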

Let's get you to production

Canonical delivers Real-time Ubuntu with the same commitment to security, stability, and open-source leadership as the rest of the Ubuntu ecosystem. With deterministic performance, long-term support, and easy access across edge and cloud environments, Real-time Ubuntu enables enterprises to focus on their value proposition.

We partner with silicon vendors, board manufacturers and ODMs to shorten enterprises' time-to-market. Reach out to our team for custom board enablement, commercial distribution, long-term support or security maintenance.

Contact Us

Further Reading

A CTO's guide to real-time Linux

Is a real-time OS right for your business?

Cyber Resilience Act: Yocto or Ubuntu Core for embedded devices?

30 Jul 2025 11:12am GMT

OMG! Ubuntu

GNOME Shell Gets a Proper Desktop Photo Widget (Finally)

A customisable photo widget for your GNOME desktop that shows images from any folder you like. Resizable and moveable, it adds personalised flourish.

30 Jul 2025 11:04am GMT

Linuxiac

Archinstall 3.0.9 Rolls Out with U2F and Bluetooth Support

Archinstall 3.0.9, a guided installer for Arch Linux, adds U2F authentication, LUKS iteration tweaks, and Bluetooth support.

30 Jul 2025 9:13am GMT

Planet Debian

Steinar H. Gunderson: Superimposed codes, take three

After I wrote last week that OEIS A286874 would stop at a(12) and that computing (verifying) a(13) would take about 4-5000 CPU years, the changes have finally been approved, and… the sequence includes a(13) = 26. What happened?

Well, first of all, I am indeed not a mathematical genius (the last post even forgot the "not"); I had a stupid conversion error in the estimation, causing a factor of 25 or so. But the rest came from actual speedups.

First of all, I improved one of the existing symmetry detectors a bit (the one described last in the previous post was not fully rejecting the possible symmetries when multiple new bits were introduced in one value). But I also made a more universal symmetry detector; if switching the order of certain neighboring bits and re-sorting the sequence made it lexicographically smaller, then we can abort the search. This is pretty expensive and only rejects ~5% of candidates, so it's only worth it at higher levels, but it's much cheaper than checking all n! arbitrary permutations and catches maybe 90% of a full rejection. (Also, if you can reject 5% at multiple levels, those percentages tend to add up. We're down from hundreds of thousands of duplicate solutions, to only a bit over 100, so the amount of speedup available from reducing symmetries is rapidly dwindling.)

Also, surprisingly to me, before going on to run the next level, doing a population count to check if there were too few bits left to ever yield a solution was seemingly a large win (e.g. we have three values so far, but only 21 bits left; we can never generate a sequence longer than 24 even if all the stars align, and can abort immediately). You would think that this counting, which takes very real CPU time even with vectorization, wouldn't be worth it compared to just running through the base layers of the recursion very quickly, but evidently, it is by a large margin. I guess it's a very common case to have many more than 1 bit left but less than 26-n, and it also means you can just stop iterating a bit before you get to the end.

But perhaps the most impactful optimization was a microoptimization. Recall that we spent most of our time ANDing 8192-bit vectors (which would be 16384-bit vectors for a(13)) with each other. Some looking at performance metrics suggested that the RAM bandwidth was completely maxed out, with ~80% of theoretical bandwidth in use; only faster RAM or more memory channels would have made a reasonable dent in the performance of this kind of architecture.

But pretty early, most of those bits will be zero. If you've already decided on the first five values in a sequence, you will not have 8187 options left; in most cases, you'll have more like 300-400. And since the bit sets only ever shrink, we can simply compress away all those known zeros. For most of our purposes, it doesn't really matter what each bit signifies (an important exception is the point where we have a valid solution and need to print it out, but it's not hard to store the mapping), as we mostly use the values for looking up pregenerated vectors to AND together. This means that when we start a new sub-job, we can find which future values are possible, and then map those into new numbers 0 through 511 (or whatever). This means we can use 512-bit vectors instead of 8192-bit vectors, with all the obvious advantages: less ALU work, less memory traffic, and better cache locality. (It's interesting that we started by being extremely ALU-bound, then moved to being very RAM-bound, and then ended up in fairly normal territory.)

Of course, there are situations where you could have more than 512 valid values. In that case, you can either recompile with larger bit sets (typically a multiple of 128, to get good use of SIMD), or you can split into smaller sub-jobs; find all valid ways of extending the sequence by one element (trivial; we already did that to make the bit sets), and then make one job for each. This splitting is also good for variance; no longer do you have some sub-jobs that finish in milliseconds and some that require days.

There are some downsides too, of course. In particular, we can no longer pregenerate one universal 8192*8192*8192 bit LUT (well, 8192*8191/2*8192); every sub-job needs to make its own set of LUTs before starting. But since this is O(n³) and we just cut n from 8192 to 512, it's not really a blocker (although of course far from zero); and importantly, it cuts our total RAM usage. For n=8192, we already needed a bit over 32 GB (though sharable between all jobs), and each next element in the sequence (a(13), a(14), etc.) is a factor 8 extra, so it starts becoming a real problem fast. But on the flipside, I think this extra precalc makes the algorithm much less amenable to a theoretical GPU implementation (~8 MB private data per instance, as opposed to one large shared static pool of constants and then just 1 kB of state per instance), which would otherwise be nontrivial but probably possible (the problem itself is so parallel). Interestingly enough, it's possible to use bitslicing to speed up this precalc, which is a technique I cannot remember when I last used.

All in all, it took only about 114 CPU-days (or, well, thread-days, as hyperthreading now makes sense again) to calculate a(13), which was eminently possible; and many of the optimizations came late in the process, so a rerun would be faster than that. So, could we get to a(14)? Well, maybe. I'm less convinced that it would be impossible than I was with a(13) earlier. :-) But I started looking at it, and it turns out there are literally trillions (possibly more!) of sub-jobs if you want to split deeply enough to get each down into the 512-bit range. And even at ~8 ms per core per job (ignoring the cost of splitting and just looking at the cost of processing the jobs themselves), it just becomes too unwieldy for me, especially since Postgres isn't really that great at storing billions of rows efficiently. But impossible? Definitely not.

30 Jul 2025 7:45am GMT

Planet KDE | English

The XP-Pen Artist 22R Pro works on Linux now

The future is now!!

It's been almost two years since my last update on this project, what changed? And if you aren't familiar with what I've been working on, here's a quick recap.

The hardware

Here is a graphics tablet I bought a few years ago now, the XP-Pen Artist 22R Pro:

Yeah this picture is years old by now, but it still looks the same…

It has a fatal flaw that a lot of non-Wacom tablets share, though: it doesn't work that well on Linux! To be more specific, it "works" but has a few problems:

That is not great, especially since it's not the cheapest graphics tablet on the market. So it really sucks that artists can't take advantage of all of its features on the best operating system. You can achieve some parity with Windows if you use XP-Pen's proprietary user-space driver (which works on Wayland, BTW). This solution isn't satisfying though - I want this thing to work out of the box, using open-source software ❤️

Linux

I have completed the patch for the kernel to add support for this specific tablet. After sitting on it for a while (due to being busy with other things), I'm happy to announce it's merged and should be generally available in the upcoming Linux 6.17 release 🎉

(It's technically sitting in linux-next, Linus hasn't merged the HID subsystem yet but I couldn't wait!)

Thank you to the original author Aren Villanueva, who wrote the original DIGImend kernel driver. I took his work, rebased it on top of the existing uclogic driver, and changed how the many keys and dials are handled, among other changes. Some of this work was covered in previous entries in this series, if you're interested.

What this means is that, regardless of your desktop environment, this tablet is properly initialized and all of the flaws listed in the hardware section will be fixed.

libwacom

I added a descriptor to libwacom for this tablet, which means it shows the correct configuration options under KDE Plasma and GNOME. For example, it will show that the pen has two buttons and not three (which is the fallback).
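If you want to check what libwacom reports for your own tablet, the libwacom tools include a small utility that lists every recognized device along with its buttons, rings, and dials:

libwacom-list-local-devices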

libinput

I added support for tablet dials in libinput. In layman terms, this means desktop environments like GNOME and KDE are now aware of this feature on your tablet. This has huge implications outside of this specific tablet, for example certain HUION devices also benefit from this. More information on how KDE Plasma uses this is explained in a later section.

Wayland

Thanks to work by Peter Hutterer, the Tablet protocol in Wayland now has support for tablet dials. What this means is that applications can now read tablet dial input and do whatever they want with it, like making it zoom in and out of a canvas.

KDE Plasma

Thanks to work by Nicolas Fella, KWin (the compositor in KDE Plasma) is now dial-aware and can send dial events to Wayland-enabled applications beginning in Plasma 6.4. Because of the aforementioned lack of application support, however, I added a feature in KDE Plasma 6.5 to rebind dials to custom user actions:

Don’t mind the buggy-looking dialog, that is caused by a development version of Kirigami I was using.

The XP-Pen software allows you to do this too, so having a built-in solution in KDE Plasma would be great! I did just that, and added support for rebinding dials, which will show up in the upcoming KDE Plasma 6.5 release. Making them user-configurable is meant as a "bridge", as I'm not aware of any applications supporting dials yet.

With this final piece - from top-to-bottom - the entire Linux graphics tablet stack can now take advantage of relative dials like any other hardware feature 🎉

Conclusion

I want to make it clear (especially if you don't know me) that this isn't some hack, or a rushed driver. This is thanks to years of effort from multiple parties and ecosystems. I also literally use this driver and KDE Plasma for my hobbies every day, so I know it works first-hand.

I also hope this series showcases that the graphics tablet infrastructure in Linux is not stagnant, but actively being worked on by super-talented people every day. In a theoretical future distribution release that has Linux 6.17 and KDE Plasma 6.5, this tablet will work out of the box. (And for other hardware, it's only a matter of time!) We can make the Linux desktop not just usable for artists - we can execute it better than anything else out there. No more crappy driver software; tablets will work out of the box on an operating system that actually respects you ❤️

To Nicolas, Peter and Aren - you can redeem a beer or drink of choice at your earliest convenience 🍻


If this series has been fascinating for you, then I highly suggest making plans to watch my Akademy 2025 talk in Berlin or online about bridging the gap for artists using Linux and KDE Plasma. I'm going to be covering the amazing progress - especially in our KDE Plasma 6.x series - that's relevant to artists big and small, and also discuss plans for the long road ahead. You also need to follow the KDE account on Mastodon and Bluesky if you haven't already, where we post announcements and sometimes call-to-actions like our recent push to contribute data about graphics tablet hardware!

I also want to remind everyone that one of KDE's current goals is about input, and as its Goal Champion I've been volunteering and organizing to fix issues like the ones seen above. If you are having trouble with your graphics tablet on the KDE Plasma Wayland session (and it's our fault), then we want to know! Or if you're super happy with KDE and nothing is wrong, we wouldn't mind hearing about that as well 🐸

If you're still addicted to hearing me talk about improving stuff, here is a sneak peek of the hardware I'm testing in KDE Plasma next:

Sorry that the HUION tracks fingerprints like crazy

But that's for another time!

30 Jul 2025 12:00am GMT

29 Jul 2025

OMG! Ubuntu

Fish is Like Bash With a Brain — Here’s How to Try it on Ubuntu

Fish might be the Bash alternative you didn't know you needed, thanks to features like syntax highlighting and smarter command suggestions. Learn how to install it on Ubuntu.

29 Jul 2025 6:04pm GMT

Planet GNOME

Christian Schaller: Artificial Intelligence and the Linux Community

I have wanted to write this blog post for quite some time, but have been unsure about the exact angle. I think I have found that angle now: I will root the post in a very tangible, concrete example.

The reason I wanted to write this is that I feel there is a palpable skepticism and negativity towards AI in the Linux community, and I understand that there are societal implications that worry us all, like how deep fakes have the potential to upend a lot of things, from news distribution to court proceedings, or how malign forces can use AI to drive narratives in social media - as if social media wasn't toxic enough as it is. But for open source developers like us in the Linux community, there are also, I think, deep concerns about tooling that cuts into something close to the heart of our community: writing code and being skilled at writing code. I hear and share all those concerns, but at the same time, having spent the last weeks using Claude.ai, I feel this is not something we can afford not to engage with. I know people have probably used a lot of different AI tools in the last year, some more cute than useful, some somewhat useful, and others interesting improvements to your Google search, for instance. I shared a lot of those impressions, but using Claude this last week has opened my eyes to what AI engines are going to be capable of going forward.

So my initial test was writing a Python application for internal use at Red Hat, basically connecting to a variety of sources, pulling data, and putting together reports - typical management fare. How simple it was impressed me, though. I think most of us who have had to pull data from a new source know how painful it can be, with issues ranging from missing to outdated or hard-to-parse API documentation. A lot of us then spend a lot of time experimenting to figure out the right API calls to make in order to pull the data we need. Well, Claude was able to give me Python scripts that pulled that data right away. I still had to spend some time fine-tuning the data being pulled and ensuring we pulled the right data, but I did it in a fraction of the time I would have spent figuring that stuff out on my own. The one data source Claude struggled with was Fedora's Bodhi; once I pointed it to the URL with the latest documentation, it figured out that it would be better to use the bodhi client library to pull data, and from there it was clear sailing.

So, coming off pretty impressed by that experience, I wanted to understand whether Claude would be able to put together something programmatically more complex, like a GTK+ application using Vulkan. [Note: I should have checked the code better, but thanks to the people who pointed this out. I told the AI to use Vulkan, which it did, but not in the way I expected; I expected it to render the globe using Vulkan, but it instead decided to ensure GTK used its Vulkan backend - an important lesson in both prompt engineering and checking the code afterwards.] So I thought about what would be a good example of such an application, and I also figured it would be fun to find something really old and ask Claude to help me bring it into the current age. I suddenly remembered xtraceroute, an old application originally written in GTK1 and OpenGL that shows your traceroute on a 3D globe.

Screenshot of the original Xtraceroute application

I went looking for it and found that, while it had been updated to GTK2 since I last looked at it, it had not been touched in 20 years. So I thought: this is a great test case. I grabbed the code and fed it into Claude, asking it to give me a modern GTK4 version of this application using Vulkan. So how did it go? Well, it ended up being an iterative effort, with a lot of back and forth between myself and Claude. One nice feature of Claude is that you can upload screenshots of your application, and it will use them to help you debug. Thanks to that, I have a long list of screenshots showing how this application evolved over the course of the day I spent on it.

First output of Claude

This screenshot shows Claude's first attempt at transforming the 20-year-old xtraceroute application into a modern one using GTK4 and Vulkan, while also adding a Meson build system. My prompt to create this was feeding in the old code and asking Claude to come up with a GTK4 and Vulkan equivalent. As you can see, the GTK4 UI is very simple, but okay as it is. The rendered globe leaves something to be desired, though. I assume the old code had some 2D fallback code, so Claude latched onto that and focused on trying to use the Cairo API to recreate this application, despite me telling it I wanted a Vulkan application. What we ended up with was a 2D circle that I could spin around like a wheel of fortune. The code did have some Vulkan stuff, but defaulted to the Cairo code.

Second attempt image

Second attempt at updating the application. I fed the screenshot of my first version back into Claude and said that the image was not a globe, it was missing the texture, and the interaction model was more like a wheel of fortune. As you can see, the second attempt did not fare any better; in fact, we went from circle to square. This was also the point where I realized that I hadn't uploaded the textures to Claude, so I had to tell it to load earth.png from the local file repository.

Third attempt by Claude

Third attempt from Claude. I fed my second screenshot back into Claude and pointed out that it was no globe - in fact, it wasn't even a circle - and the texture was still missing. Once I pointed out that it needed to load the earth.png file from disk, it came back with the texture loading. Well, I really wanted it to be a globe, so I said: thank you for loading the texture, now do it on a globe.

This is the output of the 4th attempt. As you can see, it did bring back a circle, but the texture was gone again. At this point I also decided I didn't want Claude to waste any more time on the Cairo code; this was meant to be a proper 3D application. So I told Claude to drop all the Cairo code and instead focus on making a Vulkan application.

Fifth attempt

So now we finally had something that started looking like something, although it was still a circle, not a globe, and it had that weird division-into-four thing on the globe. Anyway, I could see it was using Vulkan now and loading the texture, so I felt we were making decent forward progress. I wrote a longer prompt describing the globe I wanted and how I wanted to interact with it, and this time Claude came back with Vulkan code that rendered this as a globe - which is why I didn't end up screenshotting it, unfortunately.

So with the working globe in place, I wanted to bring back the day/night cycle from the original application. I asked Claude to load the night texture and use it as an overlay to get that day/night effect. I also asked it to calculate the position of the sun relative to the earth at the current time, so that it could place the overlay in the right location. As you can see, Claude did a decent job of it, although the colors were broken.

7th attempt

So I kept fighting with the color for a bit; Claude could see it was rendering brown, but could not initially figure out why. I could tell the code was doing things mostly right, so I also asked it to look at some other things - for example, I realized that when I tried to spin the globe, it just twisted the texture. We got that fixed, and I also got Claude to create some test scripts that helped us figure out that the color issue was an RGB vs BGR issue; as soon as we understood that, Claude was able to fix the code to render colors correctly. I also went through a few iterations trying to get the scaling and mouse interaction behaving correctly.

10th attempt

So at this point I had probably worked on this for 4-5 hours; the globe was rendering nicely and I could interact with it using the mouse. The next step was adding the traceroute lines back. By default, Claude had just put in code to render some small dots on the hop points, not draw the lines. The old method for getting the geocoordinates also no longer worked, so I asked Claude to help me find some current services, which it did, and once I picked one it gave me code on the first try that was able to request the geolocation of the IP addresses it got back. To polish it up, I also asked Claude to make sure we drew the lines following the globe's curvature instead of just drawing straight lines.

Final version

Final version of the updated Xtraceroute application. It mostly works now, but I did realize why I always thought this was a fun idea that is less interesting in practice: you often don't get very good traceroutes back, probably due to websites being cached or hosted globally. But I felt I had proven that, with a day's work, Claude was able to help me bring this old GTK application into the modern world.

Conclusions

So I am not going to argue that Xtraceroute is an important application that deserved to be saved; in fact, while I feel the current version works and proves my point, I also lost the motivation to polish it up due to the limitations of tracerouting. But the code is available for anyone who finds it worthwhile.

But this wasn't really about Xtraceroute. What I wanted to show here is how someone lacking C and Vulkan development skills can use a tool like Claude to put together a working application, even one using more advanced stuff like Vulkan, which I know many besides me would find daunting. I also found Claude really good at producing documentation and architecture documents for your application. It was also able to give me a working Meson build system and create all the desktop integration files for me, like the .desktop file, the metainfo file, and so on. For the icons I ended up using Gemini, as Claude does not do image generation at this point, although it was able to take a PNG file and create an SVG version of it (although not a perfect likeness of the original PNG).

Another thing I want to say is that, the way I think about this, it is not that it makes coding skills less valuable. AIs can do amazing things, but you need to keep a close eye on them to ensure the code they create actually does what you want, and does it in a sensible manner. For instance, in my reporting application I wanted to embed a PDF file, and Claude's initial thought was to bring in WebKit to do the rendering. That would have worked, but would have added a very big and complex dependency to my application, so I had to tell it that it could just use libpoppler, something Claude agreed was a much better solution. The bigger the codebase, the harder it becomes for the AI to deal with it, but I think in those circumstances what you can do is use the AI to give you sample code for the functionality you want in the programming language you want, and then work on incorporating that into your big application.

The other part here, of course, in terms of open source, is how should contributors and projects deal with this? I know there are projects drowning in AI-generated CVEs or patches, and that helps nobody. But I think if we see AI as a developer's tool, and hold the developer using the tool responsible for the code generated, then that mindset can help us navigate this. So if you used an AI tool to create a patch for your favourite project, it is your responsibility to verify that patch before sending it in - and by that I don't mean just verifying the functionality it provides, but that the code is clean and readable and follows the coding standards of the upstream project. Maintainers, on the other hand, can use AI to help them review and evaluate patches quicker, so this can be helpful on both sides of the equation. I also found Claude and other AI tools like Gemini pretty good at generating test cases for the code they produce, so this is another area where open source patch contributions can improve, by improving test coverage for the code.

I also believe there are many areas where projects can greatly benefit from AI. For instance, in the GNOME project a constant challenge for extension developers has been keeping their extensions up to date; I believe a tool like Claude or Gemini should be able to update GNOME Shell extensions quite easily. So having a service which tries to provide a patch each time there is a GNOME Shell update might be a great help there. At the same time, having an AI take a look at updated extensions and give a first review of the update might help reduce the load on people doing code reviews on extensions and help flag problematic extensions.

I know that in a lot of cases and situations, uploading your code to a web service like Claude, Gemini or Copilot is not something you want to or can do. I know privacy is a big concern for many people in the community. My team at Red Hat has been working on a code assistant tool using the IBM Granite model, called Granite.code. What makes Granite different is that the model runs locally on your own system, so you don't send your code or data off somewhere else. This of course has great advantages in terms of privacy and security, but it has challenges too. The top-end AI models out there at the moment, of which Claude is probably the best at the time of writing this blog post, run on hardware with vast resources in terms of computing power and memory. Most of us do not have those kinds of capabilities available at home, so the model size and performance will be significantly lower. So, at the moment, if you are looking for a great open source tool to use with VS Code for things like code completion, I recommend giving Granite.code a look. If, on the other hand, you want to do something like I have described here, you need to use something like Claude, Gemini or ChatGPT. I do recommend Claude, not just because I believe it to be the best at the moment, but also because they are a company trying to hold themselves to high ethical standards. Over time we hope to work with IBM and others in the community to improve local models, and I am also sure local hardware will keep improving, so the experience you can get with a local model on your laptop will at least have less of a gap compared to the big cloud-hosted models than it does today. There is also the middle-of-the-road option that will become increasingly viable, where you have a powerful server in your home or at your workplace that can at least host a mid-size model, and you connect to it over your LAN. I know IBM is looking at that model for the next iteration of Granite models, where you can choose from a wide variety of sizes: some small enough to run on a laptop, others of a size where a strong workstation or small server can run them, and of course the biggest models for people able to invest in top-of-the-line hardware to run their AI.

Also, the AI space is moving blazingly fast; if you are reading this 6 months from now, I am sure the capabilities of online and local models will have changed drastically already.

So to all my friends in the Linux community: I ask you to take a look at AI and what it can do, and then let's work together on improving it, not just in terms of capabilities, but also in figuring out the societal challenges and sustainability concerns around it that I know a lot of us have.

What's next for this code

As I mentioned, while I felt I got it to a point where I proved to myself it worked, I am not planning on working on it anymore. But I did make a cute little application for internal use that shows a spinning globe with all global Red Hat offices showing up as little red lights, and which pulls Red Hat news at the bottom. Not super useful either, but I was able to use Claude to refactor the globe rendering code from xtraceroute into this in just a few hours.

Red Hat Offices Globe and news.

29 Jul 2025 4:24pm GMT

Steven Deobald: 2025-07-25 Foundation Update

## Annual Report

The 2025 Annual Report is all-but-baked. Deepa and I would like to be completely confident in the final financial figures before publishing. The Board has seen these final numbers, during their all-day work day two days ago. I heard from multiple Board members that they're ecstatic with how Deepa presented the financial report. This was a massive amount of work for Deepa to contribute in her first month volunteering as our new Treasurer and we all really appreciate the work that she's put into this.

## GUADEC and Gratitude

I've organized large events before and I know in my bones how difficult and tiresome it can be. But I don't think I quite understood the scale of GUADEC. I had heard many times in the past three months "you just have to experience GUADEC to understand it" but I was quite surprised to find the day before the conference so intense and overwhelming that I was sick in bed for the entire first day of the conference - and that's as an attendee!

The conference takes the firehose of GNOME development and brings it all into one place. So many things happened here, I won't attempt to enumerate them all. Instead, I'd like to talk about the energy.

I have been pretty disoriented since the moment I landed in Italy but, even in my stupor, I was carried along by the energy of the conference. I could see that I wasn't an exception - everyone I talked to seemed to be sleeping four hours a night but still highly energized, thrilled to take part, to meet their old friends, and to build GNOME together. My experience of the conference was a constant stream of people coming up to me, introducing themselves, telling me their stories, and sharing their dreams for the project. There is a real warmth to everyone involved in GNOME and it radiates from people the moment you meet them. You all made this a very comfortable space, even for an introvert like me.

There is also incredible history here: folks who have been around for 5 years, 15 years, 25 years, 30 years. Lifelong friends like that are rare and it's special to witness, as an outsider.

But more important than anything I have to say about my experience of the conference, I want to proxy the gratitude of everyone I met. Everyone I spoke to, carried through the unbroken days on the energy of the space, kept telling me what a wonderful GUADEC it was. "The best GUADEC I've ever been to." / "It's so wonderful to meet the local community." / "Everything is so smooth and well organized."

If you were not here and couldn't experience it yourself, please know how grateful we all are for the hard work of the staff and volunteers. Kristi, for tirelessly managing the entire project and coordinating a thousand variables, from the day GUADEC 2024 ended until the moment she opened GUADEC 2025. Rosanna, for taking time away from all her regular work at the Foundation to give her full attention to the event. Pietro, for all the local coordination before the conference and his attention to detail throughout the conference. And the local/remote volunteer team - Maria, Deepesha, Ashmit, Aryan, Alessandro, and Syazwan - for openly and generously participating in every conceivable way.

Thank you everyone for making such an important event possible.

29 Jul 2025 2:48pm GMT

Planet Ubuntu

Ubuntu Blog: Canonical MAAS awarded as best quality software by TIOBE

We are very proud to share that Canonical's MAAS User Interface has been ranked as the top-quality software project in its category by the quarterly TIOBE Software Quality Assurance Award. MAAS (Metal as a Service) is an open source tool that enables automated provisioning, configuration, and management of physical servers (bare metal) in data centers. It treats physical machines like cloud instances, allowing dynamic allocation and scaling of hardware resources. TIOBE's recognition reflects both the exceptional engineering behind MAAS and Canonical's ongoing commitment to quality management and open source excellence.

Screenshot of MAAS UI

How the TIOBE ranking works

TIOBE is a leading vendor of software quality assessments, currently assessing more than 8,000 software projects worldwide. TIOBE Software BV provides a complete framework based on the ISO 25010 international standard for quality, checking strict metrics for security, reliability, and maintainability defined within the TIOBE Quality Index (TQI). Canonical works with TIOBE to obtain an independent overview of its code quality as part of our ongoing commitment to engineering excellence.

MAAS: An elegant UI with code quality at its core

TIOBE regularly recognizes the best quality projects with the quarterly TIOBE Software Quality Assurance Award. Canonical's MAAS UI reached the #1 spot among all mid-sized projects (defined as having between 100,000 and 500,000 lines of code), a category comprising over 1,200 projects around the globe.

We are excited to see our efforts recognized in the industry. In the words of Paul Jansen, CEO at TIOBE Software: "It is amazing to see how quickly MAAS UI embraced our code quality system and achieved such high quality ratings".

Embracing the quality system was a deliberate journey. The system was configured together with TIOBE. Following initial measurements, continuous integration of the analysis was put in place, establishing a reliable and consistent data source for the team. The most important step came right after: a careful analysis of the metrics in the quality system, helping to establish the right conversations, i.e. "doing the right thing". Finally, it was a win for the codebase and a huge win with the TIOBE Quality Assurance Award.

Behind the MAAS UI

The MAAS UI is a React-based web interface. It provides an intuitive dashboard for administrators to help manage and automate the datacenter. Aspects such as machine lifecycle management, network infrastructure and services configuration are streamlined thanks to the intuitive web user interface.

Some MAAS UI highlights include:

Try MAAS and check what the UI looks like

Getting started with MAAS is as simple as following the step-by-step 30-minute tutorial, which walks you through the entire installation process in a sandboxed environment. Try it out, experience the UI, and share your feedback with the community on Discourse.
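For reference, a minimal sandboxed setup on Ubuntu looks roughly like this (a sketch; the tutorial lists the current recommended channel and steps):

sudo snap install maas
sudo snap install maas-test-db
# initialise a combined region+rack controller against the bundled test database
sudo maas init region+rack --database-uri maas-test-db:///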

Check out the documentation or visit the MAAS webpage if you want to learn more.

29 Jul 2025 1:30pm GMT

Planet KDE | English

Improve QML Quality with Seamless Linter for Gen AI - Qt AI Assistant v0.9.4 Released

Do you want to save your mental energy for solving complex coding challenges instead of fixing syntax issues in code generated by Gen AI? The Qt AI Assistant is the world's first coding assistant that seamlessly embeds a QML linter agent for the prompts you write. The latest release also comes with the ability to configure your LLM.

29 Jul 2025 8:08am GMT

Planet Debian

Ravi Dwivedi: How to paste your password on your bank's website

If your bank is like mine, its website doesn't allow you to copy your password and paste it by performing a simple Ctrl+V. I tried the Don't Fuck With Paste extension in Firefox, which could paste my bank account's profile password but not the login password.

Therefore, I asked on Mastodon a couple of days ago and got some responses. The solution that worked for me was to use Shift+Insert to paste the password. It worked for me in LibreWolf and Firefox, and that's all I needed.

Furthermore, this behavior by bank websites leads users to choose insecure but memorable passwords. Using this trick will help you choose strong passwords for your bank account.

I prefer to use random, strong passwords generated using the password manager pass. It is freedom-respecting software, unlike the popular proprietary password managers promoted by YouTubers. Feel free to check out its webpage here. The reason I use pass is that it stores all the passwords locally (and optionally in a remote Git repository) in encrypted form, which can only be decrypted using your private GPG key.
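As an illustration of that workflow (the store path and password length are arbitrary examples), generating a strong password and copying it for pasting with Shift+Insert looks like this:

# generate and store a random 24-character password, encrypted with your GPG key
pass generate banks/mybank/login 24
# copy it to the clipboard, ready to paste with Shift+Insert
pass -c banks/mybank/login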

29 Jul 2025 7:38am GMT

Planet KDE | English

Week 2 recap GSoC 2025 - searching c++ and creating floating toolbar

Intro

Apart from setting up a new open source project, it is important to understand how the application works in order to make the changes you need. In this blog I will go over how I find code and understand the application, and share my progress so far with the Selection Action Bar.

Searching code

One Stop Shop for Searching

Command Line

grep -rn "<string_to_search>"

QTCreator

ctrl + f
ctrl + shift + f

Debug in C++ code

qDebug() << "[Debug] <string_to_display_for_debugger>" << <additional_parameters>;

Krita's codebase is massive, so don't expect to understand everything in one day. What is important is knowing how to find the code you need to make the improvements you want. Above are some tools I use when looking for code. I use the command line or Qt Creator's search functionality to reverse-search strings that are displayed in Krita. This helps me find buttons, dropdowns, and tools. When I want to understand the functionality of the application, I add qDebug() calls to the code. This lets me perform an action while Krita is running and display debug information from the functions I instrumented.
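
For example, here is a sketch of that reverse search; the string "Add Assistant" is hypothetical, so substitute whatever text you actually see in the UI:

# find where a visible UI string is defined or used
grep -rn "Add Assistant" . --include=*.cpp --include=*.ui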

Progress

Using the tools above, I was able to build the base UI of the floating toolbar in Krita by identifying the QPainter code that draws the Assistant Tool. I wanted to use the Assistant Tool as a reference and recreate a similar UI look. For quick learning and proof of concept, whenever an Assistant Tool widget is active, the floating toolbar is also drawn on screen.

Below is a proof of concept for the Selection Action Bar. I used QPainter to 'draw' onto the KisCanvas2 class. This is like using a paintbrush (QPainter) to paint on a canvas (KisCanvas2). There is still some UI clean-up to do, but I wanted to present my learnings so far.

Conclusion

Making changes in Krita can be challenging, but a few simple tools can turn hours of searching into minutes. Again, "the hardest part is finding the resources to be successful". I hope this blog post helps anyone who needs to search Krita or any other C++ codebase.

Contact

To anyone reading this, please feel free to reach out to me. I'm always open to suggestions and thoughts on how to improve as a developer and as a person.
Email: ross.erosales@gmail.com
Matrix: @rossr:matrix.org

29 Jul 2025 12:00am GMT

26 Jul 2025

feedPlanet Gentoo

EPYTEST_PLUGINS and other goodies now in Gentoo

If you are following the gentoo-dev mailing list, you may have noticed that there's been a fair number of patches sent for the Python eclasses recently. Most of them have been centered on pytest support. Long story short, I've come up with what I believe to be a reasonably good design, and decided it's time to stop manually repeating all the good practices in every ebuild separately.

In this post, I am going to briefly summarize all the recently added options. As always, they are also documented in the Gentoo Python Guide.

The unceasing fight against plugin autoloading

The pytest test loader defaults to automatically loading all the plugins installed to the system. While this is usually quite convenient, especially when you're testing in a virtual environment, it can get quite messy when you're testing against system packages and end up with lots of different plugins installed. The results can range from slowing tests down to completely breaking the test suite.

Our initial attempts to contain the situation were based on maintaining a list of known-bad plugins and explicitly disabling their autoloading. The list of disabled plugins has gotten quite long by now. It includes both plugins that were known to frequently break tests, and those that frequently resulted in automagic dependencies.

While the opt-out approach allowed us to resolve the worst issues, it only worked when we knew about a particular issue. So naturally we'd miss some rarer issues, and learn about them only when arch testing workflows failed, or users reported problems. And of course, we would still be loading loads of unnecessary plugins at the cost of performance.

So, we started disabling autoloading entirely, using PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable. At first we only used it when we needed to, however over time we've started using it almost everywhere - after all, we don't want the test suites to suddenly start failing because of a new pytest plugin installed.

For a long time, I have been hesitant to disable autoloading by default. My main concern was that it's easy to miss a missing plugin. Say, if you ended up failing to load pytest-asyncio or a similar plugin, all the asynchronous tests would simply be skipped (verbosely, but it's still easy to miss among the flood of warnings). However, eventually we started treating this warning as an error (and then pytest started doing the same upstream), and I have decided that going opt-in is worth the risk. After all, we were already disabling it all over the place anyway.

EPYTEST_PLUGINS

Disabling plugin autoloading is only the first part of the solution. Once you've disabled autoloading, you need to load the plugins explicitly - it's no longer sufficient to add them as test dependencies; you also need to add a bunch of -p switches. And then you need to keep the dependencies and the pytest switches in sync. So you'd end up with bits like:

BDEPEND="
  test? (
    dev-python/flaky[${PYTHON_USEDEP}]
    dev-python/pytest-asyncio[${PYTHON_USEDEP}]
    dev-python/pytest-timeout[${PYTHON_USEDEP}]
  )
"

distutils_enable_tests pytest

python_test() {
  local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
  epytest -p asyncio -p flaky -p timeout
}

Not very efficient, right? The idea then is to replace all that with a single EPYTEST_PLUGINS variable:

EPYTEST_PLUGINS=( flaky pytest-{asyncio,timeout} )
distutils_enable_tests pytest

And that's it! EPYTEST_PLUGINS takes a bunch of Gentoo package names (without category - almost all of them reside in dev-python/, and we can special-case the few that do not), distutils_enable_tests adds the dependencies and epytest (in the default python_test() implementation) disables autoloading and passes the necessary flags.

Now, what's really cool is that the function will automatically determine the correct argument values! This can be especially important if entry point names change between package versions - and upstreams generally don't consider this an issue, since autoloading isn't affected.

Going towards no autoloading by default

Okay, that gives us a nice way of specifying which plugins to load. However, weren't we talking of disabling autoloading by default?

Well, yes - and the intent is that it's going to be disabled by default in EAPI 9. However, until then there's a simple solution we encourage everyone to use: set an empty EPYTEST_PLUGINS. So:

EPYTEST_PLUGINS=()
distutils_enable_tests pytest

…and that's it. When it's set to an empty list, autoloading is disabled. When it's unset, it is enabled for backwards compatibility. And the next pkgcheck release is going to suggest it:

dev-python/a2wsgi
  EPyTestPluginsSuggestion: version 1.10.10: EPYTEST_PLUGINS can be used to control pytest plugins loaded

EPYTEST_PLUGIN* to deal with special cases

While the basic feature is neat, it is not a silver bullet. The approach used is insufficient for some packages, most notably pytest plugins that run pytest subprocesses without the appropriate -p options, and expect plugins to be autoloaded there. However, after some more fiddling we arrived at three helpful features:

  1. EPYTEST_PLUGIN_LOAD_VIA_ENV that switches explicit plugin loading from -p arguments to PYTEST_PLUGINS environment variable. This greatly increases the chance that subprocesses will load the specified plugins as well, though it is more likely to cause issues such as plugins being loaded twice (and therefore is not the default). And as a nicety, the eclass takes care of finding out the correct values, again.
  2. EPYTEST_PLUGIN_AUTOLOAD to reenable autoloading, effectively making EPYTEST_PLUGINS responsible only for adding dependencies. It's really intended to be used as a last resort, and mostly for future EAPIs when autoloading will be disabled by default.
  3. Additionally, EPYTEST_PLUGINS can accept the name of the package itself (i.e. ${PN}) - in which case it will not add a dependency, but load the just-built plugin.
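
To illustrate the last two points, here is a hypothetical ebuild snippet (package and plugin names invented) that loads the just-built plugin plus one dependency, while keeping autoloading enabled as a stopgap:

# sketch: test the package's own plugin along with pytest-timeout,
# with autoloading left on as a last resort
EPYTEST_PLUGINS=( "${PN}" pytest-timeout )
EPYTEST_PLUGIN_AUTOLOAD=1
distutils_enable_tests pytest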

How useful is that? Compare:

BDEPEND="
  test? (
    dev-python/pytest-datadir[${PYTHON_USEDEP}]
  )
"

distutils_enable_tests pytest

python_test() {
  local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
  local -x PYTEST_PLUGINS=pytest_datadir.plugin,pytest_regressions.plugin
  epytest
}

…and:

EPYTEST_PLUGINS=( "${PN}" pytest-datadir )
EPYTEST_PLUGIN_LOAD_VIA_ENV=1
distutils_enable_tests pytest

Old and new bits: common plugins

The eclass already had some bits related to enabling common plugins. Given that EPYTEST_PLUGINS only takes care of loading plugins, and not of passing specific arguments to them, these bits are still meaningful. Furthermore, we've added EPYTEST_RERUNS.

The current list is:

  1. EPYTEST_RERUNS=... that takes a number of reruns and uses pytest-rerunfailures to retry failing tests the specified number of times.
  2. EPYTEST_TIMEOUT=... that takes a number of seconds and uses pytest-timeout to force a timeout if a single test does not complete within the specified time.
  3. EPYTEST_XDIST=1 that enables parallel testing using pytest-xdist, if the user allows multiple test jobs. The number of test jobs can be controlled (by the user) by setting EPYTEST_JOBS with a fallback to inferring from MAKEOPTS (setting to 1 disables the plugin entirely).

The variables automatically add the needed plugin, so they do not need to be repeated in EPYTEST_PLUGINS.
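
Put together, a hedged sketch of a test setup using all three (the values here are picked purely for illustration) could look like:

# no plugin autoloading, two retries for flaky tests,
# a 5-minute per-test timeout, and parallel runs where the user permits
EPYTEST_PLUGINS=()
EPYTEST_RERUNS=2
EPYTEST_TIMEOUT=300
EPYTEST_XDIST=1
distutils_enable_tests pytest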

JUnit XML output and gpy-junit2deselect

As an extra treat, we ask pytest to generate JUnit-style XML output for each test run, which can be used for machine processing of test results. gpyutils now supplies a gpy-junit2deselect tool that can parse this XML and output a handy EPYTEST_DESELECT for the failing tests:

$ gpy-junit2deselect /tmp/portage/dev-python/aiohttp-3.12.14/temp/pytest-xml/python3.13-QFr.xml
EPYTEST_DESELECT=(
  tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_nonzero_passed
  tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_passed_to_create_connection
  tests/test_connector.py::test_tcp_connector_ssl_shutdown_timeout_zero_not_passed
)

While it doesn't replace due diligence, it can help you update long lists of deselects. As a bonus, it automatically collapses deselects to test functions, classes and files when all matching tests fail.

hypothesis-gentoo to deal with health check nightmare

Hypothesis is a popular Python fuzz testing library. Unfortunately, it has one feature that, while useful upstream, is pretty annoying to downstream testers: health checks.

The idea behind health checks is to make sure that fuzz testing remains efficient. For example, Hypothesis is going to fail if the routine used to generate examples is too slow. And as you can guess, "too slow" is more likely to happen on a busy Gentoo system than on dedicated upstream CI. Not to mention some upstreams plain ignore health check failures if they happen rarely.

Given how often this broke for us, we requested an option to disable Hypothesis health checks long ago. Unfortunately, upstream's answer can be summarized as: "it's up to packages using Hypothesis to provide such an option, and you should not be running fuzz testing downstream anyway". Easy to say.

Well, obviously we are not going to chase down every single package using Hypothesis to add a profile with health checks disabled. We did report health check failures sometimes, and sometimes got no response at all. And skipping these tests is not really an option, given that often there are no other tests for a given function - and even if there are, it's just going to be a maintenance nightmare.

I finally figured out that we can create a Hypothesis plugin - now hypothesis-gentoo - that provides a dedicated "gentoo" profile with all health checks disabled, and then simply use this profile in epytest. And how do we know that Hypothesis is used? Of course we look at EPYTEST_PLUGINS! All the pieces fall into place. It's not 100% foolproof, but health check problems aren't that common either.
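
Mechanically, this boils down to Hypothesis's standard profile selection; a rough sketch of the resulting pytest invocation (the exact wiring lives in the eclass, and this assumes the Hypothesis plugin itself gets loaded via EPYTEST_PLUGINS):

# select the health-check-free profile registered by hypothesis-gentoo
epytest --hypothesis-profile=gentoo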

Summary

I have to say that I really like what we achieved here. Over the years, we learned a lot about pytest, and used that knowledge to improve testing in Gentoo. And after repeating the same patterns for years, we have finally replaced them with eclass functions that can largely work out of the box. This is a major step forward.

26 Jul 2025 1:29pm GMT

25 Jul 2025

feedKernel Planet

Linux Plumbers Conference: All Microconferences have been Accepted!

Good news! All Microconferences have been accepted and are now accepting submissions.

You can start submitting topics to these Microconferences now. Remember to read the blog post on what makes an ideal Microconference topic before submitting.

After that, submit your topic and make sure you select the appropriate track (they are all listed under LPC Microconference Proposals and end with MC).

25 Jul 2025 8:12pm GMT

24 Jul 2025

feedKernel Planet

Dave Airlie (blogspot): ramalama/mesa : benchmarks on my hardware and open source vs proprietary

One of my pet peeves around running local LLMs and inferencing is the sheer mountain of shit^W^W^W complexity of compute stacks needed to run any of this stuff in a mostly optimal way on a piece of hardware.

CUDA, ROCm, and Intel oneAPI all, to my mind, scream over-engineering on a massive scale, at least for a single task like inferencing. The combination of closed source, over-the-wall open source, and open source that is insurmountable for anyone outside the vendor to support or fix, screams that there has to be a simpler way. Combine that with the pytorch ecosystem and the insanity of deploying python, and I get a bit unstuck.

What can be done about it?

llama.cpp seems to me like the best answer to the problem at present (a Rust version would be a personal preference, but you can't have everything). I like how ramalama wraps llama.cpp to provide a sane container interface, but I'd like to eventually get to the point where container complexity for a GPU compute stack isn't really needed except in exceptional cases.

On the compute stack side, Vulkan exposes most features of GPU hardware in a possibly suboptimal way, but with extensions all can be forgiven. The talk by Jeff Bolz of NVIDIA at Vulkanised 2025 started to give me hope that maybe the dream was possible.

The main issue I have is that Jeff is writing driver code for the NVIDIA proprietary Vulkan driver, which reduces complexity but doesn't solve my open source problem.

Enter NVK, the open source driver for NVIDIA GPUs. Karol Herbst and I are taking a look at closing the feature gap with the proprietary one. For mesa 25.2, initial support for VK_KHR_cooperative_matrix landed, along with some optimisations, but there is a bunch of work left to get VK_NV_cooperative_matrix2 and a truckload of compiler optimisations needed to catch up with NVIDIA.

But since mesa 25.2 was coming soon I wanted to try and get some baseline figures out.

I benchmarked on two systems (because my AMD 7900XT wouldn't fit in the case), both with Ryzen CPUs. In the first system I put an RTX 5080, then an RTX 6000 Ada, and then the Intel A770. The second I used for the RX 7900XT. The Intel SYCL stack unfortunately failed to launch inside ramalama, so I hacked llama.cpp to use the A770 MMA accelerators.

ramalama bench hf://unsloth/Qwen3-8B-GGUF:UD-Q4_K_XL

I picked this model at random, and I've no idea if it was a good idea.


Some analysis:

The token generation workload is a lot less matmul-heavy than prompt processing, and it also does a lot more synchronising. Jeff has stated that CUDA wins here mostly due to CUDA graphs, and that most of the work needed is operation fusion on the llama.cpp side. Prompt processing is a lot more matmul-heavy; extensions like NV_coopmat2 will help with that (NVIDIA's Vulkan driver already uses it in the above), but there may be further work needed to close the CUDA gap. On AMD, radv (the open source Vulkan driver) is already better at TG than ROCm, but behind in prompt processing. Again, coopmat2-like extensions should help close the gap there.

NVK is starting from a fair way behind; we just pushed support for the most basic coopmat extension and we know there is a long way to go, but I think most of it is achievable as we move forward, and I hope to update with new scores on a semi-regular basis. We also know we can definitely close the gap on the NVIDIA proprietary Vulkan driver if we apply enough elbow grease and register allocation :-)

I think it might also be worth putting some effort into radv coopmat2 support; if radv could overtake ROCm on both of these benchmarks, it would remove a large piece of complexity from the basic user's stack.

As for Intel, I've no real idea. I hope to get their SYCL implementation up and running, and maybe I should get my hands on a B580 card as a better baseline. When I had SYCL running once before, I vaguely remember it being 2-4x the Vulkan driver, but there's been development on both sides since.

(The graphs were generated by Gemini.)

24 Jul 2025 10:19pm GMT

23 Jul 2025

feedPlanet Arch Linux

Specifications

In October 2024 a team of dedicated developers started work on the ALPM project. Since then it has been focusing on writing new documentation for many aspects of Arch Linux Package Management that were not thoroughly documented in the past. This article provides an overview of the specifications written by this project and attempts to contextualize them for the reader. The existing stack 📚 With its bash-based makepkg tool for package creation, the libalpm C library for interfacing with system state, and the central pacman package management tool, the pacman project has defined the …

23 Jul 2025 12:00am GMT

22 Jul 2025

feedKernel Planet

Pete Zaitcev: Floating Point

I'm unemployed right now and I go to job interviews once in a while. One time, the company was doing another AI thing, having to do with verifying that training computations were doing something useful, and not just "dumping a stream of floating point numbers".

Until now I hadn't thought of it, but apparently AI is all in FP. And it reminded me how I once worked at a CPU design place, where they had a group focused on FP. Those guys had been doing FP since the days of the transistor. They migrated their designs, generation by generation, through TTL, ECL, Bi-CMOS, CMOS. When I last heard from them, they were tinkering with "deep sub-micron".

One remarkable part about their design was that, because they started out with transistors, their FPU didn't have any microcode. It was all in hardware. Even divisions! Just a bunch of counters that sequenced whatever was necessary.

For a long time during the reign of x86, the group was somewhat de-prioritized, because many microprocessors at the time treated FP performance as an afterthought. A number of desktop CPUs shipped with no hardware FP at all. But look how the tables have turned. I honestly hope that it was not too late and AI has become a boon for the successors of my past colleagues.

22 Jul 2025 5:28pm GMT

21 Jun 2025

feedPlanet Arch Linux

linux-firmware >= 20250613.12fe085f-5 upgrade requires manual intervention

With 20250613.12fe085f-5, we split our firmware into several vendor-focused packages. linux-firmware is now an empty package depending on our default set of firmware. Unfortunately, this coincided with upstream reorganizing the symlink layout of the NVIDIA firmware, resulting in a situation that Pacman cannot handle. When attempting to upgrade from 20250508.788aadc8-2 or earlier, you will see the following errors:

linux-firmware-nvidia: /usr/lib/firmware/nvidia/ad103 exists in filesystem
linux-firmware-nvidia: /usr/lib/firmware/nvidia/ad104 exists in filesystem
linux-firmware-nvidia: /usr/lib/firmware/nvidia/ad106 exists in filesystem
linux-firmware-nvidia: /usr/lib/firmware/nvidia/ad107 exists in filesystem

To progress with the system upgrade, first remove linux-firmware, then reinstall it as part of the upgrade:

# pacman -Rdd linux-firmware
# pacman -Syu linux-firmware

21 Jun 2025 12:00am GMT

20 Jun 2025

feedPlanet Arch Linux

Plasma 6.4.0 will need manual intervention if you are on X11

On Plasma 6.4 the Wayland session will be the only one installed unless the user manually specifies kwin-x11. With the recent split of kwin into kwin-wayland and kwin-x11, users running the old X11 session need to manually install plasma-x11-session, or they will not be able to log in. Currently pacman is not able to figure out your personal setup, and it wouldn't be OK to install plasma-x11-session and kwin-x11 for everyone using Plasma. tldr: Install plasma-x11-session if you are still using X11.
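
As a sketch, the corresponding command on an affected X11 system would simply be:

# pacman -Syu plasma-x11-session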

20 Jun 2025 12:00am GMT

05 Jun 2025

feedPlanet Maemo

Mobile blogging, the past and the future

This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.

Psion, documents over FTP

In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

Psion S5, also known as the Ancestor

The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

If we wanted to include photos in the stories, we'd have to find an Internet cafe.

SMS and MMS

For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.

As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.

As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

Photos over email

A much easier setup than MMS was a partial return to the old Psion approach, but instead of word processor documents, sending email with picture attachments. This was something the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.

And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

My blog from that era

Pause

Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

In the meantime the blog also got migrated to a Jekyll-based system hosted on AWS. That meant the old Midgard-based integrations were off the table.

And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.

But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

Starlink, Internet from Outer Space

Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it; the dishy itself, or the way we power it, may also fail.

But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

Inreach, texting with the cloud

Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
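
As a sketch of how such an integration can work - the share name here is hypothetical, and the exact feed layout may differ - MapShare exposes a public KML feed that can be polled:

# fetch the public MapShare KML feed and stash it for the site generator
curl -s "https://share.garmin.com/Feed/Share/examplename" -o mapshare.kml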

One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.

Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.

With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.


05 Jun 2025 12:00am GMT

30 Apr 2025

feedPlanet Gentoo

Urgent - OSU Open Source Lab needs your help

OSL logo

Oregon State University's Open Source Lab (OSL) has been a major supporter of Gentoo Linux and many other software projects for years. It is currently hosting several of our infrastructure servers as well as development machines for exotic architectures, and is critical for Gentoo operation.

Due to drops in sponsor contributions, OSL has been operating at a loss for a while, with the OSU College of Engineering picking up the rest of the bill. Now that university funding has been cut, this is no longer possible, and unless US$ 250,000 can be raised within the next two weeks, OSL will have to shut down. The details can be found in a blog post by Lance Albertson, the director of OSL.

Please, if you value and use Gentoo Linux or any of the other projects that OSL has been supporting, and if you are in a position to make funds available - or if this is true for the company you work for - contact the address in the blog post. Obviously, long-term corporate sponsorships would serve best here; for what it's worth, OSL alumni have ended up at almost every big US tech corporation by now. Right now, though, probably everything helps.

30 Apr 2025 5:00am GMT

20 Feb 2025

feedPlanet Gentoo

Bootable Gentoo QCOW2 disk images - ready for the cloud!

Larry the Qcow2

We are very happy to announce new official downloads on our website and our mirrors: Gentoo for amd64 (x86-64) and arm64 (aarch64), as immediately bootable disk images in qemu's QCOW2 format! The images, updated weekly, include an EFI boot partition and a fully functional Gentoo installation: either with no network activated but a password-less root login on the console ("no root pw"), or with network activated, all accounts initially locked, but cloud-init running on boot ("cloud-init"). Enjoy, and read on for more!

Questions and answers

How can I quickly test the images?

We recommend using the "no root password" images and qemu system emulation. Both amd64 and arm64 images have all the necessary drivers ready for that. Boot them up, use as login name "root", and you will immediately get a fully functional Gentoo shell. The set of installed packages is similar to that of an administration or rescue system, with a focus more on network environment and less on exotic hardware. Of course you can emerge whatever you need though, and binary package sources are already configured too.

What settings do I need for qemu?

You need qemu with the target architecture (aarch64 or x86_64) enabled in QEMU_SOFTMMU_TARGETS, and the UEFI firmware.

app-emulation/qemu
sys-firmware/edk2-bin

You should disable the USE flag "pin-upstream-blobs" on qemu and update edk2-bin to at least the 2024 version. Also, since you probably want to use KVM hardware acceleration for the virtualization, make sure that your kernel supports that and that your current user is in the kvm group.
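
As a sketch, the corresponding Portage configuration could look like this (the file path is conventional; adjust to your setup):

# /etc/portage/package.use/qemu
app-emulation/qemu QEMU_SOFTMMU_TARGETS: aarch64 x86_64
app-emulation/qemu -pin-upstream-blobs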

For testing the amd64 (x86-64) images, a command line could look like this, configuring 8G RAM and 4 CPU threads with KVM acceleration:

qemu-system-x86_64 \
        -m 8G -smp 4 -cpu host -accel kvm -vga virtio -smbios type=0,uefi=on \
        -drive if=pflash,unit=0,readonly=on,file=/usr/share/edk2/OvmfX64/OVMF_CODE_4M.qcow2,format=qcow2 \
        -drive file=di-amd64-console.qcow2 &

For testing the arm64 (aarch64) images, a command line could look like this:

qemu-system-aarch64 \
        -machine virt -cpu neoverse-v1 -m 8G -smp 4 -device virtio-gpu-pci -device usb-ehci -device usb-kbd \
        -drive if=pflash,unit=0,readonly=on,file=/usr/share/edk2/ArmVirtQemu-AARCH64/QEMU_EFI.qcow2 \
        -drive file=di-arm64-console.qcow2 &

Please consult the qemu documentation for more details.

Can I install the images onto a real harddisk / SSD?

Sure. Gentoo can do anything. There are some limitations, though; in particular, the image expects a disk with 512-byte logical sectors, which you can check with blockdev:

pinacolada ~ # blockdev --report /dev/sdb
RO    RA   SSZ   BSZ        StartSec            Size   Device
rw   256   512  4096               0   4000787030016   /dev/sdb

So, this is an expert workflow.

Assuming your disk is /dev/sdb and has a size of at least 20 GByte, you can then use the qemu-img utility to decompress the image onto the raw device. Warning: this obviously overwrites the first 20 GByte of /dev/sdb (and with that the existing boot sector and partition table):

qemu-img convert -O raw di-amd64-console.qcow2 /dev/sdb

Afterwards, you can and should extend the new root partition with xfs_growfs, create an additional swap partition behind it, possibly adapt /etc/fstab and the grub configuration, …
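
A sketch of that first step, assuming (hypothetically) that the root filesystem is partition 3 on /dev/sdb:

parted /dev/sdb resizepart 3 100%
mount /dev/sdb3 /mnt
xfs_growfs /mnt    # XFS grows online, while mounted
umount /mnt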

If you are familiar with partitioning and handling disk images you can surely imagine more workflow variants; you might also find the qemu-nbd tool interesting.
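
For instance, qemu-nbd can expose the image as a block device without converting it at all; a sketch, with the partition number assumed:

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 di-amd64-console.qcow2
mount /dev/nbd0p3 /mnt    # inspect or tweak the installation
umount /mnt
qemu-nbd --disconnect /dev/nbd0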

So what are the cloud-init images good for?

Well, for the cloud. Or more precisely, for any environment where a configuration data source for cloud-init is available. If this is already provided for you, the image should work out of the box. If not, well, you can provide the configuration data manually, but be warned that this is a non-trivial task.
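
For local qemu experiments, one well-known route is a NoCloud seed image; a minimal sketch using cloud-localds from cloud-image-utils (user name and key are placeholders):

cat > user-data <<'EOF'
#cloud-config
users:
  - name: gentoo
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... you@example.org
EOF
cloud-localds seed.iso user-data
# attach seed.iso as an additional -drive; cloud-init finds it on boot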

Are you planning to support further architectures?

Eventually yes, in particular (EFI) riscv64 and loongarch64.

Are you planning to support legacy boot?

No, since the placement of the bootloader outside the file system complicates things.

How about disks with 4096 byte sectors?

Well… let's see how much demand this feature finds. If enough people are interested, we should be able to generate an alternative image with a corresponding partition table.

Why XFS as file system?

It has some features that ext4 is sorely missing (reflinks and copy-on-write), but at the same time is rock-solid and reliable.

20 Feb 2025 6:00am GMT

16 Oct 2024

feedPlanet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes; the one relevant to this post is in-memory buffering.

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it out.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening, and a careful analysis of the causes showed that the buffering level was being monitored from different places (at different moments), and sometimes the level was regarded as "enough" and, the moment right after, as "insufficient". This was because the buffering level threshold was a single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark.

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, and now WebKit GStreamer has much more robust buffering code than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

10 Sep 2024

feedPlanet Maemo

Don’t shoot yourself in the foot with the C++ move constructor

Move semantics can be very useful to transfer ownership of resources, but like many other C++ features, they are one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.

For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:

#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
 A() { PF; }
 virtual ~A() { PF; }
 A(A&& other)
 {
  PF;
  std::swap(i, other.i);
 }

 int i = 0;
};

class B : public A {
 public:
 B() { PF; }
 virtual ~B() { PF; }
 B(B&& other)
 {
  PF;
  std::swap(i, other.i);
  std::swap(j, other.j);
 }

 int j = 0;
};

If your project is complex, it would be natural for your code to involve abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of the code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then be moved back to become the subclass again. That's a really bad idea!

Consider this usage of the classes defined before:

int main(int, char* argv[]) {
 printf("Creating B b1\n");
 B b1;
 b1.i = 1;
 b1.j = 2;
 printf("b1.i = %d\n", b1.i);
 printf("b1.j = %d\n", b1.j);
 printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
 A a(std::move(b1));
 printf("a.i = %d\n", a.i);
 // This may be reading memory beyond the object boundaries, which may not be
 // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
 printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);
 printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
 B b2(reinterpret_cast<B&&>(std::move(a)));
 printf("b2.i = %d\n", b2.i);
 printf("b2.j = %d\n", b2.j);
 printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");
 printf("Destroying b2, a, b1\n");
 return 0;
}

If you've read the code, those printfs will already have given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you lose all the subclass-specific data, because no matter whether the original instance was one of a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:

Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690

Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to references as a way to get rid of null pointer issues where possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise, after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object-slicing nuance explained in this post, and when the size of the project and all the other things you have to keep in mind steal your focus, it's easy to forget about it.

So, please remember: never use move semantics to convert your precious subclass instance into a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.

Happy coding!


10 Sep 2024 7:58am GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices, interconnected by a new, efficient but still transparent TDMoIP protocol, in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that - in their early days as an ISP - they were the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed solely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.


For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter, the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

  * Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  * noris.net for sponsoring the co-location
  * sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS as the main entry page, and sub-pages about the individual modules/cards.

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than fit on one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit clock on all of the other ports, no matter which card they are on.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card, and the transmit of the slave ports on the other card, at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT