19 Jun 2025

planet.freedesktop.org

Peter Hutterer: libinput and tablet tool eraser buttons

This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.

In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.

Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs but I wanted something that's a) more generic and b) configurable by the user. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users that use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.

To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This of course works great when you e.g. get a button toggle; it doesn't work quite as well when your state change was one or two event frames ago (because the prox-out of one tool and the prox-in of another tool are at least 2 events apart). Extracting that older state change was like swapping the type of meatballs in an IKEA meal after it's been served - doable in theory, but very messy.

Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline: the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser; it either sees proximity events when those are valid or it sees a button event (depending on configuration)[1].
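The hold-back-and-replay trick can be sketched in a few lines of Python. The event names and the plugin interface below are made up for illustration; the real plugins operate on evdev event frames in C.

```python
# Sketch of a pipeline plugin that holds back a pen proximity-out event
# and only replays it if no eraser proximity-in follows. Event names and
# the Plugin interface are illustrative, not libinput's actual API.

class EraserButtonPlugin:
    def __init__(self, next_plugin):
        self.next = next_plugin
        self.held = []          # pen prox-out events held back

    def feed(self, event):
        if event == "pen-prox-out":
            self.held.append(event)         # don't forward yet
        elif event == "eraser-prox-in" and self.held:
            self.held.clear()               # firmware eraser dance detected:
            self.next.feed("button-press")  # swallow both, emit a button event
        else:
            for held in self.held:          # no eraser followed: replay
                self.next.feed(held)
            self.held.clear()
            self.next.feed(event)

class TabletBackend:
    """Last stage of the pipeline: just records what it sees."""
    def __init__(self):
        self.events = []
    def feed(self, event):
        self.events.append(event)

backend = TabletBackend()
pipeline = EraserButtonPlugin(backend)
for ev in ["pen-prox-in", "pen-prox-out", "eraser-prox-in"]:
    pipeline.feed(ev)
print(backend.events)   # ['pen-prox-in', 'button-press']
```

The backend never sees the fake proximity dance; it either gets a clean button event (as above) or, if the pen simply left proximity, the held-back prox-out is replayed untouched.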

This architectural approach is so successful that I have now switched a bunch of other internal features over to use that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the groundwork for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased your available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.
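For a flavour of what such a plugin does, here's a minimal button-debouncing sketch. The timeout, names and interface are illustrative; libinput's actual implementation is in C and considerably more careful.

```python
# Minimal debouncing plugin sketch: a release that is followed too
# quickly by a press of the same button is treated as contact bounce
# and both events are suppressed. Timings and names are illustrative.

DEBOUNCE_MS = 12

class DebouncePlugin:
    def __init__(self, next_plugin):
        self.next = next_plugin
        self.pending_release = None   # (timestamp_ms, button)

    def feed(self, ts_ms, event, button):
        if event == "release":
            self.pending_release = (ts_ms, button)   # hold it back
            return
        if (event == "press" and self.pending_release
                and self.pending_release[1] == button
                and ts_ms - self.pending_release[0] < DEBOUNCE_MS):
            self.pending_release = None              # bounce: drop both
            return
        self.flush()
        self.next.feed(ts_ms, event, button)

    def flush(self):
        # called where the debounce timer would expire in the real thing
        if self.pending_release:
            ts, button = self.pending_release
            self.pending_release = None
            self.next.feed(ts, "release", button)

class Sink:
    def __init__(self):
        self.events = []
    def feed(self, ts, event, button):
        self.events.append((ts, event, button))

sink = Sink()
dbc = DebouncePlugin(sink)
dbc.feed(0, "press", 1)
dbc.feed(100, "release", 1)   # held back...
dbc.feed(105, "press", 1)     # ...bounce within 12ms: both dropped
dbc.feed(300, "release", 1)
dbc.flush()
print(sink.events)   # [(0, 'press', 1), (300, 'release', 1)]
```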

[1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
[2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.

19 Jun 2025 1:44am GMT

18 Jun 2025


Hari Rana: It’s True, “We” Don’t Care About Accessibility on Linux

Introduction

What do virtue-signalers and privileged people without disabilities who share content about accessibility on Linux being trash have in common? They don't actually care about the group they're defending; they just exploit these victims' unfortunate situation to fuel hate against the groups and projects that are actually trying to make the world a better place.

I never thought I'd be upset enough to write an article about something this sensitive with a clickbait-y title. It's simultaneously demotivating, unproductive, and infuriating. I'm writing this post fully knowing that I could have been working on accessibility in GNOME, but really, I'm so tired of having my mood ruined by privileged people who spend at most 5 minutes writing erroneous posts and then pretend to be oblivious when confronted, while it takes us 5 months of unpaid work to get a quarter of the recognition, let alone acknowledgment, without even accounting for the time "wasted" addressing these accusations.

I'm Not Angry

I'm not mad. I'm absolutely furious and disappointed in the Linux desktop community for staying quiet about any kind of celebration of accessibility advances, while proceeding to share content and cheer for random privileged people from big-name websites or social media who have put a literally negative amount of effort into advancing accessibility on Linux. I'm explicitly saying a negative amount because they actually make it significantly more stressful for us.

None of this is fair. If you're the kind of person who stays quiet when we celebrate huge accessibility milestones, yet shares (or even writes) content that trash-talks the people directly or indirectly writing the fucking software you use for free, you are the reason why accessibility on Linux is shit.

No one in their right mind wants to volunteer in a toxic environment where their efforts are hardly recognized by the public and they are blamed for "not doing enough", especially when they are expected to take in all kinds of harassment, nonconstructive criticism, and slander for a salary of 0$.

There's only one thing I am shamefully confident about: I am not okay in the head. I shouldn't be working on accessibility anymore. The recognition-to-smearing ratio is unbearably low and arguably unhealthy, but leaving people in unfortunate situations behind is also not in accordance with my values.

I've been putting so much effort, quite literally hundreds of hours, into:

  1. thinking of ways to come up with inclusive designs and experiences;
  2. imagining how I'd use something if I had a certain disability or condition;
  3. asking for advice and feedback from people with disabilities;
  4. not getting paid from any company or organization; and
  5. making sure that all the accessibility-related work is in the public, and stays in the public.

Number 5 is especially important to me. I personally go as far as refusing to contribute to projects that are under a permissive license, that use a contributor license agreement, or that use anything riskily similar to these two, because I am of the opinion that no amount of accessibility code should be put behind a paywall or be obscured and proprietary.

Permissive licenses make it painfully easy for abusers to fork a project, build an ecosystem on top of it which may include accessibility-related improvements, and slap a price tag on it, all without publishing any of these additions/changes. Corporations have been doing that for decades, and they'll keep doing it until there's heavy pushback. The only time I would contribute to a project under a permissive license is when the tool is the accessibility infrastructure itself. Contributor license agreements are significantly worse in that regard, so I prefer to avoid them completely.

The Truth Nobody Is Telling You

KDE hired a legally blind contractor to work on accessibility throughout the KDE ecosystem, including complying with the EU Directive to allow selling hardware with Plasma.

GNOME's new executive director, Steven Deobald, is partially blind.

The GNOME Foundation has been investing a lot of money to improve accessibility on Linux, for example funding Newton, a Wayland accessibility project, and AccessKit integration into GNOME technologies. Around 250,000€ (a quarter) of the STF budget was spent solely on accessibility. And get this: the people managing these contracts and the communication with funders are all volunteers; they're ensuring people with disabilities earn a living, but aren't receiving anything in return. These are the real heroes who deserve endless praise.

The Culprits

Do you want to know who we should be blaming? Those who are benefiting from the community's effort while investing very little to nothing into accessibility.

This includes a significant portion of the companies sponsoring GNOME and even companies that employ developers to work on GNOME. These companies are the ones making hundreds of millions, if not billions, in net profit indirectly from GNOME, and investing little to nothing into accessibility. However, the worst offenders are the companies actively using GNOME without ever donating anything to fund the project.

Some companies actually do put in an effort, like Red Hat and Igalia. Red Hat employs people with disabilities to work on accessibility in GNOME, one of whom I actually rely on when making accessibility-related contributions to GNOME. Igalia funds Orca, GNOME's screen reader, which is something the Linux community should be thankful for.

The privileged people who keep sharing and making content about accessibility on Linux being bad are, in my opinion, significantly worse than the companies profiting off of GNOME. Companies at least stay quiet, but the privileged people add an additional burden on contributors by either trash-talking in their content or sharing trash-talkers. Once again, no volunteer deserves to be shamed and ridiculed for "not doing enough", since no one is entitled to their free time but themselves.

My Work Is Free but the Worth Is Not

Earlier in this article, I mentioned, and I quote: "I've been putting so much effort, quite literally hundreds of hours […]". Let's put an emphasis on "hundreds". Here's a list of most of the accessibility-related merge requests that have been incorporated into GNOME:

GNOME Calendar's !559 addresses an issue where event widgets could not be focused and activated with the keyboard. That issue had been present since the very beginning of GNOME Calendar's existence, which is to say: for more than a decade. This alone was a two-week effort. Despite it being less than 100 lines of code, nobody truly knew what to do to get them working properly before. This was followed up by !576, which made the event buttons usable in the month view with a keyboard, and then !587, which properly conveys the states of the widgets. Both combined are another two-week effort.

Then, at the time of writing this article, !564 adds 640 lines of code, which is something I've been volunteering on for more than a month, excluding the time before I opened the merge request.

Let's do a little bit of math together with 'only' !559, !576, and !587. Just as a reminder: these three merge requests are a four-week effort in total, which I volunteered full-time: 8 hours a day, or 160 hours a month. I compiled a small table that illustrates its worth:

(Average wage figures are for professionals working on digital accessibility, per WebAIM.)

Country                  | Average Wage (WebAIM) | Total in Local Currency (160 hours) | Exchange Rate | Total (CAD)
Canada                   | 58.71$ CAD/hour       | 9,393.60$ CAD                       | N/A           | 9,393.60$
United Kingdom           | 48.20£ GBP/hour       | 7,712£ GBP                          | 1.8502        | 14,268.74$
United States of America | 73.08$ USD/hour       | 11,692.80$ USD                      | 1.3603        | 15,905.72$

To summarize the table: those three merge requests that I worked on for free were worth 9,393.60$ CAD (6,921.36$ USD) in total at a minimum.
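For the curious, the table's arithmetic is easy to reproduce; the rates and exchange rates below are the ones quoted above.

```python
# Reproduce the table above: 160 hours at each country's average hourly
# wage for digital-accessibility professionals, converted to CAD.
HOURS = 160
rows = [
    # (country, hourly wage in local currency, exchange rate to CAD)
    ("Canada",         58.71, 1.0),
    ("United Kingdom", 48.20, 1.8502),
    ("United States",  73.08, 1.3603),
]
for country, wage, fx in rows:
    local = wage * HOURS
    print(f"{country}: {local:,.2f} local -> {local * fx:,.2f} CAD")
```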


Now just imagine how I feel when I'm told I'm "not doing enough", either directly or indirectly. Whenever anybody says we're "not doing enough", I feel very much included, and I will absolutely take it personally.

It All Trickles Down to "GNOME Bad"

I fully expect everything I say in this article to be dismissed or taken out of context on the basis of ad hominem, simply because I'm a GNOME Foundation member / regular GNOME contributor. Either that, or be subjected to whataboutism because another GNOME contributor made a comment that had nothing to do with mine but 'is somewhat related to this topic and therefore should be pointed out just because it was maybe-probably-possibly-perhaps ableist'. I can't speak for other regular contributors, but I presume that they don't feel comfortable talking about this because they dared to be a GNOME contributor. At least, that's how I felt for the longest time.

Any accessibility-related content that doesn't dunk on GNOME doesn't see as much engagement, activity, and reaction as content that actively attacks GNOME, regardless of whether the criticism is fair. Regular GNOME contributors like myself don't always feel comfortable defending ourselves because dismissing GNOME developers just for being GNOME developers is apparently a trend…

Final Word

Dear people with disabilities,

I won't insist that we're either your allies or your enemies; I have no right to claim that whatsoever.

I wasn't looking for recognition. I wasn't looking for acknowledgment since the very beginning either. I thought I would be perfectly capable of quietly improving accessibility in GNOME, but because of the overall community's persistence in smearing developers' efforts without actually tackling the underlying issues within the stack, I think I'm justified in at least demanding acknowledgment from the wider community.

I highly doubt it will happen anyway, because the Linux community feeds off of drama and trash-talking instead of being productive, without realizing that this demotivates active contributors while pushing away potential contributors. And worst of all: people with disabilities are the ones affected the most, because they are misled into thinking that we don't care.

It's so unfair and infuriating that all the work I do and share online gains very little activity compared to random posts and articles from privileged people without disabilities ranting about the Linux desktop's accessibility being trash. It doesn't help that I become severely anxious when sharing accessibility-related work, for fear of showing signs of virtue-signaling. The last thing I want is to (unintentionally) give the impression of pretending to care about accessibility.

I beg you, please keep writing banger posts like fireborn's I Want to Love Linux. It Doesn't Love Me Back series and their interluding post. We need more people with disabilities to keep reminding developers that you exist and your conditions and disabilities are a spectrum and not absolute.

We simultaneously need more interest from people with disabilities to contribute to FOSS, and the wider community to be significantly more intolerant of bullies who profit from smearing and demotivating people who are actively trying. We could also take inspiration from "Accessibility on Linux sucks, but GNOME and KDE are making progress" by OSNews, as they acknowledge that accessibility on Linux is suboptimal while recognizing the efforts of GNOME and KDE.

18 Jun 2025 12:00am GMT

11 Jun 2025

feedplanet.freedesktop.org

Lennart Poettering: ASG! 2025 CfP Closes Tomorrow!

The All Systems Go! 2025 Call for Participation Closes Tomorrow!

The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We'd like to invite you to submit your proposals for consideration to the CFP submission site quickly!

11 Jun 2025 10:00pm GMT

09 Jun 2025


Dave Airlie (blogspot): radv: vulkan VP9 video decode

The Vulkan WG has released VK_KHR_video_decode_vp9. I did initial work on a Mesa extensions for this a good while back, and I've updated the radv code with help from AMD and Igalia to the final specification.

There is an open MR[1] for radv to add support for VP9 decoding on navi10+ with the latest firmware images in linux-firmware. It is currently passing all VK-GL-CTS tests for VP9 decode.

Adding this decode extension is a big milestone for me, as I think all the reasons I originally got involved in Vulkan Video are now covered and signed off. There is still lots to do and I'll stay involved, but it's been great to see the contributions from others and how there is a bit of a Vulkan Video community upstream in Mesa.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/35398

09 Jun 2025 7:42pm GMT

02 Jun 2025


Mike Blumenkrantz: Pruning

Time Constraints

As many of you have seen, I've been deleting a lot of code lately. There's a reason for this, aside from it being a really great feeling to just obliterate some entire subsystem, and that reason is time.

There are 24 hours in a day. You sleep for 6. You work for 8. Spend an hour eating, and then you're down to only 9 hours at the gym minus a few minutes to manage those pesky social and romantic obligations. That doesn't leave a lot of time for mucking around in random codebases.

For example. Suppose I maintain a Gallium driver. This likely means I know my way around that driver, various related infrastructure, the GL state tracker, NIR, maybe enough GLSL to rubber stamp some MRs from @tarceri, and I know which channel on IRC in which to scream when my MRs get blocked by something that is definitely not me failing to test-compile the patches before merging them. Everything outside of these areas is out of scope for this hypothetical version of me, which means it may as well be a black box.

Now imagine I am all the maintainers of all the Gallium drivers. My collective scope has expanded. I am the master of all things src/gallium/drivers. I wave my hand and src/mesa obeys my whim. CI is always green, except when matters beyond the control of mere mortals conspire against me. I have a blog. News sites cover my MRs as though OpenGL is still relevant.

But there are still black boxes. Vulkan drivers, for example, are a mystery. CI is an artifact from a distant civilization which, though alien, ensures everything functions as I know it does. And then there are the esoteric parts of the tree in src/gallium/frontends. People I've never met may file bug reports against my drivers with tags for one of these components. Who is sexypixel420, what is a teflon, and why is that my problem?

Maintenance

A key aspect of any good Open Source project is maintenance. This is, relatively speaking, how well it is expected to function if Joe Randomguy installs and runs it. Maintenance of projects requires people to work on them and fix bugs. These are maintainers. When a project has a maintainer, we say that it is maintained. A project which does not have a maintainer is unmaintained. Simple enough.

Mesa is a project comprised of many subprojects. We call this an ecosystem. An ecosystem functions when all its projects work together in harmony towards a common goal, in this case blasting out those pixels into as many green triangles per second as possible.

What happens when a maintained project has an issue? Well, that's when the maintainer steps in to fix it (assuming some other random contributor doesn't, but we're assuming a very low bus factor here). Tickets are filed, maintainers analyze and fix, and end users are happy because the software they randomly installed happens to work as they expect.

But what happens when a project with no maintainer has an issue? In short, nothing. That issue is filed away into the void, never to be resolved ever in a million years (unless some kind soul happens to pitch an unreviewed #TrustMeBuddy patch into the repo, but this is rare). These issues accumulate, and nobody even notices because nobody is subscribed to that label on the issue tracker. The project is derelict. If the project accumulates enough of these issues, distributions may even stop packaging it; packaging a defective piece of software creates downstream tickets for packagers, and much of the time they are not looking to drag their editor upstream and solve all the problems because they have more than enough problems already with packaging.

Now here's where things start to get messy: what happens when an unmaintained subproject in an ecosystem has an issue? Some might be tempted to say this is the same as the above scenario, but it's subtly different because the issue might not be directly user-facing. It might be "what happens in this codebase if I change this thing over here?" And if a codebase is unmaintained, then nobody knows what happens. The code can be read, but without a maintainer who possesses deep knowledge about the intent of the machinery, such shallow readings can only do so much.

This Is Why We Prune

Like trees with dead limbs, dead parts of Open Source projects must be periodically pruned to keep the rest of the project healthy. Having all these dead limbs around creates a larger surface area for the ecosystem, which creates the potential for unintended side effects (and bizarro bugs from unknown components) to manifest. It also has a hidden cost, which is burnout. When a maintainer must step outside their area in an attempt to triage something in a codebase that they do not know, instinctual fear and distaste of Other Code kicks in: this code is terrible because I didn't write it. Also what the fuck is with this formatting? Is that a same-line brace with no space after the closing parens?! That's it, I'm clocking it for today.

We've all been out in the jungle with some code that may as well be written in dirt. It sucks. And any time you're stuck out in the dirt for more than a couple minutes, you want to be able to call in an expert to bail you out. Those experts are called maintainers. When you enter territory which is unmaintained, you're effectively stranded unless you can cut your way out. If you can't cut your way out, you're stuck, and being stuck is frustrating, and being frustrated makes you not want to work on your thing anymore, which is how you end up losing maintainers. One of the ways, that is, because we're all just one sarcastic winky-face away from a ranty ragequit mail.

Now is when I reveal that this long-winded, circuitous explanation is not actually about everyone's favorite D3D9 state tracker (pour one out for a legend) or whatever the hell XA was. I'm talking about last week when I deleted legacy renderpass support from Zink. It's been a long time coming, and realistically I should have done this sooner.

Zink Struggles

Like Mesa, Zink is an ecosystem supporting a wide variety of projects, but it's also a single project with a single maintainer. A bug in RadeonSI code will not affect me, but a bug in Zink code affects me even if it is not code which has been tested or even used in the past 5 years. While it's likely true that any code in Zink is code that I have written, there's a big difference between code written in the past year and code written back in like 2020: in the former case I probably know what's happening and why, and in the latter case it's more likely that I'm confused how the code still exists.

Vulkan is a moving target. Every month brings changes and improvements, fun new extensions to misuse, and long-lost validation errors to tell us that nobody actually knows how to use SPIR-V. Over time, these new features and changes become more widely adopted, which makes them reliable, but historically Zink has been very lax in requiring "new" features.

There is this idea that Zink should be able to provide high-level OpenGL support to any device which provides any amount of conformant Vulkan support. It's a neat idea: provide Vulkan 1.0, and you get GL 4.6 + ES 3.2 for free. There are, however, a number of issues with this pie-in-the-sky thinking:

I'm not saying all this as a cry for help, though help is always appreciated, encouraged, and welcomed. This is a notice that I'm going to be pruning some old and unused codepaths to keep things manageable. Zink isn't going to work on Vulkan 1.0; that goal is a nice idea but not achievable, especially when there is fierce competition like ANGLE gunning for every fraction of a perf percent they can get. I don't foresee requiring any new extensions/features the day they ship, but I also don't foresee keeping legacy fallbacks for codepaths which should be standard by now.

TL;DR: If you want Zink on old drivers/hardware, try Mesa 25.1. Everyone else, business as usual.

02 Jun 2025 12:00am GMT

26 May 2025


Mike Blumenkrantz: Monthly Post

Well.

I had intended to be writing this post over a month ago, but [for reasons] I'm here writing it now.

Way back in March of '25, I was doing work that I could talk about publicly, and a sizable chunk of that was working to improve Gallium. The stopping point of that work was the colossal !34054, which roughly amounts to "remove a single * from a struct". The result was rewriting every driver and frontend in the tree to varying extents:

  260 files changed, 2179 insertions(+), 2331 deletions(-)

So as I was saying, I expected to merge this right after the 25.1 branchpoint back around mid-April, which would have allowed me to keep my train of thought and momentum. Sadly this did not come to pass, and as a result I've forgotten most of the key points of that blog post (and related memes). But I still have this:

pipe-surface.png

But Hwhy?

As readers of this blog, you're all very smart. You can smell bullshit a country mile away. That's why I'm going to treat you like the intelligent rhinoceroses you are and tell you right now that I no longer have any of the performance statistics I'd gathered for this post. We're all gonna have to go on vibes and #TrustMeBuddy energy. I'll begin by posing a hypothetical to you.

Suppose you're running a complex application. Suppose this application has threads which share data. Now suppose you're running this on an AMD CPU. What is your most immediate, significant performance concern?

If you said atomic operations, you are probably me from way back in February. Take that time machine and get back where you belong. The problems are not fixed.

AMD CPUs are bad with atomic operations. It's a feature. No, I will not go into more detail; months have passed since I read all those dissertations, and I can't remember what I ate for breakfast an hour ago. #TrustMeBuddy.

I know what you're thinking. Mike, why aren't you just pinning your threads?

Well, you incredibly handsome reader, the thing is thread pinning is a lie. You can pin threads by setting their affinity to keep them on the same CCX, and L3 cache, and blah blah blah, and even when you do that sometimes it has absolutely zero fucking effect and your fps is still 6. There is no explanation. PhDs who work on atomic operations in compilers cannot explain this. The dark lord Yog-Sothoth cowers in fear when pressed for details. Even tariffs on performance penalties cannot mitigate this issue.

In that sense, when you have your complex threadful application which uses atomic operations on an AMD CPU, and when you want to achieve the same performance it can have for free on a different type of CPU, you have four options:

Obviously none of these options are very appealing. If you have a complex application, you need threads, you need your AMD CPU with its bazillion cores, you need atomic operations, and, being realistic, the situation here with hardware/kernel/compiler is not going to improve before AI takes over my job and I quit to become a full-time novel writer in the budding rom-pixel-com genre.

While eliminating all atomic operations isn't viable, eliminating a certain class of them is theoretically possible. I'm talking, of course, about reference counting, the means by which C developers LARP as Java developers.

In Mesa, nearly every object is reference counted, especially the ones which have no need for garbage collection. Haters will scream REWRITE IT IN RUST, but I'm not going to do that until someone finally rewrites the GLSL compiler in Rust to kick off the project. That's right, I'm talking to all you rustaceans out there: do something useful for once instead of rewriting things that aren't the best graphics stack on the planet.

A great example of this reference counting overreliance was sampler views, which I took a hatchet to some months ago. This is a context-specific object which has a clear ownership pattern. Why was it reference counted? Science cannot explain this, but psychologists will tell you that engineers will always follow existing development patterns without question regardless of how non-performant they may be. Don't read any zink code to find examples. #TrustMeBuddy.

Sampler views were a relatively easy pickup, more like a proof of concept to see if the path was viable. Upon succeeding, I immediately rushed to the hardest possible task: the framebuffer. Framebuffer surfaces can be shared between contexts, which makes them extra annoying to solve in this case. For that reason, the solution was not to try a similar approach; it was to step back and analyze the usage and ownership pattern.

Why pipe_surface*?

Originally the pipe_surface object was used solely for framebuffers, but this concept has since metastasized to clear operations and even video. It's a useful object at a technical level: it provides a resource, format, miplevel, and layers. But does it really need to be an object?

Deeper analysis said no: the vast majority of drivers didn't use this for anything special, and few drivers invested in architecture based on this being an actual object vs just having the state available. The majority of usage was pointlessly passing the object around because the caller handed it off to another function.

Of course, in the process of this analysis, I noted that zink was one of the heaviest investors into pipe_surface*. Pour one out for my past decision-making process. But I pulled myself up by my bootstraps, and I rewrote every driver and every frontend, and now whenever the framebuffer changes there are at least num_attachments * (frontend_thread + tc_thread + driver_thread) fewer atomic operations.
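To caricature the change (illustrative Python; the real refactor is across Gallium's C interfaces, and all names below are made up):

```python
# Caricature of the refactor: instead of handing around a reference-counted
# pipe_surface object (an atomic increment/decrement at every hand-off),
# the framebuffer state carries plain surface descriptions by value.
from dataclasses import dataclass

@dataclass(frozen=True)
class SurfaceInfo:
    resource: str      # which resource this view refers to
    fmt: str
    level: int
    first_layer: int
    last_layer: int

# Before: a shared object whose lifetime is managed by refcounts.
class RefCountedSurface:
    def __init__(self, info: SurfaceInfo):
        self.info = info
        self.refcount = 1          # stand-in for an atomic counter

    def reference(self):
        self.refcount += 1         # one atomic op per hand-off

    def unreference(self) -> bool:
        self.refcount -= 1         # and another one on release
        return self.refcount == 0  # True when the object should be freed

# After: the framebuffer just copies the state; threads that pass it
# along copy a few plain words instead of touching a shared cache line.
def set_framebuffer(state: dict, attachments: list):
    state["attachments"] = list(attachments)

fb: dict = {}
set_framebuffer(fb, [SurfaceInfo("tex0", "RGBA8", 0, 0, 0)])
print(fb["attachments"][0].fmt)   # RGBA8
```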

More Work

This saga is not over. There's still base buffers and images to go, which is where a huge amount of performance is lost if you are hitting an affected codepath. Ideally those changes will be smaller and more concentrated than the framebuffer refactor.

Ideally I will find time for it.

#TrustMeBuddy.

26 May 2025 12:00am GMT

André Almeida: Linux 6.15, DRM scheduler, wedged events, sched_ext and more

Linux 6.15 has just been released, bringing a lot of new features.

As always, I suggest having a look at the Kernel Newbies summary. Now, let's have a look at Igalia's contributions.

DRM wedged events

In 3D graphics APIs such as Vulkan and OpenGL, there are mechanisms that applications can rely on to check whether the GPU has reset (you can read more about this in the kernel documentation). However, there was no generic mechanism to inform userspace that a GPU reset has happened. This is useful because in some cases the reset affects not only the app involved but the whole graphics stack, which then needs some action to recover, like doing a module rebind or even a bus reset to recover the hardware. For this release, we helped to add a userspace event for this, so a daemon or the compositor can listen to it and trigger some recovery measure after the GPU has reset. Read more in the kernel docs.
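A daemon or compositor would typically pick this up as a udev event. The sketch below parses such an event payload; the WEDGED= property follows the general shape described in the kernel docs, but treat the names and payload details here as illustrative.

```python
# Sketch of a daemon-side handler for DRM "wedged" uevents. The kernel
# reports the recommended recovery method in a WEDGED= property; this
# payload format is the usual NUL-separated KEY=VALUE uevent shape, but
# the specific keys/values here are illustrative rather than exact.

def parse_uevent(payload: bytes) -> dict:
    """Parse a NUL-separated KEY=VALUE uevent payload into a dict."""
    props = {}
    for field in payload.split(b"\0"):
        if b"=" in field:
            key, _, value = field.partition(b"=")
            props[key.decode()] = value.decode()
    return props

def recovery_actions(props: dict) -> list:
    """Return the recovery methods a daemon/compositor should attempt."""
    if "WEDGED" not in props:
        return []
    return props["WEDGED"].split(",")

payload = b"ACTION=change\0DEVNAME=dri/card0\0WEDGED=rebind\0"
props = parse_uevent(payload)
print(recovery_actions(props))   # ['rebind']
```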

DRM scheduler work

In the DRM scheduler area, in preparation for the future scheduling improvements, we worked on cleaning up the code base, better separation of the internal and external interfaces, and adding formal interfaces at places where individual drivers had too much knowledge of the scheduler internals.

General GPU/DRM stack

In the wider GPU stack area we optimised the most frequent dma-fence single fence merge operation to avoid memory allocations and array sorting. This should slightly reduce the CPU utilisation with workloads which use the DRM sync objects heavily, such as the modern composited desktops using Vulkan explicit sync.
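The shape of that optimisation can be sketched like this (illustrative Python, not the kernel code): after dropping already-signaled fences, the merge very often reduces to a single fence, and that case can return early with no allocation and no sorting.

```python
# Illustrative sketch of the dma-fence merge fast path: when only one
# unsignaled fence remains, return it directly instead of allocating a
# fence-array container and sorting its entries.

class Fence:
    def __init__(self, seqno: int, signaled: bool = False):
        self.seqno = seqno
        self.signaled = signaled

def merge_fences(fences):
    live = [f for f in fences if not f.signaled]
    if not live:
        return None              # everything already signaled
    if len(live) == 1:
        return live[0]           # fast path: no allocation, no sort
    # slow path: build a sorted container (stand-in for a fence array)
    return sorted(live, key=lambda f: f.seqno)

a, b = Fence(1, signaled=True), Fence(2)
merged = merge_fences([a, b])
print(merged is b)   # True: the single live fence is returned as-is
```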

Some releases ago, we helped to enable async page flips in the atomic DRM uAPI. So far, this feature was only enabled for the primary plane. In this release, we added a mechanism for the driver to decide which planes can perform async flips, and used it to enable overlay planes to do async flips in the AMDGPU driver.

We also fixed a bug in the DRM fdinfo common layer which could cause a use-after-free after driver unbind.

Intel Xe driver improvements

On the Intel GPU specific front we worked on adding better Alderlake-P support to the new Intel Xe driver by identifying and adding missing hardware workarounds, fixed the workaround application in general and also made some other smaller improvements.

sched_ext

When developing and optimizing a sched_ext-based scheduler, it is important to understand the interactions between the BPF scheduler and the in-kernel sched_ext core. A mismatch between what the BPF scheduler developer expects and how the sched_ext core actually works is often the source of bugs or performance issues.

To address this, we added a mechanism to count and report the internal events of the sched_ext core. This significantly improves the visibility of subtle edge cases which might otherwise easily slip by. So far, eight events have been added, and they can be monitored through a BPF program, sysfs, or a tracepoint.

A few fewer bugs

As usual, as part of our work on diverse projects, we keep an eye on automated test results to look for potential security and stability issues in different kernel areas. We're happy to have contributed to making this release a bit more robust by fixing bugs in memory management, network (SCTP), ext4, suspend/resume and other subsystems.


This is the complete list of Igalia's contributions for this release:

Authored (75)

André Almeida

Angelos Oikonomopoulos

Bhupesh

Changwoo Min

Gavin Guo

Guilherme G. Piccoli

Luis Henriques

Maíra Canal

Melissa Wen

Ricardo Cañuelo Navarro

Rodrigo Siqueira

Thadeu Lima de Souza Cascardo

Tvrtko Ursulin

Reviewed (30)

André Almeida

Christian Gmeiner

Iago Toral Quiroga

Jose Maria Casanova Crespo

Luis Henriques

Maíra Canal

Melissa Wen

Rodrigo Siqueira

Thadeu Lima de Souza Cascardo

Tvrtko Ursulin

Tested (2)

Changwoo Min

Guilherme G. Piccoli

Acked (12)

Changwoo Min

Maíra Canal

Tvrtko Ursulin

Maintainer SoB (2)

Maíra Canal

Tvrtko Ursulin

26 May 2025 12:00am GMT

23 May 2025


Hans de Goede: IPU6 cameras with ov02c10 / ov02e10 now supported in Fedora

I'm happy to share that 3 major IPU6 camera related kernel changes from linux-next have been backported to Fedora and have been available for about a week now in the Fedora kernel-6.14.6-300.fc42 (or later) package:

  1. Support for the OV02C10 camera sensor; this should e.g. enable the camera to work out of the box on all Dell XPS 9x40 models.
  2. Support for the OV02E10 camera sensor; this should e.g. enable the camera to work out of the box on Dell Precision 5690 laptops. When combined with item 3 below and the USBIO drivers from rpmfusion, this should also enable the camera on other laptop models, like the Dell Latitude 7450.
  3. Support for the special handshake GPIO used to turn on the sensor and allow sensor i2c access on various new laptop models using the Lattice MIPI aggregator FPGA / USBIO chip.


If you want to give this a test using the libcamera-softwareISP FOSS stack, run the following commands:

sudo rm -f /etc/modprobe.d/ipu6-driver-select.conf
sudo dnf update 'kernel*'
sudo dnf install libcamera-qcam
reboot
qcam

Note that washed-out colors and/or a slightly over- or under-exposed image are expected behavior at the moment; this is due to the software ISP needing more work to improve the image quality. If your camera still does not work after these changes and you've not already filed a bug for this camera, please file a bug following these instructions.

See my previous blogpost on how to also test Intel's proprietary stack from rpmfusion if you also have that installed.


23 May 2025 4:09pm GMT

Hans de Goede: IPU6 FOSS and proprietary stack co-existence

Since the set of rpmfusion intel-ipu6-kmod + ipu6-camera-* package updates from last February, the FOSS libcamera-softwareISP stack and Intel's proprietary stack using the Intel hardware ISP can co-exist on Fedora systems, sharing the mainline IPU6-CSI2 receiver driver.

Because of this it is no longer necessary to blacklist the kernel modules from the other stack. Unfortunately, when the rpmfusion packages first generated "/etc/modprobe.d/ipu6-driver-select.conf" for blacklisting, this file was not marked as "%ghost" in the specfile, and with the February ipu6-camera-hal the file has been removed from the package. This means that if you've jumped from an old ipu6-camera-hal, where the file was not marked as "%ghost", directly to the latest, you may still have the modprobe.d conf file around causing issues. To fix this run:

sudo rm -f /etc/modprobe.d/ipu6-driver-select.conf

and then reboot. I'll also add this as a post-install script to the ipu6-camera-hal packages, to fix systems that are broken because of this.

If you want the rpmfusion packages because your system needs the USBIO drivers, but you do not want the proprietary stack, you can run the following command to disable the proprietary stack:

sudo ipu6-driver-select foss

Or if you have disabled the proprietary stack in the past and want to give it a try, run:

sudo ipu6-driver-select proprietary

To test switching between the 2 stacks in Firefox, go to Mozilla's webrtc test page and click on the "Camera" button. You should now get a camera permission dialog with 2 cameras: "Built in Front Camera" and "Intel MIPI Camera (V4L2)". The "Built in Front Camera" is the FOSS stack and the "Intel MIPI Camera (V4L2)" is the proprietary stack. Note the FOSS stack will show a strongly zoomed-in (cropped) image; this is caused by the GUM test page and will not happen in e.g. google-meet.

Unfortunately switching between the 2 cameras in jitsi does not work well. The jitsi camera selector tries to show a preview of both cameras at the same time, and while one stack is streaming the other stack cannot access the camera. You should be able to switch by: 1. selecting the camera you want, 2. closing the jitsi tab, 3. waiting a few seconds for the camera to stop streaming, 4. opening jitsi in a new tab.

Note: I already mentioned most of this in my previous blog post, but it was a bit buried there.


23 May 2025 3:42pm GMT

21 May 2025


Peter Hutterer: libinput and Lua plugins

First of all, what's outlined here should be available in libinput 1.29 but I'm not 100% certain on all the details yet so any feedback (in the libinput issue tracker) would be appreciated. Right now this is all still sitting in the libinput!1192 merge request. I'd specifically like to see some feedback from people familiar with Lua APIs. With this out of the way:

Come libinput 1.29, libinput will support plugins written in Lua. These plugins sit logically between the kernel and libinput and allow modifying the evdev device and its events before libinput gets to see them.

The motivation for this is a set of issues we know how to fix but cannot actually implement and/or ship without breaking other devices. One example is the inverted Logitech MX Master 3S horizontal wheel. libinput ships quirks for the USB/Bluetooth connection but not for the Bolt receiver. Unlike the Unifying Receiver, the Bolt receiver doesn't give the kernel sufficient information to know which device is currently connected. This means our quirks could only apply to the Bolt receiver (and thus any mouse connected to it) - a rather bad idea, since we'd break every other mouse using the same receiver. Another example is an issue with worn-out mouse buttons - on that device the behavior was predictable enough, but any heuristic would also catch a lot of legitimate button presses. That's fine when you know your mouse is slightly broken and at least it works again, but it's not something we can ship as a general solution. There are plenty more examples like that - custom pointer deceleration, different disable-while-typing behavior, etc.

libinput has quirks but they are internal API and subject to change without notice at any time. They're very definitely not for configuring a device and the local quirk file libinput parses is merely to bridge over the time until libinput ships the (hopefully upstreamed) quirk.

So the obvious solution is: let the users fix it themselves. And this is where the plugins come in. They are not full access into libinput, they are closer to a udev-hid-bpf in userspace. Logically they sit between the kernel event devices and libinput: input events are read from the kernel device, passed to the plugins, then passed to libinput. A plugin can look at and modify devices (add/remove buttons for example) and look at and modify the event stream as it comes from the kernel device. For this libinput changed internally to now process something called an "evdev frame" which is a struct that contains all struct input_events up to the terminating SYN_REPORT. This is the logical grouping of events anyway but so far we didn't explicitly carry those around as such. Now we do and we can pass them through to the plugin(s) to be modified.

The aforementioned Logitech MX master plugin would look like this: it registers itself with a version number, then sets a callback for the "new-evdev-device" notification and (where the device matches) we connect that device's "evdev-frame" notification to our actual code:

libinput:register(1) -- register plugin version 1
libinput:connect("new-evdev-device", function (_, device)
    if device:vid() == 0x046D and device:pid() == 0xC548 then
        device:connect("evdev-frame", function (_, frame)
            for _, event in ipairs(frame.events) do
                if event.type == evdev.EV_REL and 
                   (event.code == evdev.REL_HWHEEL or 
                    event.code == evdev.REL_HWHEEL_HI_RES) then
                    event.value = -event.value
                end
            end
            return frame
        end)
    end
end)

This file can be dropped into /etc/libinput/plugins/10-mx-master.lua and will be loaded on context creation. I'm hoping the approach using named signals (similar to e.g. GObject) makes it easy to add different calls in future versions. Plugins also have access to a timer so you can filter events and re-send them at a later point in time. This is useful for implementing something like disable-while-typing based on certain conditions.
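Installing such a plugin is then just a matter of dropping the file into place (the directory is the one named above; the numeric prefix matters because, as noted below, files are loaded in sorted order):

```shell
# create the plugin directory and install the example plugin;
# plugins are loaded in sorted filename order on context creation
sudo mkdir -p /etc/libinput/plugins
sudo install -m 644 10-mx-master.lua /etc/libinput/plugins/
```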

So why Lua? Because it's very easy to sandbox. I very explicitly did not want the plugins to be a side-channel to get into the internals of libinput - specifically no IO access to anything. This ruled out using C (or anything that's a .so file, really) because those would run a) in the address space of the compositor and b) be unrestricted in what they can do. Lua solves this easily. And, as a nice side-effect, it's also very easy to write plugins in.[1]

Whether plugins are loaded or not will depend on the compositor: an explicit call to set up the paths to load from and to actually load the plugins is required. No run-time plugin changes at this point either, they're loaded on libinput context creation and that's it. Otherwise, all the usual implementation details apply: files are sorted and if there are files with identical names the one from the highest-precedence directory will be used. Plugins that are buggy will be unloaded immediately.

If all this sounds interesting, please have a try and report back any APIs that are broken, or missing, or generally ideas of the good or bad persuasion. Ideally before we ship it and the API is stable forever :)

[1] Benjamin Tissoires actually had a go at WASM plugins (via rust). But ... a lot of effort for rather small gains over Lua.

21 May 2025 4:09am GMT

19 May 2025


Melissa Wen: A Look at the Latest Linux KMS Color API Developments on AMD and Intel

This week, I reviewed the last available version of the Linux KMS Color API. Specifically, I explored the proposed API by Harry Wentland and Alex Hung (AMD), their implementation for the AMD display driver and tracked the parallel efforts of Uma Shankar and Chaitanya Kumar Borah (Intel) in bringing this plane color management to life. With this API in place, compositors will be able to provide better HDR support and advanced color management for Linux users.

To get a hands-on feel for the API's potential, I developed a fork of drm_info compatible with the new color properties. This allowed me to visualize the display hardware color management capabilities being exposed. If you're curious and want to peek behind the curtain, you can find my exploratory work on the drm_info/kms_color branch. The README there will guide you through the simple compilation and installation process.

Note: You will need to update libdrm to match the proposed API. You can find an updated version in my personal repository here. To avoid potential conflicts with your official libdrm installation, you can compile and install it in a local directory. Then, use the following command: export LD_LIBRARY_PATH="/usr/local/lib/"
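As a sketch of that local build (libdrm builds with meson; the /usr/local prefix here matches the LD_LIBRARY_PATH above):

```shell
# build the patched libdrm and install it under /usr/local,
# leaving the distribution's libdrm untouched
meson setup build --prefix=/usr/local
ninja -C build
sudo ninja -C build install
export LD_LIBRARY_PATH="/usr/local/lib/"
```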

In this post, I invite you to familiarize yourself with the new API that is about to be released. You can start by doing as I did below: just deploy a custom kernel with the necessary patches and visualize the interface with the help of drm_info. Or, better yet, if you are a userspace developer, you can start developing use cases by experimenting with it.

The more eyes the better.

KMS Color API on AMD

The great news is that AMD's driver implementation for plane color operations is being developed right alongside their Linux KMS Color API proposal, so it's easy to apply to your kernel branch and check it out. You can find details of their progress in the AMD's series.

I just needed to compile a custom kernel with this series applied, intentionally leaving out the AMD_PRIVATE_COLOR flag. The AMD_PRIVATE_COLOR flag guards driver-specific color plane properties, which experimentally expose hardware capabilities while the generic KMS plane color management interface is not yet available.

If you don't know or don't remember the details of AMD driver specific color properties, you can learn more about this work in my blog posts [1] [2] [3]. As driver-specific color properties and KMS colorops are redundant, the driver only advertises one of them, as you can see in AMD workaround patch 24.

So, with the custom kernel image ready, I installed it on a system powered by AMD DCN3 hardware (i.e. my Steam Deck). Using my custom drm_info, I could clearly see the Plane Color Pipeline with eight color operations as below:

└───"COLOR_PIPELINE" (atomic): enum {Bypass, Color Pipeline 258} = Bypass
    ├───Bypass
    └───Color Pipeline 258
        ├───Color Operation 258
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"CURVE_1D_TYPE" (atomic): enum {sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF} = sRGB EOTF
        ├───Color Operation 263
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = Multiplier
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"MULTIPLIER" (atomic): range [0, UINT64_MAX] = 0
        ├───Color Operation 268
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 3x4 Matrix
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"DATA" (atomic): blob = 0
        ├───Color Operation 273
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"CURVE_1D_TYPE" (atomic): enum {sRGB Inverse EOTF, PQ 125 Inverse EOTF, BT.2020 OETF} = sRGB Inverse EOTF
        ├───Color Operation 278
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D LUT
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   ├───"SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
        │   ├───"LUT1D_INTERPOLATION" (immutable): enum {Linear} = Linear
        │   └───"DATA" (atomic): blob = 0
        ├───Color Operation 285
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 3D LUT
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   ├───"SIZE" (atomic, immutable): range [0, UINT32_MAX] = 17
        │   ├───"LUT3D_INTERPOLATION" (immutable): enum {Tetrahedral} = Tetrahedral
        │   └───"DATA" (atomic): blob = 0
        ├───Color Operation 292
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"CURVE_1D_TYPE" (atomic): enum {sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF} = sRGB EOTF
        └───Color Operation 297
            ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D LUT
            ├───"BYPASS" (atomic): range [0, 1] = 1
            ├───"SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
            ├───"LUT1D_INTERPOLATION" (immutable): enum {Linear} = Linear
            └───"DATA" (atomic): blob = 0

Note that Gamescope is currently using AMD driver-specific color properties implemented by me, Autumn Ashton and Harry Wentland. It doesn't use this KMS Color API, and therefore COLOR_PIPELINE is set to Bypass. Once the API is accepted upstream, all users of the driver-specific API (including Gamescope) should switch to the KMS generic API, as this will be the official plane color management interface of the Linux kernel.

KMS Color API on Intel

On the Intel side, the driver implementation available upstream was built upon an earlier iteration of the API. This meant I had to apply a few tweaks to bring it in line with the latest specifications. You can explore their latest work here. For a more simplified handling, combining the V9 of the Linux Color API, Intel's contributions, and my necessary adjustments, check out my dedicated branch.

I then compiled a kernel from this integrated branch and deployed it on a system featuring Intel TigerLake GT2 graphics. Running my custom drm_info revealed a Plane Color Pipeline with three color operations as follows:

├───"COLOR_PIPELINE" (atomic): enum {Bypass, Color Pipeline 480} = Bypass
│   ├───Bypass
│   └───Color Pipeline 480
│       ├───Color Operation 480
│       │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 1D LUT Mult Seg
│       │   ├───"BYPASS" (atomic): range [0, 1] = 1
│       │   ├───"HW_CAPS" (atomic, immutable): blob = 484
│       │   └───"DATA" (atomic): blob = 0
│       ├───Color Operation 487
│       │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 3x3 Matrix
│       │   ├───"BYPASS" (atomic): range [0, 1] = 1
│       │   └───"DATA" (atomic): blob = 0
│       └───Color Operation 492
│           ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 1D LUT Mult Seg
│           ├───"BYPASS" (atomic): range [0, 1] = 1
│           ├───"HW_CAPS" (atomic, immutable): blob = 496
│           └───"DATA" (atomic): blob = 0

Observe that Intel's approach introduces additional properties like "HW_CAPS" at the color operation level, along with two new color operation types: 1D LUT with Multiple Segments and 3x3 Matrix. It's important to remember that this implementation is based on an earlier stage of the KMS Color API and is awaiting review.

A Shout-Out to Those Who Made This Happen

I'm impressed by the solid implementation and clear direction of the V9 of the KMS Color API. It aligns with the many insightful discussions we've had over the past years. A huge thank you to Harry Wentland and Alex Hung for their dedication in bringing this to fruition!

Beyond their efforts, I deeply appreciate Uma and Chaitanya's commitment to updating Intel's driver implementation to align with the freshest version of the KMS Color API. The collaborative spirit of the AMD and Intel developers in sharing their color pipeline work upstream is invaluable. We're now gaining a much clearer picture of the color capabilities embedded in modern display hardware, all thanks to their hard work, comprehensive documentation, and engaging discussions.

Finally, thanks to all the userspace developers, color science experts, and kernel developers from various vendors who actively participated in the upstream discussions, meetings, workshops, each iteration of this API, and the crucial code review process. I'm happy to be part of the final stages of this long kernel journey, but I know that, when it comes to colors, each completed step just unlocks new challenges.

Looking forward to meeting you at this year's Linux Display Next hackfest, organized by AMD in Toronto, to further discuss HDR, advanced color management, and other display trends.

19 May 2025 9:05pm GMT

14 May 2025


Simon Ser: Status update, May 2025

Hi!

Today wlroots 0.19.0 has finally been released! Among the newly supported protocols, color-management-v1 lays the first stone of HDR support (backend and renderer bits are still being reviewed) and ext-image-copy-capture-v1 enhances the previous screen capture protocol with better performance. Explicit synchronization is now fully supported, and display-only devices such as gud or DisplayLink can now be used with wlroots. See the release notes for more details! I hope I'll be able to go back to some feature work and reviews now that the release is out of the way.

In other graphics news, I've finished my review of the core DRM patches for the new KMS color pipeline. Other kernel folks have reviewed the patches, we're just waiting on a user-space implementation now (which various compositor folks are working on). I've started a discussion about libliftoff support.

In addition to wlroots, this month I've also released a new version of my mobile IRC client, Goguma 0.8.0. delthas has sent a patch to synchronize pinned and muted conversations across devices via soju. Thanks to pounce, Goguma now supports message reactions (not included in the release):

[Screenshots: a conversation with a reaction to a message; the message menu with quick reaction buttons; the emoji picker; a detailed list of reactions to a message]

My extended-isupport IRCv3 specification has been accepted. It allows servers to advertise metadata such as the maximum nickname length or IRC network name early (before the user provides a nickname and authentication details), which is useful for building nice connection UIs. I've posted another proposal for IRC network icons.

go-smtp 0.22.0 has been released with an improved DATA command API, RRVS support (Require Recipient Valid Since), and custom hello after reset or STARTTLS. I've also spent quite a bit of time reaching out to companies for XDC 2025 sponsorships.

See you next month!

14 May 2025 10:00pm GMT

12 May 2025


Tomeu Vizoso: Rockchip NPU update 5: Progress on the kernel driver

It has been almost a year since my last update on the Rockchip NPU, and though I'm a bit sad that I haven't had more time to work on it, I'm happy that I found some time earlier this year for this.

Quoting from my last update on the Rockchip NPU driver:

The kernel driver is able to fully use the three cores in the NPU, giving us the possibility of running 4 simultaneous object detection inferences such as the one below on a stream, at almost 30 frames per second.


All feedback has been incorporated in a new revision of the kernel driver and it was submitted to the Linux kernel mailing list.

Though I'm very happy with the direction the kernel driver is taking, I would have liked to make faster progress on it. I have spent the time since the first revision on making the Etnaviv NPU driver ready to be deployed in production (will be blogging about this soon), and also had to take some non-upstream work to pay my bills.

Next I plan to cleanup the userspace driver so it's ready for review, and then I will go for a third revision of the kernel driver.

12 May 2025 5:30am GMT

22 Apr 2025


Melissa Wen: 2025 FOSDEM: Don't let your motivation go, save time with kworkflow

2025 was my first year at FOSDEM, and I can say it was an incredible experience where I met many colleagues from Igalia who live around the world, as well as many friends from the Linux display stack who are part of my daily work and contributions to DRM/KMS. In addition, I met new faces and recognized others with whom I had interacted in online forums, and we had good, long conversations.

During FOSDEM 2025, I had the opportunity to present kworkflow in the kernel devroom. Kworkflow is a set of tools that help kernel developers with their routine tasks, and it is the tool I use for my own development; in short, every contribution I make to the Linux kernel is assisted by kworkflow.

The goal of my presentation was to spread the word about kworkflow. I aimed to show how the suite consolidates good practices and recommendations of the kernel workflow into short commands. These commands are easy to configure and memorize for your current work setup, or for multiple setups.

For me, Kworkflow is a tool that accommodates the needs of different agents in the Linux kernel community. Active developers and maintainers are the main target audience for kworkflow, but it is also inviting for users and user-space developers who just want to report a problem and validate a solution without needing to know every detail of the kernel development workflow.

Something I didn't emphasize during the presentation, but would like to correct here, is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. To be honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.

So, the video and slide deck of my FOSDEM presentation are available for download here.

And, as usual, you will find in this blog post the script of the presentation and a more detailed explanation of the demo presented there.


Kworkflow at FOSDEM 2025: Speaker Notes and Demo

Hi, I'm Melissa, a GPU kernel driver developer at Igalia and today I'll be giving a very inclusive talk to not let your motivation go by saving time with kworkflow.

So, you're a kernel developer, or you want to be a kernel developer, or you don't want to be a kernel developer. But you're all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.

And that's a given change for a given distribution, or for a given device, or for a given subsystem…

Look to this diagram and try to figure out the number of subsystems and related work trees you can handle in the kernel.

So, whether you are a kernel developer or not, at some point you may come across this type of situation:

There is a userspace developer who wants to report a kernel issue and says:

But the userspace developer has never compiled and installed a custom kernel before. So they have to read a lot of tutorials and kernel documentation to create a kernel compilation and deployment script. Finally, the reporter managed to compile and deploy a custom kernel and reports:

And then the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they just create a new script - effectively the same script the reporter already wrote.

What's the problem of this situation? The problem is that you keep creating new scripts!

Every time you change distribution, architecture, hardware, or project - even within the same company, the development setup may change when you switch to a different project - you create another script for your new kernel development workflow!

You know, you have a lot of babies, you have a collection of "my precious scripts", like Sméagol (Lord of the Rings) with the precious ring.

Instead of creating and accumulating scripts, save yourself time with kworkflow. Here is a typical script that many of you may have. This is a Raspberry Pi 4 script and contains everything you need to memorize to compile and deploy a kernel on your Raspberry Pi 4.

With kworkflow, you only need to memorize two commands, and those commands are not specific to the Raspberry Pi. They are the same commands for different architectures, kernel configurations, and target devices.

What is kworkflow?

Kworkflow is a collection of tools and software combined to:

I don't know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.

What are the main features of kworkflow?

There are many, but these are the most important for me:

This is the list of commands you can run with kworkflow. The first subset is to configure your tool for various situations you may face in your daily tasks.

# Manage kw and kw configurations
kw init             - Initialize kw config file
kw self-update (u)  - Update kw
kw config (g)       - Manage kw configurations

The second subset is to build and deploy custom kernels.

# Build & Deploy custom kernels
kw kernel-config-manager (k) - Manage kernel .config files
kw build (b)        - Build kernel
kw deploy (d)       - Deploy kernel image (local/remote)
kw bd               - Build and deploy kernel

We have some tools to manage and interact with target machines.

# Manage and interact with target machines
kw ssh (s)          - SSH support
kw remote (r)       - Manage machines available via ssh
kw vm               - QEMU support

To inspect and debug a kernel.

# Inspect and debug
kw device           - Show basic hardware information
kw explore (e)      - Explore string patterns in the work tree and git logs
kw debug            - Linux kernel debug utilities
kw drm              - Set of commands to work with DRM drivers

To automate best practices for patch submission, like code style, maintainers, and the correct list of recipients and mailing lists for a change, ensuring we send the patch to those who are interested in it.

# Automate best practices for patch submission
kw codestyle (c)    - Check code style
kw maintainers (m)  - Get maintainers/mailing list
kw send-patch       - Send patches via email

And the last one, the upcoming patch hub.

# Upcoming
kw patch-hub        - Interact with patches (lore.kernel.org)

How can you save time with Kworkflow?

So how can you save time building and deploying a custom kernel?

First, you need a .config file.

Then you want to build the kernel:

Finally, to deploy the kernel in a target machine.
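Putting those three steps together with the commands listed earlier (flags as used in the demo below), the whole flow collapses to:

```shell
kw k           # manage the kernel .config files for this work tree
kw b           # build the kernel
kw d --reboot  # deploy to the configured target machine and reboot
```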

You can also save time on debugging kernels locally or remotely.

You can save time on managing multiple kernel images in the same work tree.

Finally, you can save time when submitting kernel patches. In kworkflow, you can find everything you need to wrap your changes in patch format and submit them to the right list of recipients, those who can review, comment on, and accept your changes.

This is a demo that the lead developer of the kw patch-hub feature sent me. With this feature, you will be able to check out a series from a specific mailing list, bookmark those patches for validation, and, when you are satisfied with the proposed changes, automatically submit a Reviewed-by for the whole series to the mailing list.


Demo

Now, a demo of how to use kw env to deal with different devices, architectures and distributions in the same work tree without losing compiled files, build and deploy settings, the .config file, remote access configuration, and other settings specific to the three devices I have.

Setup

Demo script

In the same terminal and worktree.

First target device: Laptop (debian|x86|intel|local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the local machine (laptop) environment: loads its pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: Raspberry Pi 4 (raspbian|arm64|broadcom|remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system not to load the graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos|x86|amd|remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # build and deploy the custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after building and deploying the kernel change

Q&A

Most of the questions raised at the end of the presentation were actually suggestions of new features for kworkflow.

The first participant, who is also a kernel maintainer, asked about two features: (1) automating the retrieval of patches from Patchwork (or lore) and triggering the existing build, deploy and validation workflow on them; (2) bisecting support. Both are very interesting features. The first fits well with the patch-hub subproject, which is under development, and I had actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.

Another request was to use kexec to avoid rebooting for testing. Reviewing my presentation, I realized I wasn't very clear that kworkflow doesn't support kexec. As I replied, what it does is install the modules, which you can load/unload for validation; for built-in parts, you need to reboot into the new kernel.

Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about support for alternative ways of booting when the custom kernel ends up broken and you only have one kernel image on the device. Kworkflow doesn't manage this yet, but I agree it would be a very useful feature for embedded devices. On the Raspberry Pi 4, kworkflow mitigates the issue by preserving the distro kernel image and using the config.txt file to select a custom kernel for booting. There is no ADB support either, and since I don't currently see kw users working on Android, I don't think we will have it any time soon, unless we find new volunteers and grow the pool of contributors.

The last two questions were about the status of the b4 integration, which is under development, and other debugging features the tool doesn't support yet.

Finally, when Andrea and I were swapping turns on the stage, he suggested adding support for virtme-ng to kworkflow, so I opened an issue to track this feature request in the project's GitHub.

With all these questions and requests, I could see a general need for a tool that integrates the variety of kernel developer workflows, as kworkflow proposes. There are also still many cases left for kworkflow to cover.

Despite the high demand, this is a completely voluntary project, and it is unlikely that we will be able to meet all these needs with our limited resources. We will keep doing our best, in the hope of growing the pool of users and contributors too.

22 Apr 2025 7:30pm GMT

16 Apr 2025


Simon Ser: Status update, April 2025

Hi!

Last week wlroots 0.19.0-rc1 was released! It includes the new color management protocol; however, it doesn't include HDR10 support because the renderer and backend bits haven't been merged yet. Also worth noting are full explicit synchronization support and the new screen capture protocols. I plan to publish new release candidates weekly until we're happy with the stability. Please test!

Sway is also getting close to its first release candidate. I plan to publish version 1.11.0-rc1 this weekend. Thanks to Ferdinand Bachmann, Sway no longer aborts on shutdown due to dangling signal listeners. I've also updated my HDR10 patch to add an output hdr command (but that's Sway 1.12 material).

I've spent a bit of time on libicc, my C library to manipulate ICC profiles. I've introduced an encoder to make it easy to write new ICC profiles, and used that to write a small program to create an ICC profile which inverts colors. The encoder doesn't support as many ICC elements as the decoder yet (patches welcome!), but does support many interesting bits for display profiles: basic matrices and curves, lut16Type elements and more advanced lutAToBType elements. New APIs have been introduced to apply ICC profile transforms to a color value. I've also added tests which compare the results given by libicc and by LittleCMS. For some reason lut16Type and lutAToBType results are multiplied by 2 by LittleCMS; I haven't yet understood why that is, even after reading the spec in depth and staring at LittleCMS source code for a few hours (if you have a guess, please ping me). In the future I'd like to add a small tool to convert ICC profiles to and from JSON files to make it easy to create new files or adjust existing ones.

Version 0.9.0 of the soju IRC bouncer has been released. Among the most notable changes, the database is used by default to store messages, pinned/muted channels and buffers can be synchronized across devices, and database queries have been optimized. I've continued working on the Goguma mobile IRC client, fixing a few bugs such as dangling Firebase push subscriptions and message notifications being dismissed too eagerly.

Max Ehrlich has contributed a mako patch to introduce a Notifications property to the mako-specific D-Bus API, so that external programs can monitor active notifications (e.g. display a count in a status bar, or display a list on a lockscreen).
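As a sketch of how an external program might read such a property: the bus name and object path below are the ones mako already exposes, but the exact shape of the new Notifications property is an assumption based on the description above.

```shell
# mako owns the org.freedesktop.Notifications bus name; its own
# interface lives at /fr/emersion/Mako (fr.emersion.Mako).
# Hypothetical: read the new Notifications property on that interface.
busctl --user get-property org.freedesktop.Notifications \
    /fr/emersion/Mako fr.emersion.Mako Notifications
```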

That's all I have in store, see you next month!

16 Apr 2025 10:00pm GMT

Mike Blumenkrantz: Another Milestone

It's CLover.

16 Apr 2025 12:00am GMT