01 Jan 2026
Timur Kristóf: A love song for Linux gamers with old GPUs (EOY 2025)
AMD GPUs are famous for working very well on Linux. However, what about the very first GCN GPUs? Are they working as well as the new ones? In this post, I'm going to summarize how well these old GPUs are supported and what I've been doing to improve them.
This story is about the first two generations of GCN: Southern Islands (aka. SI, GCN1, GFX6) and Sea Islands (aka. CIK, GCN2, GFX7).
Working on old GPUs
While AMD GPUs generally have a good reputation on Linux, these old GCN graphics cards have been a sore spot for as long as I've been working on the driver stack.
It occurred to me that resolving some of the long-standing issues on these old GPUs might be a great way to get me started on working on the amdgpu kernel driver and would help improve the default user experience of Linux users on these GPUs. I figured that it would give me a good base understanding, and later I could also start contributing code and bug fixes to newer GPUs.
Where I started
The RADV team has supported RADV on SI and CIK GPUs for a long time. RADV support was already there even before I joined the team in mid-2019. Daniel added ACO support for GFX7 in November 2019, and Samuel added ACO support for GFX6 in January 2020. More recently, Martin added a Tahiti (GFX6) and Hawaii (GFX7) GPU to the Mesa CI which are running post-merge "nightly" jobs. So we can catch regressions and test our work on these GPUs quite quickly.
The kernel driver situation was less fortunate.
On the kernel side, amdgpu (the newer kernel driver) has supported CIK since June 2015 and SI since August 2016. DC (the new display driver) has supported CIK since September 2017 (the beginning), and SI support was added in July 2020 by Mauro. However, the old radeon driver was the default driver. Unfortunately, radeon doesn't support Vulkan, so in the default user experience, users couldn't play most games or benefit from any of the Linux gaming related work we've been doing for the last 10 years.
In order to get working Vulkan support on SI and CIK, we needed to use the following kernel params:
radeon.si_support=0 radeon.cik_support=0 amdgpu.si_support=1 amdgpu.cik_support=1
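As a rough sketch of how that looked in practice on a GRUB-based distro (the file location, variable name and regeneration command vary by distribution, so treat the specifics below as assumptions):
# /etc/default/grub: append to the existing kernel command line variable
# (some distros use GRUB_CMDLINE_LINUX_DEFAULT instead)
GRUB_CMDLINE_LINUX="... radeon.si_support=0 radeon.cik_support=0 amdgpu.si_support=1 amdgpu.cik_support=1"
# regenerate the bootloader config and reboot (path differs per distro)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # or: sudo update-grub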
Then, you could boot with amdgpu and enjoy a semblance of a good user experience until the GPU crashed / hung, or until you tried to use some functionality which was missing from amdgpu, or until you plugged in a display which the display driver couldn't handle.
It was… not the best user experience.
Where to go from there?
The first question that came to mind was: why wasn't amdgpu the default kernel driver for these GPUs? Since the "experimental" support had been there for 10 years, we had thought the kernel devs would eventually just enable amdgpu by default, but that never happened. At XDC 2024 I met Alex Deucher, the lead developer of amdgpu, and asked him what was missing. Alex explained to me that the main reason the default wasn't switched was to avoid regressions for users who rely on some functionality not supported by amdgpu:
- Display features: analog connectors in DC (or DP/HDMI audio support in non-DC)
- VCE1 for video coding on SI
It doesn't seem like much, does it? How hard can it be?
Display features
On a 2025 summer afternoon…
I messaged Alex Deucher to get some advice on where to start. Alex was very considerate and helped me get a good understanding of how the code is organized, how the parts fit together and where I should start reading. Harry Wentland also helped a lot with making a plan for how to fit analog connectors into DC. Then I plugged my monitors into my Raphael iGPU to use it as the primary GPU, added an old Oland card as a secondary GPU, and started hacking.
Focus
For the display, I decided that the best way forward is to add what is missing from DC for these GPUs and use DC by default. That way, we can eventually get rid of the legacy display code (which was always meant as a temporary solution until DC landed).
Additionally, I decided to focus on dedicated GPUs because these are the most useful for gaming and are easy to test using a desktop computer. There is still work left to do for CIK APUs.
Analog connector support in DC
Analog connectors were actually quite easy to deal with, once I understood the structure of the DC (display core) codebase. I could use the legacy display code as a reference. The DAC (digital-to-analog converter) is actually programmed by the VBIOS; the driver just needs to call the VBIOS to tell it what to do. Easier said than done, but not too hard.
It also turned out that some chips that already defaulted to DC (eg. Tonga, Hawaii) also have analog connectors, which apparently just didn't work on Linux by default. I managed to submit the first version of this in July. Then I was sidetracked with a lot of other issues, so I submitted the second version of the series in September, which then got merged.
Going shopping
It is incredibly difficult to debug issues when you don't have the hardware to reproduce them yourself. Some developers have a good talent for writing patches to fix issues without actually seeing the issue, but I feel I still have a long way to go to be that good. It was pretty clear from the beginning that the only way to make sure my work actually works on all SI/CIK GPUs is to test all of them myself.
So, I went ahead and acquired at least one of each SI and CIK chip. I got most of them from used hardware ad sites, and Leonardo Frassetto sent me a few as well.
Fixing DC support on SI (DCE6)
After I got the analog connector working using the old GPUs as secondary GPUs, I thought it was time to test how well they work as a primary GPU. You know, the way most actual users would use them. So I disabled the iGPU and booted my computer with each dGPU with amdgpu.dc=1 to see what happens. This is where things started going crazy…
- Tahiti (R9 280X) booted into "no signal", absolutely dead
- Oland (R7 250) had heavy flickering and hung very quickly
- Oland (Radeon 520) booted into "unsupported signal" with DC and massive flickering with non-DC
- Pitcairn (R9 270X) had some artifacts
- Cape Verde (HD 7770) I didn't even plug it in at this point…
- Hainan fortunately doesn't have DCE (display controller engine) so that wasn't a problem
The way to debug these problems is the following:
- Boot with amdgpu.dc=0 and dump all DCE registers using umr: umr -r oland.dce600..* > oland_nodc_good.txt
- Boot with amdgpu.dc=1 and dump all DCE registers using umr: umr -r oland.dce600..* > oland_dc_bad.txt
- Compare the two DCE register dumps using a diff viewer, eg. Meld. Try to find which register differences are responsible for the bad behaviour.
- Use umr (either the GUI or CLI) to try to change the registers in real time, poke at it until the issue is gone.
- Wait until headache is gone.
- Read the code that sets the problematic registers and develop an actual fix.
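Condensed into a shell sketch (using the Oland register paths from the list above; the block name differs per ASIC):
# boot with amdgpu.dc=0, dump the "known good" DCE register state
umr -r oland.dce600..* > oland_nodc_good.txt
# reboot with amdgpu.dc=1, dump the broken state
umr -r oland.dce600..* > oland_dc_bad.txt
# compare the two dumps with any diff viewer
meld oland_nodc_good.txt oland_dc_bad.txt
diff -u oland_nodc_good.txt oland_dc_bad.txt | less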
I decided to fix the bugs before adding new features. I sent a few patch series to address a bunch of display issues mainly with SI (DCE6):
- Fixed broken PLL programming and some mistakes
- Fixed DC "overclocking" the display clock and a few other issues ― many years ago someone fixed an issue on Polaris by unconditionally raising the display clock by 15%, but unfortunately they also applied this to older GPUs; additionally the display clock was already set higher than the max
- Fixed DVI-D/HDMI adapters ― these would just give a black screen when I plugged in a 4K monitor
- While at it, I also fixed them in the legacy code
- Fixed a freeze caused by relying on a non-existent interrupt ― it seems that DCE6 is not capable of VRR, so we just shouldn't try to enable it or rely on interrupts that don't exist on this HW
- Fixed another black screen by rejecting too high pixel clocks ― technically, DP supports the bandwidth required by 4K 120Hz using 6-bit color with YUV420 on SI/CIK/VI, so DC would happily advertise this mode, but the GPUs didn't actually support a high enough display clock for 4K 120Hz
- Fixed an issue with the DCE6 scaler not being properly disabled ― apparently the VBIOS sets up the scaler automatically on some GPUs, which needs to be disabled by the kernel, otherwise you get weird artifacts
DisplayPort/HDMI audio support on SI (DCE6)
I noticed that HDMI audio worked alright on all GPUs with DC (as expected), but DP audio didn't (which was unexpected). Strangely, it did work when both DP and HDMI were plugged in… After consulting with Alex and doing some trial and error, it turned out that this was just due to a clock frequency being set the wrong way: Fix DP audio DTO1 clock source on DCE6.
In order to figure out the correct frequencies, I wrote a script that set the frequency using umr and then played a sound. I just sat back and let the script run until I heard the sound. Then it was just a matter of figuring out why that frequency was the correct one.
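For illustration, such a sweep script could be as simple as the sketch below (the register name and the umr write syntax here are assumptions, not the actual register I was poking):
#!/bin/sh
# Try a list of candidate frequencies: program each one, then play a short
# test tone so that the working value can be identified by ear.
for freq in 24000 25000 27000 48000 100000; do
    echo "trying $freq"
    umr -w oland.dce600.SOME_DTO_REGISTER "$freq"   # hypothetical register; exact write syntax assumed
    speaker-test -t sine -l 1 >/dev/null 2>&1       # short ALSA test tone
    sleep 2
done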
A small fun fact: it turns out that DP audio on Tahiti didn't work on any Linux driver before. Now it works with DC.
Poweeeeer
The DCE (Display Controller Engine), just like other parts of the GPU, has its own power requirements and needs certain clocks, voltages, etc. It is the responsibility of the power management code to make sure DCE gets the power it needs. Unfortunately, DC didn't talk to the legacy power management code. Even more unfortunately, the power management code was buggy so that's what I started with.
- SI power management fixes ― contains some fixes to get Tahiti to boot with DC, also it turns out that the SMC (system management controller) needs a longer timeout to complete some tasks
- More SI power management and PLL fixes ― most importantly this disables ASPM on SI, which caused "random hangs"
- Finally, this series hooks up SI to the legacy DPM code ― this is required because the power management code needs to know how many and what kind of displays are connected
After I was mostly done with SI, I also fixed an issue with CIK, where the shutdown temperature was incorrectly reported.
VCE1 video encoding on SI
Video encoding is usually an afterthought, not something that most users think about unless they are interested in streaming or video transcoding. It was definitely an afterthought for the hardware designers of SI, which has the first generation VCE (video coding engine) that only supports H264 and only up to 2048 x 1152. However, the old radeon kernel driver supports this engine and for anyone relying on this functionality, it would be a regression when switching to amdgpu. So we need to support it. There was already some work by Alexandre Demers to support VCE1, but that work was stalled due to issues caused by the firmware validation mechanism.
In order to switch SI to amdgpu by default, I needed to deal with VCE1. So I started a conversation with Christian König (amdgpu expert) to identify what the problem actually was, and with Alexandre to see how far along his work was.
- It turns out that due to some HW/FW limitations, the firmware (VCPU BO) needs to be mapped at a low 32-bit address.
- It also needs to be in VRAM for optimal performance (otherwise it would incur too many roundtrips to system RAM).
- However, there was no way for amdgpu to place something in VRAM and map it in the low 32-bit address space. (Note, it can actually do this for the address space of userspace apps, just not inside the kernel itself.)
Christian helped me a lot with understanding how the memory controller and the page table work.
After I got over the headache, I came up with this idea:
- Let amdgpu place the VCPU BO in VRAM
- Map the GART (graphics address remapping table) in the low 32-bit address space (instead of using best fit)
- Insert a few page table entries in the GART which would practically map the VCPU BO into the low 32-bit address space
With that out of the way, the rest of the work on VCE1 was pretty straightforward. I could use Alexandre's research, as well as the VCE2 code from amdgpu and the VCE1 code from radeon as a reference. Finally, a few reviews and three revisions later, the VCE1 series was accepted.
Final thoughts
Who is this for?
In the current economic situation of our world, I expect that people are going to use GPUs for much longer, and replace them less often. And when an old GPU is replaced, it doesn't die, it goes to somebody who upgrades an even older GPU. Eventually it will reach somebody that can't afford a better one. There are some efforts to use Linux to keep old computers alive, for example this one. My goal with this work is to make Linux gaming a good experience also for those who use old GPUs.
Other than that, I also did it for myself. Not because I want to run old GPUs myself, but because it has been a great learning experience to get into the amdgpu kernel driver.
Why amdgpu? Why DC?
The open source community including AMD themselves as well as other entities like Valve, Igalia, Red Hat etc. have invested a lot of time and effort into amdgpu and DC, which now support many generations of AMD GPUs: GCN1-5, RDNA1-4, as well as CDNA. In fact amdgpu supports more generations of GPUs now than what came before, and it looks like it will support many generations of future GPUs.
By making amdgpu work well with SI and CIK, we ensure that these GPUs remain competently supported for the foreseeable future.
By switching SI and CIK to use DC by default, we enable display features like atomic modesetting, VRR, HDR, etc. and this also allows the amdgpu maintainers to eventually move on from the legacy display code without losing functionality.
What is left to do?
Now that amdgpu is at feature parity with radeon on old GPUs, we switched the default to amdgpu on SI and CIK dedicated GPUs. It's time to start thinking about what else is left to do.
- Add support for DRM format modifiers for all SI/CIK/VI/Polaris GPUs. This would be a huge step forward for the Vulkan ecosystem, it would enable using purely Vulkan based compositors, Zink, and other features.
- Add support for TRAVIS and NUTMEG display bridges, so that we can also switch to amdgpu by default for CIK APUs. I couldn't find the hardware for this work (mainly looking for Kaveri APUs); if you have it and want to help, please reach out. Your dmesg log will mention whether the APU uses TRAVIS or NUTMEG.
- Refactor SI and KV power management so that we can retire the legacy power management code, which would further ease the maintenance burden of these GPUs.
- Eventually retire the non-DC legacy display code from amdgpu to ease the maintenance burden.
- Deal with a few lingering bugs, such as power limit on Radeon 430, black screen with the analog connector on Radeon HD 7790, as well as reenable compute queues on SI, mitigate VM faults on SI/CIK, etc.
- Verify sparse mapping (PRT) support. I already wrote a kernel fix and a Mesa MR for enabling it.
- Implement transfer queue support for old GPUs in RADV.
What have I learned from all this?
It isn't that scary
Kernel development is not as scary as it looks. It is a different technical challenge than what I was used to, but not in a bad way. I just needed to figure out a good workflow: how to configure a code editor, and a good way to test my work without rebuilding everything all the time.
Maintainers are friendly
AMD engineers have been very friendly and helpful to me all the way. Although there are a lot of memes and articles on the internet about attitude and rude/toxic messages by some kernel developers, I didn't see that in amdgpu at least.
My approach was that even before I wrote a single line of code, I started talking to the maintainers (who would eventually review my patches) to find out what would be the good solution to them and how to get my work accepted. Communicating with the maintainers saved a lot of time and made the work faster, more pleasant and more collaborative.
Development latency
Sadly, there is a huge latency between a Linux kernel developer working on something and the work reaching end users. Even if the patches are accepted quickly, it can take 3~6 months until users can actually use it.
- In Mesa, we can merge any features to the next Mesa release up to the branch point, and afterwards we backport bug fixes to that release. Simple and efficient.
- In the Linux kernel, there is a deadline for merging new features into the next release (and it's not clearly communicated when that is). Bug fixes may be backported to previous releases at any time, but upstream maintainers aren't involved in that.
In hindsight, if I had focused on finishing the analog support and VCE1 first (instead of fixing all the bugs I found), my work would have ended up in Linux 6.18 (including the bug fixes, as there is no deadline for those). Due to how I prioritized bug fixing, the features I've developed are only included in Linux 6.19, so that will be the version where SI and CIK default to amdgpu.
XDC 2025
I presented a lightning talk on this topic at XDC 2025, where I talked about the state of SI and CIK support as of September 2025. You can find the slide deck here and the video here.
Acknowledgements
I'd like to say a big thank you to all of these people. All of the work I mentioned in this post would not have been possible without them.
- Alex Deucher, Christian König (amdgpu devs)
- Harry Wentland, Rodrigo Siqueira (DC devs)
- Marek Olsák, Pierre-Eric Pelloux-Prayer (radeonsi devs)
- Bas Nieuwenhuizen, Samuel Pitoiset (radv devs)
- Martin Roukala, né Peres (CI expert)
- Tom St Denis (umr dev)
- Mauro Rossi (DCE6 in DC)
- Alexandre Demers (VCE1 research)
- Leonardo Frassetto (HW donation)
- Roman Elshin and others (testing)
- Pierre-Loup Griffais (Valve)
Which graphics cards are affected exactly?
When in doubt, consult Wikipedia.
GFX6 aka. GCN1 - Southern Islands (SI) dedicated GPUs: amdgpu is now the default kernel driver as of Linux 6.19. The DC display driver is now the default and usable for these GPUs. DC now supports analog connectors, power management is less buggy, and video encoding is now supported by amdgpu.
- Tahiti
- Radeon HD 7870 XT, 7950, 7970, 7990, 8950, 8970, 8990
- Radeon R9 280, 280X
- FirePro W8000, W9000, D500, D700, S9000, S9050, S10000
- Radeon Sky 700, 900
- Pitcairn
- Radeon HD 7850, 7870, 7970M, 8870, 8970M
- Radeon R9 265, 270, 270X, 370, 370X, M290X, M390
- FirePro W5000, W7000, D300, R5000, S7000
- Cape Verde
- Radeon HD 7730, 7750, 7770, 8730, 8760
- Radeon R7 250E, 250X, 350, 450
- FirePro W600, W4100, M4000, M6000
- Oland
- Radeon HD 8570, 8670
- Radeon R5 240, 250, 330, 340, 350, 430, 520, 610
- FirePro W2100
- various mobile GPUs
- Hainan
- various mobile GPUs
GFX7 aka. GCN2 - Sea Islands (CIK) dedicated GPUs: amdgpu is now the default kernel driver as of Linux 6.19. The DC display driver is now the default for Bonaire (was already the case for Hawaii). DC now supports analog connectors. Minor bug fixes.
- Hawaii
- Radeon R9 290, 290X, 295X2, 390, 390X
- FirePro W8100, W9100, S9100, S9150, S9170
- Bonaire
- Radeon HD 7790/8870
- Radeon R7 260/360/450
- Radeon RX 455, FirePro W5100, etc.
- various mobile GPUs
GFX8 aka. GCN3 - Volcanic Islands (VI) dedicated GPUs: DC now supports analog connectors.
(Note that amdgpu and DC were already supported on these GPUs since release.)
- Tonga
- Radeon R9 285, 380, 380X
- (other chips of this family are not affected by the work in this post)
01 Jan 2026 12:00am GMT
30 Dec 2025
Lennart Poettering: Mastodon Stories for systemd v259
On Dec 17 we released systemd v259 into the wild.
In the weeks leading up to that release (and since then) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd259 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 25 posts:
- Post #1: systemd-resolved Hooks
- Post #2: dlopen() everything
- Post #3: systemd-analyze dlopen-metadata
- Post #4: run0 --empower
- Post #5: systemd-vmspawn --bind-user=
- Post #6: Musl libc support
- Post #7: systemd-repart without device name
- Post #8: Parallel kmod loading in systemd-modules-load.service
- Post #9: NvPCR Support
- Post #10: systemd-analyze nvcpcrs
- Post #11: systemd-repart Varlink IPC API
- Post #12: systemd-vmspawn block device serial
- Post #13: systemd-repart --defer-partitions-empty= + --defer-partitions-factory-reset=
- Post #14: userdb support for UUID queries
- Post #15: Wallclock time in service completion logging
- Post #16: systemd-firstboot --prompt-keymap-auto
- Post #17: $LISTEN_PIDFDID
- Post #18: Incremental partition rescanning
- Post #19: ExecReloadPost=
- Post #20: Transaction order cycle tracking
- Post #21: systemd-firstboot facelift
- Post #22: Per-User systemd-machined + systemd-importd
- Post #23: systemd-udevd's OPTIONS="dump-json"
- Post #24: systemd-resolved's DumpDNSConfiguration() IPC Call
- Post #25: DHCP Server EmitDomain= + Domain=
I intend to do a similar series of serieses of posts for the next systemd release (v260), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.
My series for v260 will begin in a few weeks most likely, under the #systemd260 hash tag.
In case you are interested, here is the corresponding blog story for systemd v258, here for v257, and here for v256.
30 Dec 2025 11:00pm GMT
21 Dec 2025
Timur Kristóf: Understanding your Linux open source drivers
After introducing how graphics drivers work in general, I'd like to give a brief overview of what is what in the Linux graphics stack: which parts are important, which key projects the development happens in, and what you need to do to get the best user experience out of it.
The open source Linux graphics driver stack
Please refer to my previous post for a more detailed explanation of graphics drivers in general. This post focuses on how things work in the open source graphics stack on Linux.
Which GPUs are supported?
We have open source drivers for the GPUs from all major manufacturers with varying degrees of success.
- Some companies (eg. AMD, Intel and others) choose to participate in the open source community and develop open source drivers themselves, with contributions also coming from other parties (eg. Valve, Red Hat, Google, etc). If you have an AMD or Intel GPU, their open source drivers typically work better than their alternatives (if any), and have for a long time.
- Others (eg. NVidia and Apple) don't recognize the benefits of this style of development and leave it to the community to create drivers based on reverse-engineering or sometimes minimal help from the manufacturer. If you have an NVidia GPU, as of late 2025 the open source drivers may not be quite ready just yet. In this post I am not going to discuss the proprietary drivers.
What parts do you need?
The components you need in order to get your GPU working on open source drivers on a Linux distro are the following:
- Linux kernel (obviously) ― contains the open source kernel drivers (KMD).
- linux-firmware ― contains the necessary firmware. Note that even when the actual drivers are open source, most (all?) GPUs require closed source firmware to function.
- Mesa ― contains most of the userspace drivers relevant to gaming, as well as a shader compiler stack.
- LLVM ― needed by some Mesa drivers for shader compilation.
- Some vendors have other projects (eg. AMD ROCm) for supporting other features that aren't part of Mesa. These parts are out of scope for this blog post.
To make your GPU work, you need new enough versions of the Linux kernel, linux-firmware and Mesa (and LLVM) that include support for your GPU.
To make your GPU work well, I highly recommend using the latest stable versions of all of the above. If you use old versions, you are choosing not to benefit from the latest developments (features and bug fixes) that open source developers have worked on, and you will have a sub-par experience (especially on relatively new hardware).
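To see what your system is actually running, a few standard tools already answer most of it (vulkaninfo and glxinfo come from the Vulkan tools and mesa-utils packages respectively; package names differ per distro):
uname -r                                     # kernel version
lspci -nnk | grep -A3 -i vga                 # which kernel driver is bound to the GPU
glxinfo -B | grep -i "opengl version"        # Mesa / OpenGL version in use
vulkaninfo --summary | grep -i driverinfo    # Vulkan driver and version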
Wait, aren't the drivers in the kernel?
If you read Reddit posts, you will stumble upon some people who believe that "the drivers are in the kernel" on Linux. This is a half-truth. Only the KMDs are part of the kernel; everything else (linux-firmware, Mesa, LLVM) is distributed in separate packages. How exactly those packages are organized depends on your distribution.
What is the Mesa project?
Mesa is a collection of userspace drivers which implement various different APIs. It is the cornerstone of the open source graphics stack. I'm going to attempt to give a brief overview of the most relevant parts of Mesa.
Gallium
An important part of Mesa is the Gallium driver infrastructure, which contains a lot of common code for implementing different APIs, such as:
- Graphics: OpenGL, OpenGL ES, EGL
- Compute: OpenCL
- Video decoding and encoding: VAAPI (previously also VDPAU)
Vulkan
Mesa also contains a collection of Vulkan drivers. Originally, Vulkan was deemed "lower level than Gallium", so Vulkan drivers are not part of the Gallium driver infrastructure. However, Vulkan has a lot of overlapping functionality with the aforementioned APIs, so Vulkan drivers still share a lot of code with their Gallium counterparts when appropriate.
NIR
Another important part of Mesa is the NIR shader compiler stack, which is at the heart of every Mesa driver that is still being maintained. This enables sharing a lot of compiler code across different drivers. I highly recommend Faith Ekstrand's post In defense of NIR to learn more about it.
Compatibility layers and API translation
Technically they are not drivers, but in practice, if you want to run Windows games, you will need a compatibility layer like Wine or Proton, including graphics translation layers. The recommended ones are:
- DXVK: translates DirectX 8-11 to Vulkan.
- VKD3D-Proton: translates DirectX 12 to Vulkan.
These are the defaults in Proton and offer the best performance. However, for "political" reasons, these are sadly not the defaults in Wine, so you'll either have to use Proton or make sure to install the above in Wine manually.
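For reference, a manual DXVK setup in a plain Wine prefix usually boils down to copying the DXVK DLLs into the prefix and overriding Wine's built-in ones, roughly like the sketch below (paths are illustrative; check the DXVK documentation for the current procedure):
# assuming a 64-bit prefix at ~/.wine and an extracted DXVK release in ./dxvk
cp dxvk/x64/*.dll ~/.wine/drive_c/windows/system32/
# prefer the native (DXVK) DLLs over the built-in WineD3D ones for this run
WINEDLLOVERRIDES="dxgi,d3d9,d3d10core,d3d11=n,b" wine game.exe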
Just for the sake of completeness, I'll also mention the Wine defaults:
- WineD3D: translates DirectX 1-11 to OpenGL. It is what we all used before Vulkan and DXVK existed; sadly, its performance and compatibility have always been lacking.
- VKD3D: translates DirectX 12 to Vulkan. Not practically usable. Can only actually run a select few games, and those with lackluster performance. (Not to be confused with VKD3D-Proton, which is actually fully-featured.)
Side note about window systems
Despite the X server having been abandoned for a long time, there is still a debate among Linux users about whether to use a Wayland compositor or the X server. I'm not going to discuss the advantages and disadvantages of these, because I don't participate in their development and I feel it has already been well-explained by the community.
I'm just going to say that it helps to choose a competent compositor that implements direct scanout. This means that the frames as rendered by your game can be sent directly to the display without the need for the compositor to do any additional processing on it.
In this blog post I focus on just the driver stack, because that is largely shared between both solutions.
Making your games run (well)
Sadly, many Linux distributions choose to ship old versions of the kernel and/or other parts of the driver stack, giving their users a sub-par experience. Debian, Ubuntu LTS and their derivatives like Mint, Pop OS, etc. are all guilty of this. They justify this by claiming that older versions are more reliable, but this is actually not true.
In reality, we driver developers, as well as the developers of the API translation layers, work hard to implement new features that are needed to get new games working, as well as to fix bugs that are exposed by new games (or updates of old games).
Regressions are real, but they are usually quickly fixed thanks to the extensive testing that we do: every time we merge new code, our automated testing system runs the full Vulkan conformance test suite to make sure that all functionality is still intact, thanks to Martin's genius.
21 Dec 2025 11:52pm GMT
Simon Ser: Status update, December 2025
Hi all!
This month the new KMS plane color pipeline API has finally been merged! It took multiple years of continued work and review by engineers from multiple organizations, but at last we managed to push it over the finish line. This new API exposes new hardware blocks to user-space: these apply color transformations before multiple KMS planes are blended into the final composited image sent on the wire. This API unlocks power-efficient and low-latency color management features such as HDR.
Still, much remains to be done. Color pipelines are now exposed on AMD and VKMS; Intel and other vendors are still working on their driver implementations. Melissa Wen has written a drm_info patch to show pipeline information, and some more work is needed to plumb it through drmdb. Some patches have been floated to leverage color pipelines for post-blending transforms too (currently KMS only supports a fixed rudimentary post-blending pipeline with two LUTs and one 3×3 matrix).
On the wlroots side, Félix Poisot has redesigned the way post-blending color transforms are applied by the renderer. The API used to be a mix of descriptive (describing which primaries and transfer functions the output buffer uses) and prescriptive (passing a list of operations to apply). Now it's fully prescriptive, which will help for offloading these transformations to the DRM backend.
GnSight has contributed support for the wlr-foreign-toplevel-management-v1 protocol to the Cage kiosk compositor. This enables better control over windows running inside the compositor: external tools can close or bring windows to the front.
mhorky has added client support for one-way method calls to go-varlink, as well as a nice Registry enhancement to add support for the org.varlink.service interface for free, for discovery and introspection of Varlink services. Now that the module is feature-complete I've released version 0.1.0.
delthas has introduced support for authenticating with the soju IRC bouncer via TLS client certificates. He has contributed a simple audio recorder to the Goguma mobile IRC client, plus new buttons above the reaction list to be able to easily +1 another user's reaction. Hubert Hirtz has sent a collection of bug fixes and has added a button to reveal the password field contents on the connection screen.
I've resurrected work on some old projects I'd almost forgotten about. I've pushed a few patches for libicc, adding support for encoding multi-process transforms, luminance and metadata. I've added a basic test suite to libjsonschema, and improved handling of objects and arrays without enough information to automatically generate types from.
But the old project I've spent most of my time on is go-mls, a Go implementation of the Messaging Layer Security (MLS) protocol. MLS is an end-to-end encryption protocol for chat messages. My goal is twofold: learn how MLS works under the hood (implementing something is one of the best ways for me to understand that something), and lay the groundwork for a future end-to-end encryption IRC extension. This month I've fixed up the remaining failures in the test suite and I've implemented just enough to be able to create a group, add members to it, and exchange an encrypted message. I'll work on remaining group operations (e.g. removing a member) next.
Last, I've migrated FreeDesktop's Mailman 3 installation to PostgreSQL from SQLite. Mailman 3's SQLite integration had pretty severe performance issues, these are gone with PostgreSQL. The migration wasn't straightforward: there is no tooling to migrate Mailman 3 core's data between database engines, so I had to manually fill the new database with the old data. I've migrated two more mailing lists to Mailman 3: fhs and nouveau. I plan to continue the migration in the coming months, and hopefully we'll be able to decommission Mailman 2 in a not-so-distant future.
See you next year!
21 Dec 2025 10:00pm GMT
16 Dec 2025
Timur Kristóf: How do graphics drivers work?
I'd like to give an overview on how graphics drivers work in general, and then write a little bit about the Linux graphics stack for AMD GPUs. The intention of this post is to clear up a bunch of misunderstandings that people have on the internet about open source graphics drivers.
What is a graphics driver?
A graphics driver is a piece of software code that is written for the purpose of allowing programs on your computer to access the features of your GPU. Every GPU is different and may have different capabilities or different ways of achieving things, so they need different drivers, or at least different code paths in a driver that may handle multiple GPUs from the same vendor and/or the same hardware generation.
The main motivation for graphics drivers is to allow applications to utilize your hardware efficiently. This enables games to render pretty pixels, scientific apps to calculate stuff, as well as video apps to encode / decode efficiently.
Organization of graphics drivers
Compared to drivers for other hardware, graphics is very complicated because the functionality is very broad and the differences between each piece of hardware can also be vast.
Here is a simplified explanation on how a graphics driver stack usually works. Note that most of the time, these components (or some variation) are bundled together to make them easier to use.
- GPU firmware (FW) ― low-level code for power management, context switching, command processing, display engine, video encoding/decoding, etc.
- Kernel driver, aka. kernel-mode driver (KMD) ― makes it possible for multiple userspace applications to submit commands to the GPU, and is responsible for memory management and display functionality.
- Userspace driver, aka. user-mode driver (UMD) ― responsible for implementing an API, such as Vulkan, OpenGL, etc. For each piece of hardware, there may be multiple different UMDs implementing different APIs.
- Shader compiler ― a userspace library that compiles shader programs for your GPU from the HW-independent code that applications ship. It can be shared between UMDs and is sometimes developed as a separate project.
I'll give a brief overview of each component below.
GPU firmware
Most GPUs have additional processors (other than the shader cores) which run a firmware that is responsible for operating the low-level details of the hardware, usually stuff that is too low-level even for the kernel.
The firmware on those processors is responsible for power management, context switching, command processing, display, video encoding/decoding, etc. Among other things, it parses the commands submitted to it, launches shaders, distributes work between the shader cores, and so on.
Some GPU manufacturers are moving more and more functionality to firmware, which means that the GPU can operate more autonomously and less intervention is needed by the CPU. This tendency is generally positive for reducing CPU time spent on programming the GPU (as well as "CPU bubbles"), but at the same time it also means that the way the GPU actually works becomes less transparent.
Kernel driver
You might ask, why not implement all driver functionality in the kernel? Wouldn't it be simpler to "just" have everything in the kernel? The answer is no, mainly because there is a LOT going on which nobody wants in the kernel.
- You don't want to have your kernel crash when a game misbehaves. Sadly it can still happen, but it would happen a lot more if the kernel and userspace components weren't separated.
- You definitely don't want to run a fully-fledged compiler inside your kernel which takes arbitrary input from the user.
- You want to avoid having to upgrade your kernel to deploy most fixes and improvements to the graphics stack. (This is not always avoidable but can be minimized.)
So, usually, the KMD is only left with some low-level tasks that every user needs:
- Command submission userspace API: an interface that allows userspace processes to submit commands to the GPU, query information about the GPU, etc.
- Memory management: deciding which process gets to use how much VRAM, defining GTT, handling low-memory situations, etc.
- Display functionality: making display connectors work, by programming the registers of the display controller. There is also a separate uAPI for just this purpose.
- Power management: making sure the GPU doesn't draw too much power when not needed, and also making sure applications can get the best clock speeds etc. when that is needed, in cooperation with the power management firmware.
- GPU recovery: when the GPU hangs or crashes for some reason, it's the kernel's responsibility to ensure that the GPU can be recovered and that the crash doesn't affect other processes.
Userspace driver
Applications interact with userspace drivers instead of the kernel (or the hardware directly). Userspace drivers are compiled as shared libraries and are responsible for implementing one or more specific APIs for graphics, compute or video for a specific family of GPUs. (For example, Vulkan, OpenGL or OpenCL, etc.) Each graphics API has entry points which load the available driver(s) for the GPU(s) in the user's system. The Vulkan loader is an example of this; other APIs have similar components for this purpose.
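On Linux, the Vulkan loader discovers these drivers through small JSON manifests (ICD files). You can peek at what is installed and even point the loader at a specific driver; the sketch below assumes the usual manifest location, and note that older loaders use VK_ICD_FILENAMES instead of VK_DRIVER_FILES:
ls /usr/share/vulkan/icd.d/      # ICD manifests installed by your distro, e.g. radeon_icd.x86_64.json
# run one tool against a specific ICD only
VK_DRIVER_FILES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json vulkaninfo --summary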
The main functionality of a userspace driver is to take the commands from the API (for example, draw calls or compute dispatches) and turn them into low level commands in a binary format that the GPU can understand. In Vulkan, this is analogous to recording a command buffer. Additionally, they utilize a shader compiler to turn a higher level shader language (eg. GLSL) or bytecode (eg. SPIR-V) into hardware instructions which the GPU's shader cores can execute.
Furthermore, userspace drivers also take part in memory management, they basically act as an interface between the memory model of the graphics API and kernel's memory manager.
The userspace driver calls the aforementioned kernel uAPI to submit the recorded commands to the kernel which then schedules it and hands it to the firmware to be executed.
Shader compiler
If you've seen a loading screen in your favourite game which told you it was "compiling shaders…" you probably wondered what that's about and why it's necessary.
Unlike CPUs which have converged to a few common instruction set architectures (ISA), GPUs are a mess and don't share the same ISA, not even between different GPU models from the same manufacturer. Although most modern GPUs have converged to SIMD based architectures, the ISA is still very different between manufacturers and it still changes from generation to generation (sometimes different chips of the same generation have slightly different ISA). GPU makers keep adding new instructions when they identify new ways to implement some features more effectively.
To deal with all that mess, graphics drivers have to do online compilation of shaders (as opposed to offline compilation which usually happens for apps running on your CPU).
This means that shaders have to be recompiled when the userspace graphics driver is updated either because new functionality is available or because bug fixes were added to the driver and/or compiler.
But I only downloaded one driver!
On some systems (especially proprietary operating systems like Windows), GPU manufacturers intend to make users' lives easier by offering all of the above in a single installer package, which is just called "the driver".
Typically such a package includes:
- Firmware files for all hardware that the package supports
- A kernel driver
- Several userspace drivers for various APIs
- A shader compiler (sometimes more) that is used by those userspace drivers
- A "user-friendly" application (ie. a control panel) to present all the functionality to the user
- Various other utilities and libraries (that you may or may not need).
But I didn't download any drivers!
On some systems (typically on open source systems like Linux distributions), usually you can already find a set of packages to handle most common hardware, so you can use most functionality out of the box without needing to install anything manually.
Neat, isn't it?
However, on open source systems, the graphics stack is more transparent, which means that there are many parts that are scattered across different projects, and in some cases there is more than one driver available for the same HW. To end users, it can be very confusing.
However, this doesn't mean that open source drivers are designed worse. It is just that due to their community oriented nature, they are organized differently.
One of the main sources of confusion is that various Linux distributions mix and match different versions of the kernel with different versions of different UMDs which means that users of different distros can get a wildly different user experience based on the choices made for them by the developers of the distro.
Another source of confusion is that we driver developers are really, really bad at naming things, so sometimes different projects end up having the same name, or some projects have nonsensical or outdated names.
The Linux graphics stack
In the next post, I'll continue this story and discuss how the above applies to the open source Linux graphics stack.
16 Dec 2025 12:09am GMT
Hari Rana: Please Fund My Continued Accessibility Work on GNOME!
Hey, I have been under distress lately due to personal circumstances that are outside my control. I cannot find a permanent job that allows me to function, I am not eligible for government benefits, my grant proposals to work on free and open-source projects got rejected, paid internships are quite difficult to find, especially when many of them prioritize new contributors. Essentially, I have no stable, monthly income that allows me to sustain myself.
Nowadays, I mostly volunteer to improve accessibility throughout GNOME apps, either by enhancing the user experience for people with disabilities, or enabling them to use them. I helped make most of GNOME Calendar accessible with a keyboard and screen reader, with additional ongoing effort involving merge requests !564 and !598 to make the month view accessible, all of which is an effort no company has ever contributed to, or would ever contribute to financially. These merge requests require literal thousands of hours for research, development, and testing, enough to sustain me for several years if I were employed.
I would really appreciate any kind of donation, especially ones that happen periodically to increase my monthly income. These donations will allow me to sustain myself while allowing me to work on accessibility throughout GNOME, essentially 'crowdfunding' development without doing it on behalf of the GNOME Foundation or another organization.
16 Dec 2025 12:00am GMT
13 Dec 2025
Sebastian Wick: Flatpak Pre-Installation Approaches
Together with my then-colleague Kalev Lember, I recently added support for pre-installing Flatpak applications. It sounds fancy, but it is conceptually very simple: Flatpak reads configuration files from several directories to determine which applications should be pre-installed. It then installs any missing applications and removes any that are no longer supposed to be pre-installed (with some small caveats).
For example, the following configuration tells Flatpak to install the devel branch of the app org.test.Foo from remotes which serve the collection org.test.Collection, and the app org.test.Bar from any remote:
[Flatpak Preinstall org.test.Foo]
CollectionID=org.test.Collection
Branch=devel
[Flatpak Preinstall org.test.Bar]
By dropping in another configuration file with a higher priority, pre-installation of the app org.test.Foo can be disabled:
[Flatpak Preinstall org.test.Foo]
Install=false
The installation procedure is the same as it is for the flatpak-install command. It supports installing from remotes and from side-load repositories, which is to say from a repository on a filesystem.
This simplicity also means that system integrators are responsible for assembling all the parts into a functioning system, and that there are a number of choices that need to be made for installation and upgrades.
The simplest way to approach this is to just ship a bunch of config files in /usr/share/flatpak/preinstall.d and config files for the remotes from which the apps are available. In the installation procedure, flatpak-preinstall is called and it will download the Flatpaks from the remotes over the network into /var/lib/flatpak. This works just fine, until someone needs one of those apps but doesn't have a suitable network connection.
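Concretely, that approach amounts to a drop-in file shipped with the OS plus a preinstall run in the installation scripts. A minimal sketch, reusing the configuration format shown earlier (the file name is made up, and the exact invocation and flags should be checked against flatpak-preinstall(1)):
# drop-in shipped by the OS / image build
cat > /usr/share/flatpak/preinstall.d/10-vendor.conf <<'EOF'
[Flatpak Preinstall org.test.Foo]
CollectionID=org.test.Collection
EOF
# during installation (or on first boot), let Flatpak reconcile the configuration
flatpak preinstall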
The next way one could approach this is exactly the same way, but with a sideload repository on the installation medium which contains the apps that will get pre-installed. The flatpak-preinstall command needs to be pointed at this repository at install time, and the process which creates the installation medium needs to be adjusted to create this repository. The installation process now works without a network connection. System updates are usually downloaded over the network, just as new pre-installed applications will be.
It is also possible to simply skip flatpak-preinstall, and use flatpak-install to create a Flatpak installation containing the pre-installed apps which get shipped on the installation medium. This installation can then be copied over from the installation medium to /var/lib/flatpak in the installation process. It unfortunately also makes the installation process less flexible because it becomes impossible to dynamically build the configuration.
On modern, image-based operating systems, it might be tempting to just ship this Flatpak installation on the image because the flexibility is usually neither required nor wanted. This currently does not work for the simple reason that the default system installation is in /var/lib/flatpak, which is not under /usr, the mount point of the image. If the default system installation were in the image, it would be read-only because the image is read-only. This means we could not update or install anything new into the system installation. If we make it possible to have two different system installations - one in the image, and one in /var - then we could update and install new things, but the installation on the image would become useless over time because all the runtimes and apps will be in /var anyway as they get updated.
All of those issues mean that even for image-based operating systems, pre-installation via a sideload repository is not a bad idea for now. It is however also not perfect. The kind of "pure" installation medium which is simply an image now contains a sideload repository. It also means that a factory reset functionality is not possible because the image does not contain the pre-installed apps.
In the future, we will need to revisit these approaches to find a solution that works seamlessly with image-based operating systems and supports factory reset functionality. Until then, we can use the systems mentioned above to start rolling out pre-installed Flatpaks.
13 Dec 2025 5:17pm GMT
24 Nov 2025
Dave Airlie (blogspot): fedora 43: bad mesa update oopsie
F43 picked up the two patches I created to fix a bunch of deadlocks on laptops reported in my previous blog posting. Turns out Vulkan layers have a subtle thing I missed, and I removed a line from the device select layer that would only matter if you have another layer, which happens under steam.
The Fedora update process caught this, but it still got published, which was a mistake; I probably need to give changes like this higher karma thresholds.
I've released a new update https://bodhi.fedoraproject.org/updates/FEDORA-2025-2f4ba7cd17 that hopefully fixes this. I'll keep an eye on the karma.
24 Nov 2025 1:42am GMT
23 Nov 2025
Juan A. Suarez: Major Upgrades to the Raspberry Pi GPU Driver Stack (XDC 2025 Recap)
XDC 2025 happened at the end of September, beginning of October this year, in Kuppelsaal, the historic TU Wien building in Vienna. XDC, The X.Org Developer's Conference, is truly the premier gathering for open-source graphics development. The atmosphere was, as always, highly collaborative and packed with experts across the entire stack.
I was thrilled to present, together with my workmate Ella Stanforth, on the progress we have made in enhancing the Raspberry Pi GPU driver stack. Representing the broader Igalia Graphics Team that works on this GPU, Ella and I detailed the strides we have made in the OpenGL driver, though some of the improvements also benefit the Vulkan driver.
The presentation was divided into two parts. In the first one, we talked about the new features that we have implemented, or are still implementing, mainly to bring the driver more closely in line with OpenGL 3.2. Key features explained were 16-bit Normalized Format support, Robust Context support, and Seamless cubemap implementation.
Beyond these core OpenGL updates, we also highlighted other features, such as NIR printf support, framebuffer fetch or dual source blend, which is important for some game emulators.
The second part was focused on specific work done to improve performance. Here, we started with different traces from the popular GFXBench application, and explained the main improvements made throughout the year, with a look at how much each of these changes improved the performance of each benchmark (or on average).
In the end, for some benchmarks we nearly doubled the performance compared to last year. I won't explain each of the changes here, but I encourage the reader to watch the talk, which is already available.
For those that prefer to check the slides instead of the full video, you can view them here:
Outside of the technical track, the venue's location provided some excellent down time opportunities to have lunch at different nearby places. I need to highlight here one that I really enjoyed: An's Kitchen Karlsplatz. This cozy Vietnamese street food spot quickly became one of my favourite places, and I went there a couple of times.
On the last day, I also had the opportunity to visit some of the most recommended sightseeing spots in Vienna. Of course, one needs more than half a day for a proper visit, but at least it sparked enough interest to put a full visit to the city on my list.
Meanwhile, I would like to thank all the conference organizers, as well as all the attendees, and I look forward to seeing them again.
23 Nov 2025 11:00pm GMT
17 Nov 2025
Lennart Poettering: Mastodon Stories for systemd v258
Already on Sep 17 we released systemd v258 into the wild.
In the weeks leading up to that release I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd258 hash tag. It was my intention to post a link list here on this blog right after completing that series, but I simply forgot! Hence, in case you aren't using Mastodon, but would like to read up, here's a list of all 55 posts:
- Post #1: systemctl start -v
- Post #2: Home areas
- Post #3: systemd-resolved delegate zones
- Post #4: Foreign UID range
- Post #5: /etc/hostname ??? wildcards
- Post #6: Quota on /tmp/
- Post #7: ConcurrencySoftMax= + ConcurrencyHardMax=
- Post #8: Product UUID in ConditionHost=
- Post #9: Context OSC terminal sequences
- Post #10: uki-url Boot Loader Spec Type #1 fields
- Post #11: rd.break= boot breakpoints
- Post #12: Factory Reset Rework
- Post #13: systemd-resolved DNS Configuration Change IPC Subscription API
- Post #14: io.systemd.boot-entries.extra= SMBIOS Type #11 Key
- Post #15: Bring Your Own Firmware
- Post #16: userdb record aliases
- Post #17: systemd-validatefs and its xattrs
- Post #18: Offline Signing of Artifacts
- Post #19: PAMName= in services hooked up to ask-password protocol
- Post #20: x-systemd.graceful-option= mount option
- Post #21: systemd-userdb-load-credentials.service
- Post #22: systemd-vmspawn --grow-image=a
- Post #23: systemd-notify --fork
- Post #24: $TERM auto-discovery
- Post #25: Rebooting/Powering off systemd-nspawn containers via hotkey
- Post #26: ExecStart= | modifier
- Post #27: systemctl reload reloads confexts
- Post #28: Server side userdb filtering
- Post #29: Quota on StateDirectory= and friends
- Post #30: systemd-analyze unit-shell
- Post #31: /etc/issue.d/ drop-in for AF_VSOCK CID
- Post #32: fsverity in systemd-repart
- Post #33: AcceptFileDescriptor= + PassPIDFD=
- Post #34: Tab completion in interactive systemd-firstboot
- Post #35: rd.systemd.pull= kernel command line option/Boot into tarball
- Post #36: ConditionKernelModuleLoaded=
- Post #37: systemd-analyze chid
- Post #38: homectl list-signing-keys/get-signing-key/add-signing-key/remove-signing-key
- Post #39: DDI Image Filters
- Post #40: Android USB Debugging udev rules
- Post #41: systemd-vmspawn's --smbios11= switch
- Post #42: $MAINPIDFDID + $MANAGERPIDFDID
- Post #43: $DEBUG_INVOCATION=1 Respected by all systemd services
- Post #44: LoaderDeviceURL EFI Variable and systemd.pull='s origin kernel command line switch
- Post #45: cgroupv1 removal
- Post #46: ProtectHostname=private
- Post #47: homectl adopt + homectl register
- Post #48: systemd-machined Varlink APIs
- Post #49: DeferTrigger and "lenient" job mode
- Post #50: Automatic Removal of foreign UID owned delegate subgroups in the per-user service manager
- Post #51: Per-user ask-password protocol
- Post #52: PrivateUsers=full
- Post #53: LoadCredentialEncrypted= in the per-user service manager
- Post #54: dissect_image builtin in systemd-udevd
- Post #55: BPF Delegation via Tokens
I intend to do a similar series of serieses of posts for the next systemd release (v259), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.
We intend to shorten the release cycle a bit for the future, and in fact managed to tag v259-rc1 already yesterday, just 2 months after v258. Hence, my series for v259 will begin soon, under the #systemd259 hash tag.
In case you are interested, here is the corresponding blog story for systemd v257, and here for v256.
17 Nov 2025 11:00pm GMT
Rodrigo Siqueira: XDC 2025
It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time for me to make something different with this blog and publish something just for a change - why not start talking about XDC 2025?
This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see some faces from people I worked with in the past and people I'm working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.
The workshop was packed with developers from different companies, which was nice because it added different angles on this topic. We began our discussion by focusing on the topic of job resubmission. Christian began by sharing a brief history of how the AMDGPU driver started handling resubmission and the associated issues. After learning from that past experience, amdgpu ended up adopting the following approach:
- When a job causes a hang, call the driver-specific handler.
- Stop the scheduler.
- Copy all jobs from the ring buffer, minus the job that caused the issue, to a temporary ring.
- Reset the ring buffer.
- Copy back the other jobs to the ring buffer.
- Resume the scheduler.
Below, you can see one crucial series associated with amdgpu recovery implementation:
The next topic was a discussion around the replacement of drm_sched_resubmit_jobs() since this function became deprecated. Just a few drivers still use this function, and they need a replacement for that. Some ideas were floating around to extract part of the specific implementation from some drivers into a generic function. The next day, Philipp Stanner continued to discuss this topic in his workshop, DRM GPU Scheduler.
Another crucial topic discussed was improving GPU reset debuggability to narrow down which operations cause the hang (keep in mind that GPU recovery is a medicine, not the cure to the problem). Intel developers shared their strategy for dealing with this by obtaining hints from userspace, which helped them provide a better set of information to append to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).
Finally, we discussed strategies to avoid regressions of hang issues. In summary, we have two lines of defense:
- IGT: At the IGT level, we can have more tests that insert malicious instructions into the ring buffer, forcing the driver into an invalid state and triggering the recovery process.
- HangTest suite: a tool that simulates some potential hangs using Vulkan. Some tests are already available in this suite, but we should explore more creative combinations to try to trigger hangs.
This year, as always, XDC was super cool, packed with many engaging presentations which I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation YouTube page. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content ranging from updates to tutorials. See you soon.
17 Nov 2025 12:00am GMT
15 Nov 2025
planet.freedesktop.org
Simon Ser: Status update, November 2025
Hi!
This month a lot of new features have been added to the Goguma mobile IRC client. Hubert Hirtz has implemented drafts so that unsent text gets saved and network disconnections don't disrupt users typing a message. He also enabled replying to one's own messages, changed the appearance of short messages containing only emoji, upgraded our emoji library to Unicode version 16, fixed some linkifier bugs and added unit tests.

Markus Cisler has added a new option in the message menu to show a user's profile. I've added an on-disk cache for images (with our own implementation, because the widely used cached_network_image package is heavyweight). I've been working on displaying network icons and blocking users, but that work is not finished yet. I've also contributed some maintenance fixes for our webcrypto.dart dependency (toolkit upgrades and CI fixes).
The soju IRC bouncer has also got some love this month. delthas has contributed support for labeled-response for soju clients, allowing more reliable matching of server replies with client commands. I've introduced a new icon directive to configure an image representing the bouncer. soju v0.10.0 has been released, followed by soju v0.10.1 including bug fixes from Karel Balej and Taavi Väänänen.
In Wayland news, wlroots v0.19.2 and v0.18.3 have been released thanks to Simon Zeni. I've added support for the color-representation protocol for the Vulkan renderer, allowing clients to configure the color encoding and range for YCbCr content. Félix Poisot has been hard at work with more color management patches: screen default color primaries are now extracted from the EDID and exposed to compositors, the cursor is now correctly converted to the output's primaries and transfer function, and some work-in-progress patches switch the renderer API from a descriptive model to a prescriptive model.
go-webdav v0.7.0 has been released with a patch from prasad83 to play well with Thunderbird. I've updated clients to make multi-status errors non-fatal, returning partial data alongside the error.
I've released drm_info v2.9.0 with improvements mentioned in the previous status update plus support for the TILE connector property.
See you next month!
15 Nov 2025 10:00pm GMT
10 Nov 2025
planet.freedesktop.org
Dave Airlie (blogspot): a tale of vulkan/nouveau/nvk/zink/mutter + deadlocks
I had a bug appear in my email recently which led me down a rabbit hole, and I'm going to share it for future people wondering why we can't have nice things.
Bug:
1. Get an intel/nvidia (newer than Turing) laptop.
2. Log in to GNOME on Fedora 42/43
3. Hotplug a HDMI port that is connected to the NVIDIA GPU.
4. Desktop stops working.
My initial reproduction got me a hung mutter process with a nice backtrace which pointed at the Mesa Vulkan device selection layer, trying to talk to the wayland compositor to ask it what the default device is. The problem was that the process was the wayland compositor itself, so how was this ever supposed to work? The Vulkan device selection was called because zink called EnumeratePhysicalDevices, and zink was being loaded because we recently switched to it as the OpenGL driver for newer NVIDIA GPUs.
I looked into zink and the device select layer code, and lo and behold, someone has already hacked around this badly, and probably wrongly, and I've no idea what the code does, because I think there is at least one logic bug in it. Nice things can't be had because hacks were done instead of just solving the problem.
The hacks in place ensured that, under certain circumstances involving zink/xwayland, the device select code probing the window system was disabled, due to deadlocks that had been seen. I'd no idea if more hacks were going to help, so I decided to step back and try to work out something better.
The first question I had is why WAYLAND_DISPLAY is set inside the compositor process. It is, and if it wasn't I would never hit this. It's pretty likely that on the initial compositor start this env var isn't set, so the problem only becomes apparent when the compositor gets a hotplugged GPU output and goes to load the OpenGL driver, zink, which enumerates devices and hits device select with the env var set, and deadlocks.
I wasn't going to figure out a way around WAYLAND_DISPLAY being set at this point, so I leave the above question as an exercise for mutter devs.
How do I fix it?
Attempt 1:
At the point where zink is loading in mesa for this case, we have the file descriptor of the GPU device that we want to load a driver for. We don't actually need to enumerate all the physical devices, we could just find the ones for that fd. There is no API for this in Vulkan. I wrote an initial proof-of-concept instance extension called VK_MESA_enumerate_devices_fd. I wrote initial loader code to play with it, and wrote zink code to use it. Because this is a new instance API, device-select will also ignore it. However this ran into a big problem in the Vulkan loader. The loader is designed around the assumption that PhysicalDevices will enumerate in similar ways, and it has to trampoline PhysicalDevice handles to underlying driver pointers so that if an app enumerates once, and enumerates again later, the PhysicalDevice handles remain consistent for the first user. There is a lot of code, and I've no idea how hotplug GPUs might fail in such situations. I couldn't find a decent path forward without knowing a lot more about the Vulkan loader. I believe this is the proper solution: since we know the fd, we should be able to get things without doing a full enumeration and then picking the answer using the fd info. I've asked the Vulkan WG to take a look at this, but I still need to fix the bug.
Attempt 2:
Maybe I can just turn off device selection, like the current hacks do, but in a better manner. Enter VK_EXT_layer_settings. This extension allows layers to expose settings at instance creation. I can have the device select layer expose a setting which says "don't touch this instance". Then in the zink code where we have a file descriptor being passed in and create an instance, we set the layer setting to avoid device selection. This seems to work, but it has some caveats I need to consider; I think it should be fine.
zink uses a single VkInstance for its device screen. This is shared between all pipe_screens. I think this is fine inside a compositor, since we shouldn't ever be loading zink via the non-fd path, and I hope for most use cases it will work fine, better than the current hacks and better than some other ideas we threw around. The code for this is in [1].
What else might be affected:
If you have a Vulkan compositor, it might be worth setting the layer setting if the mesa device select layer is loaded, especially if you set DISPLAY/WAYLAND_DISPLAY and do any sort of hotplug later. You might be safe if you EnumeratePhysicalDevices early enough; the reason it's a big problem in mutter is that it doesn't use Vulkan directly, it uses OpenGL, and we only enumerate Vulkan physical devices at runtime through zink, never at startup.
AMD and NVIDIA I think have proprietary device selection layers; these might also deadlock in similar ways. I think we've seen some weird deadlocks in NVIDIA driver enumerations as well that might be a similar problem.
[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/38252
10 Nov 2025 3:16am GMT
04 Nov 2025
planet.freedesktop.org
Sebastian Wick: Flatpak Happenings
Yesterday I released Flatpak 1.17.0. It is the first version of the unstable 1.17 series and the first release in 6 months. There are a few things which didn't make it for this release, which is why I'm planning to do another unstable release rather soon, and then a stable release still this year.
Back at LAS this year I talked about the Future of Flatpak and I started with the grim situation the project found itself in: Flatpak was stagnant, the maintainers left the project and PRs didn't get reviewed.
Some good news: things are a bit better now. I have taken over maintenance, Alex Larsson and Owen Taylor managed to set aside enough time to make this happen and Boudhayan Bhattcharya (bbhtt) and Adrian Vovk also got more involved. The backlog has been reduced considerably and new PRs get reviewed in a reasonable time frame.
I also listed a number of improvements that we had planned, and we made progress on most of them:
- It is now possible to define which Flatpak apps shall be pre-installed on a system, and Flatpak will automatically install and uninstall things accordingly. Our friends at Aurora and Bluefin already use this to ship core apps from Flathub on their bootc based systems (shout-out to Jorge Castro).
- The OCI support in Flatpak has been enhanced to support pre-installing from OCI images and remotes, which will be used in RHEL 10.
- We merged the backwards-compatible permission system. This allows apps to use new, more restrictive permissions while not breaking compatibility when the app runs on older systems. Specifically, access to input devices such as gamepads and access to the USB portal can now be granted in this way. It will also help us to transition to PipeWire.
- We have up-to-date docs for libflatpak again
Besides the changes directly in Flatpak, there are a lot of other things happening around the wider ecosystem:
- bbhtt released a new version of flatpak-builder
- Enhanced License Compliance Tools for Flathub
- Adrian and I have made plans for a service which allows querying running app instances (systemd-appd). This provides a new way of authenticating Flatpak instances and is a prerequisite for nested sandboxing, PipeWire support, and getting rid of the D-Bus proxy. My previous blog post went into a few more details.
- Our friends at KDE have started looking into the XDG Intents spec, which will hopefully allow us to implement deep-linking, thumbnailing in Flatpak apps, and other interesting features
- Adrian made progress on the session save/restore Portal
- Some rather big refactoring work in the Portals frontend, and GDBus and libdex integration work which will reduce the complexity of asynchronous D-Bus
What I have also talked about at my LAS talk is the idea of a Flatpak-Next project. People got excited about this, but I feel like I have to make something very clear:
If we redid Flatpak now, it would not be significantly better than the current Flatpak! You could still not do nested sandboxing, you would still need a D-Bus proxy, you would still have a complex permission system, and so on.
Those problems require work outside of Flatpak, but have to integrate with Flatpak and Flatpak-Next in the future. Some of the things we will be doing include:
- Work on the systemd-appd concept
- Make varlink a feasible alternative to D-Bus
- D-Bus filtering in the D-Bus daemons
- Network sandboxing via pasta
- PipeWire policy for sandboxes
- New Portals
So if you're excited about Flatpak-Next, help us to improve the Flatpak ecosystem and make Flatpak-Next more feasible!
04 Nov 2025 8:28pm GMT
03 Nov 2025
planet.freedesktop.org
Melissa Wen: Kworkflow at Kernel Recipes 2025

This was the first year I attended Kernel Recipes and I can only say how much I enjoyed it and how grateful I am for the opportunity to talk more about kworkflow to very experienced kernel developers. What I like most about Kernel Recipes is its intimate format, with only one track and many moments to get closer to experts and people you usually only talk to online during the rest of the year.
At the beginning of this year, I gave the talk Don't let your motivation go, save time with kworkflow at FOSDEM, introducing kworkflow to a more diverse audience with different levels of involvement in Linux kernel development.
At this year's Kernel Recipes I presented the second talk of the first day: Kworkflow - mix & match kernel recipes end-to-end.
The Kernel Recipes audience is a bit different from FOSDEM's, with mostly long-term kernel developers, so I decided to go straight to the point. I showed kworkflow as part of the daily life of a typical kernel developer, from the local setup for installing a custom kernel on different target machines to sending and applying patches to/from the mailing list. In short, I showed how to mix and match kernel workflow recipes end-to-end.
As I was a bit fast when showing some features during my presentation, in this blog post I explain each slide from my speaker notes. You can see a summary of this presentation in the Kernel Recipe Live Blog Day 1: morning.
Introduction

Hi, I'm Melissa Wen from Igalia. As we already started sharing kernel recipes and even more is coming in the next three days, in this presentation I'll talk about kworkflow: a cookbook to mix & match kernel recipes end-to-end.

This is my first time attending Kernel Recipes, so lemme introduce myself briefly.
- As I said, I work for Igalia, mostly on kernel GPU drivers in the DRM subsystem.
- In the past, I co-maintained VKMS and the v3d driver. Nowadays I focus on the AMD display driver, mostly for the Steam Deck.
- Besides code, I contribute to the Linux kernel by mentoring several newcomers in Outreachy, Google Summer of Code and Igalia Coding Experience. Also, by documenting and tooling the kernel.

And what's this cookbook called kworkflow?
Kworkflow (kw)

Kworkflow is a tool created by Rodrigo Siqueira, my colleague at Igalia. It's a single platform that combines software and tools to:
- optimize your kernel development workflow;
- reduce time spent in repetitive tasks;
- standardize best practices;
- ensure that deployment data flows smoothly and reliably between different kernel workflows;

It's mostly done by volunteers, kernel developers using their spare time. Its features cover real use cases according to kernel developer needs.

Basically, it mixes and matches the daily life of a typical kernel developer with kernel workflow recipes and some secret sauces.
First recipe: A good GPU driver for my AMD laptop

So, it's time to start the first recipe: A good GPU driver for my AMD laptop.

Before starting any recipe we need to check the necessary ingredients and tools. So, let's check what you have at home.
With kworkflow, you can use:


- kw device: to get information about the target machine, such as CPU model, kernel version, distribution and GPU model
- kw remote: to set the address of this machine for remote access
- kw config: to configure kw. With this command you can basically select the tools, flags and preferences that kw will use to build and deploy a custom kernel on a target machine. You can also define the recipients of your patches when sending them with kw send-patch. I'll explain more about each feature later in this presentation.
- kw kernel-config-manager (or just kw k): to fetch the kernel .config file from a given machine, store multiple .config files, and list and retrieve them according to your needs.
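Put together, this preparation step looks roughly like the following on the command line (a sketch only; I'm omitting the exact arguments, since they depend on your target machine and setup):
kw device                  # get information about the target machine: CPU, kernel version, distribution, GPU
kw remote                  # set the address of the target machine for remote access
kw config                  # select the tools, flags and preferences used to build and deploy the custom kernel
kw k                       # fetch and store the target machine's kernel .config for later reuse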

Now, with all ingredients and tools selected and well portioned, follow the right steps to prepare your custom kernel!
First step: Mix ingredients with kw build or just kw b


kw b and its options wrap many routines of compiling a custom kernel.
- You can run kw b -i to check the name, kernel version and number of modules that will be compiled, and kw b --menu to change kernel configurations.
- You can also pre-configure compiling preferences for kernel building in kw config: for example, the target architecture, the name of the generated kernel image, whether you need to cross-compile this kernel for a different system and which tool to use for it, different warning levels, compiling with CFLAGS, etc.
- Then you can just run kw b to compile the custom kernel for the target machine.
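As a minimal sketch, this step is just:
kw b -i        # check the kernel name/version and how many modules will be compiled
kw b --menu    # tweak the kernel configuration if needed
kw b           # compile the custom kernel for the target machine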
Second step: Bake it with kw deploy or just kw d


After compiling the custom kernel, we want to install it on the target machine. Check the name of the custom kernel we built, 6.17.0-rc6, then use kw s to SSH into the target machine and see that it's currently running the kernel from the Debian distribution, 6.16.7+deb14-amd64.
As with the build settings, you can also pre-configure some deployment settings, such as the compression type, the path to device tree binaries, the target machine (remote, local, vm), whether you want to reboot the target machine right after deploying your custom kernel, and whether you want to boot into the custom kernel when restarting the system after deployment.
If you didn't pre-configure some options, you can still customize them as command options; for example, kw d --reboot will reboot the system after deployment, even if I didn't set this in my preferences.
By just running kw d --reboot, I have installed the kernel on a given target machine and rebooted it, so when accessing the system again I can see it booted into my custom kernel.
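Condensed into commands, assuming the remote target is already configured as above:
kw d --reboot    # install the freshly built kernel on the target machine and reboot it
kw s             # SSH into the target machine and confirm it booted into the custom kernel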
Third step: Time to taste with kw debug


kw debug wraps many tools for validating a kernel on a target machine. We can log basic dmesg info but also track events and ftrace.
- With kw debug --dmesg --history we can grab the full dmesg log from a remote machine; if you use the --follow option, you will monitor dmesg output as it happens. You can also run a command with kw debug --dmesg --cmd="<my command>" and collect just the dmesg output related to that specific execution period.
- In the example, I'll just unload the amdgpu driver. I use kw drm --gui-off to drop the graphical interface and release amdgpu so it can be unloaded. Then I run kw debug --dmesg --cmd="modprobe -r amdgpu" to unload the amdgpu driver, but it fails and I couldn't unload it.
Cooking Problems

Oh no! That custom kernel isn't tasting good. Don't worry: as in many recipe preparations, we can search the internet for suggestions on how to make it tastier, alternative ingredients and other flavours according to your taste.

With kw patch-hub you can search the lore kernel mailing list archives for patches that might fix your kernel issue. You can navigate the mailing lists, check series, bookmark them if you find them relevant, and apply them to your local kernel tree, creating a different branch for tasting… oops, for testing. In this example, I'm opening the amd-gfx mailing list, where I can find contributions related to the AMD GPU driver, bookmark and/or just apply the series to my work tree, and with kw bd I can compile & install the custom kernel with this possible bug fix in one shot.
As I changed my kw config to reboot after deployment, I just need to wait for the system to boot to try unloading the amdgpu driver again with kw debug --dmesg --cmd="modprobe -r amdgpu". From the dmesg output retrieved by kw for this command, the driver was unloaded: the problem is fixed by this series and the kernel tastes good now.
If I'm satisfied with the solution, I can even use kw patch-hub to access the bookmarked series and mark the checkbox that will reply to the patch thread with a Reviewed-by tag for me.
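The whole fix-and-retest loop, roughly (kw patch-hub is an interactive interface, so it is shown here only as the entry point):
kw patch-hub                                   # browse lore, bookmark and apply a candidate series to the work tree
kw bd                                          # compile and install the patched kernel in one shot
kw debug --dmesg --cmd="modprobe -r amdgpu"    # retry the failing operation and inspect the dmesg output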
Second Recipe: Raspberry Pi 4 with Upstream Kernel

As in all recipes, we need ingredients and tools, but with kworkflow you can get everything set up in an instant, like changing scenes in a TV show. We can use kw env to change to a different environment with all kw and kernel configuration set and also with the latest compiled kernel cached.

I was preparing the first recipe for an x86 AMD laptop, and with kw env --use RPI_64 I use the same worktree but move to a different kernel workflow, now for the 64-bit Raspberry Pi 4. The previously compiled kernel 6.17.0-rc6-mainline+ is there with 1266 modules, not the 6.17.0-rc6 kernel with 285 modules that I just built & deployed. The kw build settings are also different: now I'm targeting the arm64 architecture with a cross-compiled kernel using the aarch64-linux-gnu- cross-compilation toolchain, and my kernel image is now called kernel8.

If you didn't plan for this recipe in advance, don't worry. You can create a new environment with kw env --create RPI_64_V2 and run kw init --template to start preparing your kernel recipe with the mirepoix ready.
I mean, with the basic ingredients already cut…
I mean, with the kw configuration set from a template.

And you can use kw remote to set the IP address of your target machine and kw kernel-config-manager to fetch/retrieve the .config file from your target machine. So just run kw bd to compile and install an upstream kernel for the Raspberry Pi 4.
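Sketching this recipe with the commands above (environment names as used in the talk):
kw env --use RPI_64          # reuse the existing Raspberry Pi 4 environment in the same worktree
kw env --create RPI_64_V2    # or create a brand new environment...
kw init --template           # ...and start from a kw configuration template
kw remote                    # set the IP address of the Raspberry Pi
kw kernel-config-manager     # fetch/retrieve the board's .config file
kw bd                        # build and deploy an upstream kernel for the Raspberry Pi 4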

Third Recipe: The Mainline Kernel Ringing on my Steam Deck (Live Demo)

Let me show you how easy it is to build, install and test a custom kernel for the Steam Deck with Kworkflow. It's a live demo, but I also recorded it because I know the risks I'm exposed to and something can go very wrong just because of reasons :)

Report: how was the live demo
For this live demo, I took my OLED Steam Deck to the stage. I explained that, if I boot the mainline kernel on this device, there is no audio. So I turned it on and booted the mainline kernel I had installed beforehand. It was clear that there was no typical Steam Deck startup sound when the system loaded.

As I started the demo in the kw environment for Raspberry Pi 4, I first moved to another environment previously used for Steam Deck. In this STEAMDECK environment, the mainline kernel was already compiled and cached, and all settings for accessing the target machine, compiling and installing a custom kernel were retrieved automatically.
My live demo followed these steps:
- With kw env --use STEAMDECK, switch to a kworkflow environment for Steam Deck kernel development.
- With kw b -i, show that kw will compile and install a kernel with 285 modules named 6.17.0-rc6-mainline-for-deck.
- Run kw config to show that, in this environment, the kw configuration changes to the x86 architecture and without cross-compilation.
- Run kw device to display information about the Steam Deck device, i.e. the target machine. It also proves that the remote access - user and IP - for this Steam Deck was already configured when using the STEAMDECK environment, as expected.
- Using git am, as usual, apply a hot fix on top of the mainline kernel. This hot fix makes the audio play again on the Steam Deck.
- With kw b, build the kernel with the audio change. It will be fast because we are only compiling the affected files, since everything was previously done and cached: the compiled kernel, kw configuration and kernel configuration are retrieved by just moving to the "STEAMDECK" environment.
- Run kw d --force --reboot to deploy the new custom kernel to the target machine. The --force option enables us to install the mainline kernel even if mkinitcpio complains about missing support for downstream packages when generating the initramfs. The --reboot option makes the Steam Deck reboot automatically right after the deployment completes.
- After finishing the deployment, the Steam Deck reboots into the new custom kernel version and makes a clear, resonant or vibrating sound. [Hopefully]
Finally, I showed the audience that, if I wanted to send this patch upstream, I would just need to run kw send-patch, and kw would automatically add the subsystem maintainers, reviewers and mailing lists for the affected files as recipients and send the patch for the upstream community's assessment. As I didn't want to create unnecessary noise, I just did a dry run with kw send-patch -s --simulate to explain how it looks.
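For reference, the demo condensed into its command sequence (the patch file name below is just a placeholder):
kw env --use STEAMDECK         # switch to the Steam Deck environment: cached kernel, configs and remote access
kw b -i                        # show what will be built: 6.17.0-rc6-mainline-for-deck, 285 modules
kw config                      # confirm the x86 build settings without cross-compilation
kw device                      # display information about the Steam Deck and its remote access
git am audio-fix.patch         # apply the hot fix that brings the audio back (placeholder file name)
kw b                           # rebuild; only the affected files are recompiled
kw d --force --reboot          # deploy despite mkinitcpio complaints and reboot the Deck
kw send-patch -s --simulate    # dry run of sending the fix upstream with auto-detected recipients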
What else can kworkflow already mix & match?
In this presentation, I showed that kworkflow supports different kernel development workflows, i.e., multiple distributions, different bootloaders and architectures, different target machines and different debugging tools, and that it automates best practices for your kernel development routine, from setting up the development environment and verifying a custom kernel on bare metal to sending contributions upstream following the contribution-by-e-mail process. I exemplified it with three different target machines: my ordinary x86 AMD laptop with Debian, a Raspberry Pi 4 with arm64 Raspbian (cross-compilation) and the Steam Deck with SteamOS (an x86 Arch-based OS). Besides those distributions, Kworkflow also supports Ubuntu, Fedora and PopOS.
Now it's your turn: Do you have any secret recipes to share? Please share with us via kworkflow.
Useful links
- Talk Recording of Kworkflow at Kernel Recipes 2025 on Igalia's Channel
- Talk Abstract, Recording and Slide Deck of Kworkflow at Kernel Recipes 2025 on Kernel Recipes Website
- Talk Slide Deck for Download with some Videos instead of GIFs
03 Nov 2025 9:30pm GMT
31 Oct 2025
planet.freedesktop.org
Mike Blumenkrantz: Hibernate On
Take A Break
We've reached Q4 of another year, and after the mad scramble that has been crunch-time over the past few weeks, it's time for SGC to once again retire into a deep, restful sleep.
2025 saw a lot of ground covered:
- NVK-Zink synergy
- Continued Rusticl improvements
- Viewperf perf and general CPU overhead reduction
- Tiler GPU perf
- Mesh shaders
- apitrace perf
- More GL extensions released than any other year this decade
It's been a real roller coaster ride of a year as always, but I can say authoritatively that fans of the blog, you need to take care of yourselves. You need to use this break time wisely. Rest. Recover. Train your bodies. Travel and broaden your horizons. Invest in night classes to expand your minds.
You are not prepared for the insanity that will be this blog in 2026.
31 Oct 2025 12:00am GMT