29 Apr 2024

Hans de Goede: Moving GPU drivers out of the initramfs

The firmware that drm/kms drivers need is becoming bigger and bigger, and there is a push to move to generating a generic initramfs on distro builders and signing the initramfs with the distro's keys for security reasons. When targeting desktops/laptops (as opposed to VMs) this means including firmware for all possible GPUs, which leads to a very big initramfs.

This has made me think about dropping the GPU drivers from the initramfs and instead making plymouth work well/better with simpledrm (on top of efifb). A while ago I discussed making this change for Fedora with the Red Hat graphics team. Spoiler: for now nothing is going to change.

Let me repeat that: for now there are no plans to implement this idea, so if you believe you would be impacted by such a change: nothing is going to change.

Still, this is something worthwhile to explore further.

Advantages:

1. Smaller initramfs size:

* E.g. a host-specific initramfs with amdgpu goes down from 40MB to 20MB
* No longer need to worry about Nvidia GSP firmware size in the initrd
* This should also significantly shrink the initrd used in live images

2. Faster boot times:

* Loading + unpacking the initrd can take a surprising amount of time. E.g. on my old AMD64 embedded PC (with Bobcat cores) the reduction of 40MB -> 20MB in initrd size shaves approx. 3 seconds off the initrd load time + 0.6 seconds off the time it takes to unpack the initrd
* Probing drm connectors can be slow and plymouth blocks the initrd -> rootfs transition while it is busy probing

3. Earlier showing of splash. By using simpledrm for the splash the splash can be shown earlier, avoiding the impression the machine is hanging during boot. An extreme example of this is my old AMD64 embedded PC, where the time to show the first frame of the splash goes down from 47 to 9 seconds.

4. One less thing to worry about when trying to create a uniform desktop pre-generated and signed initramfs (these would still need support for nvme + ahci and commonly used rootfs + lvm + luks).

Disadvantages:

Doing this will lead to user visible changes in the boot process:

1. Secondary monitors not lit up by the efifb will stay black during full-disk encryption password entry, since the GPU drivers will now only load after switching to the encrypted root. This includes any monitors connected to the non-boot GPU in dual-GPU setups.

Generally speaking this is not really an issue; the secondary monitors will light up pretty quickly after the switch to the real rootfs. However, when booting a docked laptop with the lid closed, where the only visible monitor(s) are connected to the non-boot GPU, the full-disk encryption password dialog will simply not be visible at all.

This is the main deal-breaker for not implementing this change.

Note that because of the strict version lock between kernel driver and userspace with the nvidia binary drivers, the nvidia binary drivers are usually already not part of the initramfs, so this problem already exists and moving the GPU drivers out of the initramfs does not really make it worse.


2. With simpledrm plymouth does not get the physical size of the monitor, so plymouth will need to switch to using heuristics based on the resolution instead of DPI info to decide whether or not to use hidpi (e.g. 2x size) rendering. Even when switching to the real GPU driver, plymouth needs to stay with its initial heuristics-based decision, to avoid the scaling changing when switching to the real driver, which would lead to a big visual glitch / change halfway through the boot.

This may result in a different scaling factor for some setups, but I do not expect this really to be an issue.

3. On some (older) systems the efifb will not come up in native mode, but rather in 800x600 or 1024x768.

This will lead to a pretty significant discontinuity in the boot experience when switching from say 800x600 to 1920x1080 while plymouth was already showing the spinner at 800x600.

One possible workaround here is to add 'video=efifb:auto' to the kernel commandline, which will make the efistub switch to the highest available resolution before starting the kernel. But it seems that the native modes are simply not there on systems which come up at 800x600 / 1024x768, so this does not really help.

This does not actually break anything but it does look a bit ugly. So we will just need to document this as an unfortunate side-effect of the change and then we (and our users) will have to live with this (on affected hardware).

4. On systems where a full modeset is done, the monitor going briefly black from the modeset will move from just before plymouth starts to the switch from simpledrm to the real driver. So that is slightly worse. IMHO the answer here is to try and get fast modesets working on more systems.

5. On systems where the efifb comes up in the panel's native mode and a fast modeset can be done, the spinner will freeze for a (noticeable) fraction of a second as the switch to the real driver happens.

Preview:

To get an impression of what this will look / feel like on your own systems, you can implement this right now on Fedora 40 with some manual configuration changes:

1. Create /etc/dracut.conf.d/omit-gpu-drivers.conf with:

omit_drivers+=" amdgpu radeon nouveau i915 "

And then run "sudo dracut -f" to regenerate your current initrd.

2. Add to kernel commandline: "plymouth.use-simpledrm"

3. Edit /etc/selinux/config and set SELINUX=permissive. This is necessary because at the moment plymouth has issues with accessing drm devices after the chroot from the initrd to the rootfs. (A combined sketch for steps 2 and 3 follows below.)
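
For convenience, steps 2 and 3 could be scripted roughly as follows. This is a minimal sketch that assumes a Fedora system where grubby is available for editing the kernel commandline; adapt it to your bootloader setup if it differs.

# Step 2: add plymouth.use-simpledrm to the commandline of all installed kernels
sudo grubby --update-kernel=ALL --args="plymouth.use-simpledrm"

# Step 3: switch SELinux to permissive mode, to work around plymouth's current
# issues accessing drm devices after the initrd -> rootfs chroot
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config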

Note this all assumes EFI booting with efifb used to show the plymouth boot splash. For classic BIOS booting it is probably best to stick with having the GPU drivers inside the initramfs.

29 Apr 2024 1:46pm GMT

26 Apr 2024

Robert McQueen: Update from the GNOME board

It's been around 6 months since the GNOME Foundation was joined by our new Executive Director, Holly Million, and the board and I wanted to update members on the Foundation's current status and some exciting upcoming changes.

Finances

As you may be aware, the GNOME Foundation has operated at a deficit (nonprofit speak for a loss - ie spending more than we've been raising each year) for over three years, essentially running the Foundation on reserves from some substantial donations received 4-5 years ago. The Foundation has a reserves policy which specifies a minimum amount of money we have to keep in our accounts. This is so that if there is a significant interruption to our usual income, we can preserve our core operations while we work on new funding sources. We've now "hit the buffers" of this reserves policy, meaning the Board can't approve any more deficit budgets - to keep spending at the same level we must increase our income.

One of the board's top priorities in hiring Holly was therefore her experience in communications and fundraising, and building broader and more diverse support for our mission and work. Her goals since joining - as well as building her familiarity with the community and project - have been to set up better financial controls and reporting, develop a strategic plan, and start fundraising. You may have noticed the Foundation being more cautious with spending this year, because Holly prepared a break-even budget for the Board to approve in October, so that we can steady the ship while we prepare and launch our new fundraising initiatives.

Strategy & Fundraising

The biggest prerequisite for fundraising is a clear strategy - we need to explain what we're doing and why it's important, and use that to convince people to support our plans. I'm very pleased to report that Holly has been working hard on this and meeting with many stakeholders across the community, and has prepared a detailed and insightful five year strategic plan. The plan defines the areas where the Foundation will prioritise, develop and fund initiatives to support and grow the GNOME project and community. The board has approved a draft version of this plan, and over the coming weeks Holly and the Foundation team will be sharing this plan and running a consultation process to gather feedback from GNOME Foundation and community members.

In parallel, Holly has been working on a fundraising plan to stabilise the Foundation, growing our revenue and ability to deliver on these plans. We will be launching a variety of fundraising activities over the coming months, including a development fund for people to directly support GNOME development, working with professional grant writers and managers to apply for government and private foundation funding opportunities, and building better communications to explain the importance of our work to corporate and individual donors.

Board Development

Another observation that Holly had since joining was that we had, by general nonprofit standards, a very small board of just 7 directors. While we do have some committees which have (very much appreciated!) volunteers from outside the board, our officers are usually appointed from within the board, and many board members end up serving on multiple committees and wearing several hats. It also means the number of perspectives on the board is limited and less representative of the diverse contributors and users that make up the GNOME community.

Holly has been working with the board and the governance committee to reduce how much we ask from individual board members, and improve representation from the community within the Foundation's governance. Firstly, the board has decided to increase its size from 7 to 9 members, effective from the upcoming elections this May & June, allowing more voices to be heard within the board discussions. After that, we're going to be working on opening up the board to more participants, creating non-voting officer seats to represent certain regions or interests from across the community, and take part in committees and board meetings. These new non-voting roles are likely to be appointed with some kind of application process, and we'll share details about these roles and how to be considered for them as we refine our plans over the coming year.

Elections

We're really excited to develop and share these plans and increase the ways that people can get involved in shaping the Foundation's strategy and how we raise and spend money to support and grow the GNOME community. This brings me to my final point, which is that we're in the run-up to the annual board elections, which take place just before GUADEC. Because of the expansion of the board, and four directors coming to the end of their terms, we'll be electing 6 seats this election. It's really important to Holly and the board that we use this opportunity to bring some new voices to the table, leading by example in growing and better representing our community.

Allan wrote in the past about what the board does and what's expected from directors. As you can see we're working hard on reducing what we ask from each individual board member by increasing the number of directors, and bringing additional members in to committees and non-voting roles. If you're interested in seeing more diverse backgrounds and perspectives represented on the board, I would strongly encourage you to consider standing for election and to reach out to a board member to discuss their experience.

Thanks for reading! Until next time.

Best Wishes,
Rob
President, GNOME Foundation

Update 2024-04-27: It was suggested in the Discourse thread that I clarify the interaction between the break-even budget and the 1M EUR committed by the STF project. This money is received in the form of a contract for services rather than a grant to the Foundation, and must be spent on the development areas agreed during the planning and application process. It's included within this year's budget (October 23 - September 24) and is all expected to be spent during this fiscal year, so it doesn't have an impact on the Foundation's reserves position. The Foundation retains a small % fee to support its costs in connection with the project, including the new requirement to have our accounts externally audited at the end of the financial year. We are putting this money towards recruitment of an administrative assistant to improve financial and other operational support for the Foundation and community, including the STF project and future development initiatives.

(also posted to GNOME Discourse, please head there if you have any questions or comments)

26 Apr 2024 10:39am GMT

25 Apr 2024

Mike Blumenkrantz: Startup

It Happened Again.

I've been seeing a lot of ultra technical posts fly past my news feed lately and I'm tired of it. There's too much information out there, too many analyses of vague hardware capabilities, too much handwaving in the direction of compiler internals.

It's too much.

Take it out. I know you've got it with you. I know all my readers carry them at all times.

pastamaker.jpg

That's right.

It's time to make some pasta.

Everyone understands pasta.

Target Locked

Today I'll be firing up the pasta maker on this ticket that someone nerdsniped me with. This is the sort of simple problem that any of us smoothbrains can understand: app too slow.

Here at SGC, we're all experts at solving app too slow by now, so let's take a gander at the problem area.

I'm in a hurry to get to the gym today, so I'll skip over some of the less interesting parts of my analysis. Instead, let's look at some artisanal graphics.

This is an image, but let's pretend it's a graph of the time between when an app is started and when it displays its first frame:

firstframe.png

At the start is when the user launched the app, the body of the arrow is what happens during "startup", and the head of the arrow is when the app has displayed its first frame to the user. The "startup" period is what the user perceives as latency. More technical blogs would break down here into discussions and navel-gazing about "time to first light" and "photon velocity" or whatever, but we're keeping things simple. If SwapBuffers is called, the app has displayed its frame.

Where are we at with this now?

Initial Findings

I did my testing on an Intel Icelake CPU/GPU because I'm lazy. Also because the original ticket was for Intel systems. Also because deal with it, this isn't an AMD blog.

The best way to time this is to:

On iris, the average startup time for gtk4-demo was between 190-200ms.

On zink, the average startup time was between 350-370ms.

Uh-oh.

More Graphics (The Fun Kind)

shaders.png

Initial analysis revealed something very stupid for the zink case: a lot of time was being spent on shaders.

Now, I'm not saying a lot of time was spent compiling shaders. That would be smart. Shaders have to be compiled, and it's not like that can be skipped or anything. A cold run of this app that compiles shaders takes upwards of 1.0 seconds on any driver, and I'm not looking to improve that case since it's rare. And hard. And also I gotta save some work for other people who want to make good blog posts.

The problem here is that when creating shaders, zink blocks while it does some initial shader rewrites and optimizations. This is like if you're going to make yourself a sandwich, before you put smoked brisket on the bread you have to first slice the bread so it's ready when you want to put the brisket on it. Sure, you could slice it after you've assembled your pile of pulled pork and slaw, but generally you slice the bread, you leave the bread sitting somewhere while you find/make/assemble the burnt ends for your sandwich, and then you finish making your sandwich. Compiling shaders is basically the same as making a sandwich.

But slicing bread takes time. And when you're slicing the bread, you're not doing anything else. You can't. You're holding a knife and a loaf of bread. You're physically incapable of doing anything else until you finish slicing.

Similarly, zink can't do anything else while it's doing that shader creation. It's sitting there creating the shaders. And while it's doing that, the rest of the app (or just the main GL thread if glthread is active) is blocked. It can't do anything else. It's waiting on zink to finish, and it cannot make forward progress until the shader creation has completed.

Now this process happens dozens or hundreds of times during app startup, and every time it happens, the app blocks. Its own initialization routines (reading configuration data, setting up global structs and signal handlers, making display server connections, etc.) cannot proceed until GL stops blocking.

If you're unsure where I'm going with this, it's a bad thing that zink is slicing all this bread while the app is trying to make sandwiches.

Improvement

The year is whatever year you're reading this, and in that year we have very powerful CPUs. CPUs so powerful that you can do lots of things at once. Instead of having only two hands to hold the bread and slice it, you have your own hands and then the hands of another 10+ of your clones which are also able to hold bread and slice it. So if you tell one of those clones "slice some bread for me", you can do other stuff and come back to some nicely sliced bread. When exactly that bread arrives is another issue depending on how well you understand the synchronization joke here.

But this is me, so I get all the jokes, and that means I can do something like this:

smrt.png

By moving all that bread slicing into a thread, the rest of the startup operations can proceed without blocking. This frees up the app to continue with its own lengthy startup routines.

After the change, zink starts up in an average of 260-280ms, a 25% improvement.

I know not everyone wants pasta on their sandwiches, but that's where we ended up today.

pastasandwich.jpg

Not The End

That changeset is the end of this post, but it's not the end of my investigation. There's still mysteries to uncover here.

Like why the farfalle is this app calling glXInitialize and eglInitialize?

Can zink get closer to iris's startup time?

We'll find out in a future installment of Who Wants To Eat Lunch?

25 Apr 2024 12:00am GMT

19 Apr 2024

Tomeu Vizoso: Rockchip NPU update 3: Real-time object detection on RK3588

Progress

Yesterday I managed to implement in my open-source driver all the remaining operations so the SSDLite MobileDet model can run on Rockchip's NPU in the RK3588 SoC.

Performance is pretty good at 30 frames per second when using just one of the 3 cores that the NPU contains.


I uploaded the generated video to YouTube at:

You can get the source code at my branch here.

Next steps

Now that we got to this level of usefulness, I'm going to switch to writing a kernel driver suited for inclusion into the Linux kernel, in the drivers/accel subsystem.

There is still lots of work to do, but progress is going pretty fast, though as I write more drivers for different NPUs I will have to split my time among them. At least, until we get more contributors! :)

19 Apr 2024 8:17am GMT

18 Apr 2024

Peter Hutterer: udev-hid-bpf: quickstart tooling to fix your HID devices with eBPF

For the last few months, Benjamin Tissoires and I have been working on and polishing a little tool called udev-hid-bpf [1]. This is the scaffolding required to quickly and easily write, test and eventually fix your HID input devices (mouse, keyboard, etc.) via a BPF program instead of a full-blown custom kernel driver or a semi-full-blown kernel patch. To understand how it works, you need to know two things: HID and BPF [2].

Why BPF for HID?

HID is the Human Interface Device standard and the most common way input devices communicate with the host (HID over USB, HID over Bluetooth, etc.). It has two core components: the "report descriptor" and "reports", both of which are byte arrays. The report descriptor is a fixed burnt-in-ROM byte array that (in rather convoluted terms) tells us what we'll find in the reports. Things like "bits 16 through to 24 is the delta x coordinate" or "bit 5 is the binary button state for button 3 in degrees celsius". The reports themselves are sent at (usually) regular intervals and contain the data in the described format, as the device perceives reality. If you're interested in more details, see Understanding HID report descriptors.

BPF, or more correctly eBPF, is a Linux kernel technology to write programs in a subset of C, compile them and load them into the kernel. The magic thing here is that the kernel will verify them, so once loaded, the program is "safe". And because it's safe it can be run in kernel space, which means it's fast. eBPF was originally written for network packet filters but as of kernel v6.3 and thanks to Benjamin, we have BPF in the HID subsystem. HID actually lends itself really well to BPF because, well, we have a byte array and to fix our devices we need to do complicated things like "toggle that bit to zero" or "swap those two values".

If we want to fix our devices we usually need to do one of two things: fix the report descriptor to enable/disable/change some of the values the device pretends to support. For example, we can say we support 5 buttons instead of the supposed 8. Or we need to fix the report by e.g. inverting the y value for the device. This can be done in a custom kernel driver but a HID BPF program is quite a lot more convenient.

HID-BPF programs

For illustration purposes, here's the example program to flip the y coordinate. HID BPF programs are usually device specific; we need to know that, e.g., the y coordinate is 16 bits and sits in bytes 3 and 4 (little endian):

SEC("fmod_ret/hid_bpf_device_event")
int BPF_PROG(hid_y_event, struct hid_bpf_ctx *hctx)
{
        s16 y;
        __u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */);

        if (!data)
                return 0; /* EPERM check */

        y = data[3] | (data[4] << 8);
        y = -y;

        data[3] = y & 0xFF;
        data[4] = (y >> 8) & 0xFF;

        return 0;
}
  

That's it. HID-BPF is invoked before the kernel handles the HID report/report descriptor so to the kernel the modified report looks as if it came from the device.

As said above, this is device specific because where the coordinates are in the report depends on the device (the report descriptor will tell us). In this example we want to ensure the BPF program is only loaded for our device (vid/pid of 04d9/a09f), and for extra safety we also double-check that the report descriptor matches.

// The bpf.o will only be loaded for devices in this list
HID_BPF_CONFIG(
        HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 0x04D9, 0xA09F)
);

SEC("syscall")
int probe(struct hid_bpf_probe_args *ctx)
{
        /*
        * The device exports 3 interfaces.
        * The mouse interface has a report descriptor of length 71.
        * So if report descriptor size is not 71, mark as -EINVAL
        */
        ctx->retval = ctx->rdesc_size != 71;
        if (ctx->retval)
                ctx->retval = -EINVAL;

        return 0;
}

Obviously the check in probe() can be as complicated as you want.

This is pretty much it, the full working program only has a few extra includes and boilerplate. So it mostly comes down to compiling and running it, and this is where udev-hid-bpf comes in.

udev-hid-bpf as loader

udev-hid-bpf is a tool to make the development and testing of HID BPF programs simple, and collect HID BPF programs. You basically run meson compile and meson install and voila, whatever BPF program applies to your devices will be auto-loaded next time you plug those in. If you just want to test a single bpf.o file you can udev-hid-bpf install /path/to/foo.bpf.o and it will install the required udev rule for it to get loaded whenever the device is plugged in. If you don't know how to compile, you can grab a tarball from our CI and test the pre-compiled bpf.o. Hooray, even simpler.
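
In practice the whole cycle looks roughly like the sketch below, based on the commands mentioned above. The build directory name and the bpf.o path are just examples, and installing the udev rule will typically need root.

# build and install all bundled HID BPF programs; whatever applies to your
# devices will be auto-loaded the next time you plug them in
meson setup builddir
meson compile -C builddir
sudo meson install -C builddir

# or test a single pre-compiled program (e.g. one grabbed from CI)
sudo udev-hid-bpf install /path/to/foo.bpf.o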

udev-hid-bpf is written in Rust but you don't need to know Rust, it's just the scaffolding. The BPF programs are all in C. Rust just gives us a relatively easy way to provide a static binary that will work on most tester's machines.

The documentation for udev-hid-bpf is here. So if you have a device that needs a hardware quirk or just has an annoying behaviour that you always wanted to fix, well, now's the time. Fixing your device has never been easier! [3].

[1] Yes, the name is meh but you're welcome to come up with a better one and go back in time to suggest it a few months ago.
[2] Because I'm lazy the terms eBPF and BPF will be used interchangeably in this article. Because the difference doesn't really matter in this context, it's all eBPF anyway but nobody has the time to type that extra "e".
[3] Citation needed

18 Apr 2024 4:17am GMT

15 Apr 2024

Simon Ser: Status update, April 2024

Hi!

The X.Org Foundation results are in, and I'm now officially part of the Board of Directors. I hope I can be of use to the community on more organizational issues! Speaking of which, I've spent quite a bit of time dealing with Code of Conduct matters lately. Of course I can't disclose details for privacy, but hopefully our actions can gradually improve the contribution experience for FreeDesktop.Org projects.

New extensions have been merged in wayland-protocols. linux-drm-syncobj-v1 enables explicit synchronization, which is a better architecture than what we have today (implicit synchronization) and will improve NVIDIA support. alpha-modifier-v1 allows Wayland clients to set an alpha channel multiplier on their surfaces; it can be used to implement effects such as fade-in or fade-out without redrawing, and can even be offloaded to KMS. The tablet-v2 protocol we've used for many years has been stabilized.

In other Wayland news, a new API has been added to dynamically resize libwayland's internal buffer. By default, the server-side buffer size is still 4 KiB but the client-side buffer will grow as needed. This should help with bursts (e.g. long format lists) and high poll rate mice. I've added a new wayland-scanner mode to generate headers with only enums to help libraries such as wlroots which use these in their public API. And I've sent an announcement for the next Wayland release, it should happen at the end of May if all goes well.

With the help of Sebastian Wick, libdisplay-info has gained support for more bits, in particular DisplayID type II, III and VII timings, as well as CTA Video Format Preference blocks, Room Configuration blocks and Speaker Location blocks. I've worked on libicc to finish up the parser, next I'd like to add the math required to apply an ICC profile. gamja now has basic support for file uploads (only when pasting a file for now) and hides no-op nickname changes (e.g. from "emersion" to "emersion_" and back).

See you next month!

15 Apr 2024 10:00pm GMT

Christian Gmeiner: hwdb - The only truth

Trusting hardware, particularly the registers that describe its functionality, is fundamentally risky.

tl;dr

The etnaviv GPU stack is continuously improving and becoming more robust. This time, a hardware database was incorporated into Mesa, utilizing header files provided by the SoC vendors.

If you are interested in the implementation details, I recommend checking out this Mesa MR.

Are you employed at VeriSilicon and want to help? You could greatly simplify our work by supplying the community with a comprehensive header that includes all the models you offer.

Last but not least: I deeply appreciate Igalia's passion for open source GPU driver development, and I am grateful to be a part of the team. Their enthusiasm for open source work not only pushes the boundaries of technology but also builds a strong, collaborative community around it.

The good old days

Years ago, when I began dedicating time to hacking on etnaviv, the kernel driver in use would read a handful of registers and relay the gathered information to the user space blob. This blob driver was then capable of identifying the GPU (including model, revision, etc.), supported features (such as DXT texture compression, seamless cubemaps, etc.), and crucial limits (like the number of registers, number of varyings, and so on).

For reverse engineering purposes, this interface is super useful. Imagine if you could change one of these feature bits on a target running the binary blob.

With libvivhook it is possible to do exactly this. From time to time, I am running such an old vendor driver stack on an i.MX 6QuadPlus SBC, which features a Vivante GC3000 as its GPU.

Somewhere, I have a collection of scripts that I utilized to acquire additional knowledge about unknown GPU states activated when a specific feature bit was set.

To explore a simple example, let's consider the case of misrepresenting a GPU's identity as a GC2000. This involves modifying the information provided by the kernel driver to the user space, making the user space driver believe it is interacting with a GC2000 GPU. This scenario could be used for testing, debugging, or understanding how specific features or optimizations are handled differently across GPU models.

export ETNAVIV_CHIP_MODEL="0x2000"
export ETNAVIV_CHIP_REVISION="0x5108"
export ETNAVIV_FEATURES0_CLEAR="0xFFFFFFFF"
export ETNAVIV_FEATURES1_CLEAR="0xFFFFFFFF"
export ETNAVIV_FEATURES2_CLEAR="0xFFFFFFFF"
export ETNAVIV_FEATURES0_SET="0xe0296cad"
export ETNAVIV_FEATURES1_SET="0xc9799eff"
export ETNAVIV_FEATURES2_SET="0x2efbf2d9"
LD_PRELOAD="/lib/viv_interpose.so" ./test-case

If you capture the generated command stream and compare it with the one produced under the correct identity, you'll observe many differences. This is super useful - I love it.

Changing Tides: The Shift in ioctl() Interface

At some point in time, Vivante changed their ioctl() interface and modified the gcvHAL_QUERY_CHIP_IDENTITY command. Instead of providing a very detailed chip identity, they reduced the data set to the following values:

This shift could indeed hinder reverse engineering efforts significantly. At a glance, it becomes impossible to alter any feature value, and understanding how the vendor driver processes these values is out of reach. Determining the function or impact of an unknown feature bit now seems unattainable.

However, the kernel driver also requires a mechanism to verify the existing features of the GPU, as it needs to accommodate a wide variety of GPUs. Therefore, there must be some sort of system or method in place to ensure the kernel driver can effectively manage and support the diverse functionalities and capabilities of different GPUs.

A New Approach: The Hardware Database Dilemma

Let's welcome: gc_feature_database.h, or hwdb for short.

Vivante transitioned to using a database that stores entries for limit values and feature bits. This database is accessed by querying with model, revision, product id, eco id and customer id.

There is some speculation why this move was done. My theory posits that they became frustrated with the recurring cycle of introducing feature bits to indicate the implementation of a feature, subsequently discovering problems with said feature, and then having to introduce additional feature bits to signal that the feature now truly operates as intended. It became far more straightforward to deactivate a malfunctioning feature by modifying information in the hardware database (hwdb). After they began utilizing the hwdb within the driver, updates to the feature registers in the hardware ceased.

Here is a concrete example of such a case that can be found in the etnaviv gallium driver:

screen->specs.tex_astc = VIV_FEATURE(screen, chipMinorFeatures4, TEXTURE_ASTC) &&
                            !VIV_FEATURE(screen, chipMinorFeatures6, NO_ASTC);

Meanwhile, in the etnaviv world there was a hybrid in the making. We stuck with the detailed feature words and found a smart way to convert from Vivante's hwdb entries to our own in-kernel database. There is even a full blown Vivante -> etnaviv hwdb converter.

At that time, I did not fully understand all the consequences this approach would bring - more on that later. So, I dedicated my free time to reverse engineering and tweaking the user space driver, while letting the kernel developers do their thing.

About a year after the initial hwdb landed in the kernel, I thought it might be a good idea to read out the extra id values, and provide them via sysfs to the user space. At that time, I already had the idea of moving the hardware database to user space in mind. However, I was preoccupied with other priorities that were higher on my to-do list, and I ended up forgetting about it.

Challenge accepted

Tomeu Vizoso began to work on teflon and a Neural Processing Unit (NPU) driver within Mesa, leveraging a significant amount of the existing codebase and concepts, including the same kernel driver for the GPU. During this process, he encountered a need for some NPU-specific limit values. To address this, he added an in-kernel hwdb entry and made the limit values accessible to user space.

That's it - the kernel supplies all the values the NPU driver requires. We're finished, aren't we?

It turns out that there are many more NPU-related values that need to be exposed in the same manner, with seemingly no end in sight.

One of the major drawbacks when the hardware database (hwdb) resides in the kernel is the considerable amount of time it takes for hwdb patches to be written, reviewed, and eventually merged into Linus's git tree. This significantly slows down the development of user space drivers. For end users, this means they must either run a bleeding-edge kernel or backport the necessary changes on their own.

For me personally, the in-kernel hardware database should never have been implemented in its current form. If I could go back in time, I would have voiced my concerns.

As a result, moving the hardware database (hwdb) to user space quickly became a top priority on my to-do list, and I began working on it. However, during the testing phase of my proof of concept (PoC), I had to pause my work due to a kernel issue that made it unreliable for user space to trust the ID values provided by the kernel. Once my fix for this issue began to be incorporated into stable kernel versions, it was time to finalize the user space hwdb.

There is only one little but important detail we have not talked about yet. There are vendor specific versions of gc_feature_database.h based on different versions of the binary blob. For instance, there is one from NXP, ST, Amlogic and some more.

Here is a brief look at the differences:

nxp/gc_feature_database.h (autogenerated at 2023-10-24 16:06:00, 861 struct members, 27 entries)
stm/gc_feature_database.h (autogenerated at 2022-12-29 11:13:00, 833 struct members, 4 entries)
amlogic/gc_feature_database.h (autogenerated at 2021-04-12 17:20:00, 733 struct members, 8 entries)

We understand that these header files are generated and adhere to a specific structure. Therefore, all we need to do is write an intelligent Python script capable of merging the struct members into a single consolidated struct. This script will also convert the old struct entries to the new format and generate a header file that we can use.

I'm consistently amazed by how swiftly and effortlessly Python can be used for such tasks. Ninety-nine percent of the time, there's a ready-to-use Python module available, complete with examples and some documentation. To address the C header parsing challenge, I opted for pycparser.

The final outcome is a generated hwdb.h file that looks and feels similar to those generated from the binary blob.

Future proof

This header merging approach offers several advantages:

While working on this topic I decided to do a bigger refactoring with the end goal of providing a struct etna_core_info that lives outside of the gallium driver.

This makes the code future proof and moves the filling of struct etna_core_info directly into the lowest layer - libetnaviv_drm (src/etnaviv/drm).

We have not yet talked about one important detail.

What happens if there is no entry in the user space hwdb?

The solution is straightforward: we fallback to the previous method and request all feature words from the kernel driver. However, in an ideal scenario, our user space hardware database should supply all necessary entries. If you find that an entry for your GPU/NPU is missing, please get in touch with me.

What about the in-kernel hwdb?

The existing system, despite its limitations, is set to remain indefinitely, with new entries being added to accommodate new GPUs. Although it will never contain as much information as the user space counterpart, this isn't necessarily a drawback. For the purposes at hand, only a handful of feature bits are required.

15 Apr 2024 12:00am GMT

12 Apr 2024

Mike Blumenkrantz: Quick Post

Super Fast

Just a quick post to let everyone know that I have clicked merge on the vroom MR. Once it lands, you can test the added performance gains with ZINK_DEBUG=ioopt.
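
For anyone who wants to try it once it lands, the invocation would look something like the sketch below. MESA_LOADER_DRIVER_OVERRIDE=zink is one way to force a GL app onto zink, and the benchmark name is just a placeholder.

# run a GL benchmark on zink with the new I/O optimizations enabled
MESA_LOADER_DRIVER_OVERRIDE=zink ZINK_DEBUG=ioopt ./my-benchmark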

I'll be enabling this by default in the next month or so once a new GL CTS release happens that fixes all the hundreds of broken tests which would otherwise regress. With that said, I've tested it on a number of games and benchmarks, and everything works as expected.

Have fun.

12 Apr 2024 12:00am GMT

04 Apr 2024

Mike Blumenkrantz: Descending

Into The Spiral of Madness

I know what you're all thinking: there have not been enough blog posts this year. As always, my highly intelligent readers are right, and as always, you're just gonna have to live with that because I'm not changing the way anything works. SGC happens when it happens.

And today. As it snows in April. SGC. Is. Happening.

Let's begin.

In The Beginning, A Favor Was Asked

I was sitting at my battlestation doing some very ordinary REDACTED work for REDACTED, and friend of the blog, Samuel "Shader Objects" Pitoiset (he has legally changed his name, please be respectful), came to me with a simple request. He wanted to enable VK_EXT_shader_object for the radv-zink jobs in mesa CI as the final part of his year-long bringup for the extension. This meant that all the tests passing without shader objects needed to also pass with shader objects.

This should've been easy; it was over a year ago that the Khronos blog famously and confusingly announced that pipelines were dead and nobody should ever use them again (paraphrased). A year is more than enough time for everyone to collectively get their shit together. Or so you might think.

Turns out shader objects are hard. This simple ask sent me down a rabbithole the likes of which I had never imagined.

It started normally enough. There were a few zink tests which failed when shader objects were enabled. Nobody was surprised; I wrote the zink usage before validation support had landed and also before anything but lavapipe supported it. As everyone is well aware, lavapipe is the best and most handsome Vulkan driver, and just by using it you eliminate all bugs that your application may have. RADV is not, and so there are bugs.

A number of them were simple:

The list goes on, and longtime followers of the blog are nodding to themselves as they skim the issues, confirming that they would have applied all the same one-liner fixes.

Then it started to get crazy.

Locations, How Do They Work?

I'm a genius, so obviously I know how this all works. That's why I'm writing this blog. Right?

smart-or-blogger.png

Right. Good. So Samuel comes to me, and he hits me with this absolute brainbuster of an issue. An issue so tough that I have to perform an internet search to find a credible authority on the topic. I found this amazing and informative site that exactly described the issue Samuel had posted. I followed the staggering intellect of the formidable author and blah blah blah yeah obviously the only person I'd find writing about an issue I have to solve is past-me who was too fucking lazy to actually solve it.

I started looking into this more deeply after taking a moment to fix a different issue related to location assignment that Samuel was too lazy to file a ticket for and thus has deprived the blog of potential tests that readers could run to examine and debug the issue for themselves. But the real work was happening elsewhere.

Deeper

Now we're getting to the good stuff. I hope everyone has their regulation-thickness safety helmet strapped on and splatter guards raised to full height because you'll need them both.

As I said in Adventures In Linking, nir_assign_io_var_locations is the root of all evil. In the case where shaders have mismatched builtins, the assigned locations are broken. I decided to take the hammer to this. I mean I took the forbidden action, did the very thing that I railed about live at XDC.

Sidebar: at this exact moment, Samuel told me his issue was already fixed.

I added a new pipe cap.

I know. It was a last resort, but I wanted the issue fixed. The result was this MR, which gave nir_assign_io_var_locations the ability to ignore builtins with regard to assigning locations. This would resolve the issue once and for all, as drivers which treat builtins differently could pass the appropriate param to the NIR pass and then get good results.

Problem solved.

Deeper.

I got some review comments which were interesting, but ultimately the problem remained: lavapipe (and maybe some other vulkan drivers) use this pass to assign locations, and no amount of pipe caps will change that.

It was a tough problem to solve, but someone had to do it. That's why I dug in and began examining this MR from the only man who is both a Mesa expert and a Speed Force user, Marek Olšák, to enable his new NIR optimized linker for RadeonSI. This was a big, meaty triangles-go-brrr thing to sink my teeth into. I had to get into a different headspace to figure out what I was even doing anymore.

code-motion.png

The gist of opt_varyings is that you give all the shaders in a pipeline to Marek, and Marek says "trust me, buddy, this is gonna be way faster" and gives you back new shaders that do the same thing except only the vertex shader actually has any code. Read the design document if you want more info.

Now I'm deep into it though, and I'm reading the commits, and I see there's this new lower_mediump_io callback which lowers mediump I/O to 16bit. Which is allowed by GLSL. And I use GLSL, so naturally I could do this too. And I did, and I ran it in zink, and I put it through CTS and OH FUCK OH SHIT OH FUCK WHAT THE FUCK EVEN-

mediump.png

mediump? More Like… Like… Medium… Stupid.

Here's the thing. In GLSL, you can have mediump I/O which drivers can translate to mean 16bit I/O, and this works great. In Vulkan, we have this knockoff brand, dumpster tier VK_KHR_16bit_storage extension which seems like it should be the same, except for one teeny tiny little detail:

• VUID-StandaloneSpirv-Component-04920
  The Component decoration value must not be greater than 3

Brilliant. So I can have up to four 16bit components at a given location. Two whole dwords. Very useful. Great. Just what I wanted. Thanks.

Also, XFB is a thing, and, well, pardon my saying so, but mediump xfb? Fuck right off.

Next Up: IO Lowering-FRONTEND EDITION

With mediump safely ejected from the codebase and my life, I was free to pursue other things. I didn't, but I was free to. And even with Samuel screaming somewhere distant that his issue was already long since fixed, I couldn't stop. There were other people struggling to implement opt_varyings in their own drivers, and as we all know, half of driver performance is the speed with which they implement new features. That meant that, as expected, RadeonSI had a significant lead on me since I'm always just copying Marek's homework anyway, but the hell if I was about to let some other driver copy homework faster than me.

Fans of the blog will recall way, way, way, way back in Q3 '23 when I blogged about very dumb things. Specifically about how I was going to start using "lowered I/O" in zink. Well, I did that. And then I let the smoking rubble cool for a few months. And now it's Q2 '24, and I'm older and unfathomably wiser, and I am about to put this rake into the wheel of my bicycle once more.

In this case, the rake is nir_io_glsl_lower_derefs, which moves all the I/O lowering into the frontend rather than doing it manually. The result is the same: zink gets lowered I/O, and the only difference is that it happens earlier. It's less code in zink, and…

frontend-doit.png

Of course there is no driver but RadeonSI which sets nir_io_glsl_lower_derefs.

clown.png

And, of course, RadeonSI doesn't use any of the common Gallium NIR passes.

clown.png

But surely they'd still work.

clown.png

Surely at least some of them would work.

clown.png

Surely there wouldn't be that many of them.

clown.png

Surely the ones that didn't work would be easy to fix.

clown.png

Surely they wouldn't uncover any other, more complex, more time-consuming issues that would drag in the entire Mesa compiler ecosystem.

clown.png

Wouldn't be worth mentioning at SGC if any of those were true, would it.

SGC vs Old NIR Passes

By now I was pretty deep into this project, which is to say that I had inexplicably vanished from several other tasks I was supposed to be accomplishing, and the only way out was through. But before I could delve into any of the legacy GL compatibility stuff, I had bigger problems.

Namely everything was exploding because I failed to follow the directions and was holding opt_varyings wrong. In the fine print, the documentation for the pass very explicitly says that lower_to_scalar must be set in the compiler options. But did I read the directions? Obviously I did. If you're asking whether I read them comprehensively, however, or whether I remembered what I had read once I was deep within the coding fugue of fixing this damn bug Samuel had given me way back wh

With lower_to_scalar active, I actually came upon the big problem: my existing handling for lowered I/O was inadequate, and I needed to make my code better. Much better.

Originally when I switched to lowered I/O, I wrote some passes to unclown I/O back to variables and derefs. There was one NIR pass that ran early on to generate variables based on the loads and stores, and there was a second that ran just before spirv translation to convert all the load/store intrinsics back to load/store derefs. This worked great.

But it didn't work great now! Obviously it wouldn't, right? I mean, nothing in this entire compiler stack ever works, does it? It's all just a giant jenga tower that's one fat-finger away from total and utter-What? Oh, right, heh, yeah, no, I just got a little carried away remembering is all. No problem. Let's keep going. We have to now that we've already come this far. Don't we? I'll stop writing if you stop reading, how about that. No? Well, heh, of course it'd be that way! This is… We're SGC!

So I had this rework_io_vars function, and it. was. BIG. I'm talking over a hundred lines with loops and switches and all kinds of cool control flow to handle all the weird corner cases I found at 4:14am when I was working on it. The way that it worked was pretty simple:

It worked great. Really, there were no known bugs.

The problem with this came with the scalarized frontend I/O lowering, which would create patterns like:

In this scenario, there's indirect access mixed with direct access for the same location, but it's at an offset from the base of the array, and it kiiinda almost works except it totally doesn't because the first instruction has no metadata hint about being part of the second instruction's array. And since the pass iterates over the shader in instruction order, encountering the instructions in this order is a problem whereas encountering them in a different order potentially wouldn't be a problem.

I had two options available to me at that point. The first option was to add in some workarounds to enlarge the scalar to an array when encountering this pattern. And I tried that, and it worked. But then I came across a slightly different variant which didn't work. And that's when I chose the second option.

Burn it all down. The whole thing.

I mean, uh, just-just that one function. It's not like I want to BURN THE WHOLE THING DOWN after staring into the abyss for so long, definitely not.

The new pass! Right, the new pass. The new rework_io_vars pass that I wrote is a sequence of operations that ends up being far more robust than the original. It works something like this:

The "scan" process ends up being a function called loop_io_var_mask which iterates a shader_info mask for a given input/output mode and scans the shader for instructions which occur on each location for that mode. The gathered info includes a component mask as well as array size and fbfetch info-all that stuff. Everything needed to create variables. After the shader is scanned, variables are created for the given location. By processing the indirect mask first, it becomes possible to always detect the above case and handle it correctly.

Problem solved.

Problems Only Multiply

But that's fine, and I am so sane right now you wouldn't believe it if I told you. I wrote this great, readable, bulletproof variable generator, and it's tremendous, but then I tried using it without nir_io_glsl_lower_derefs because I value bisectability, and obviously there was zero chance that would ever work so why would I ever even bother. XFB is totally broken, and there's all kinds of other weird failures that I started examining and then had to go stand outside staring into the woods for a while, and it's just not happening. And nir_io_glsl_lower_derefs doesn't work without the new version either, which means it's gonna be impossible to bisect anything between the two changes.

Totally fine, I'm sure, just like me.

By now, I had a full stack of zink compiler cleanups and fixes that I'd accumulated in the course of all this. Multiple stacks, really. So many stacks. Fortunately I was able to slip them into the repo without anyone noticing. And also without CI slowing to a crawl due to the freedreno farm yet again being in an absolute state.

I was passing CTS again, which felt great. But then I ran piglit, and I remembered that I had skipped over all those Gallium compatibility passes. And I definitely had to go in and fix them.

mesa-doge.png

There were a lot of these passes to fix, and nearly all of them had the same two issues:

This meant I had to add handling for lowered I/O without variables, and then I also had to add generic handling for scalarized versions of both codepaths. Great, great, great. So I did that. And one of them really needed a lot of work, but most of the others were reasonably straightforward.

And then there's lower_clip.

lower_clip is a pass that rewrites shaders to handle user-specified clip planes when the underlying driver doesn't support them. The pass does this by leveraging clipdistance.

And here's the thing about clipdistance: unlike the other builtins, it's an array. But it's treated like a vector. Except you can still access it indirectly like an array. So is it an array or is it a vector? Decades from now, graphics engineers will still be arguing about this stupidity, but now is the time when I need to solve this, and it's not something that I as a mere, singular human, can possibly solve. Hah! There's no way I'd be able to do that. I'd have to be crazy. And I'm… Uh-oh, what's the right way to finish that statement? It's probably fine! Everything's fine!

But when you've got an array that's treated like a vector that's really an array, things get confusing fast, and in NIR there's the compact flag to indicate that you need to reassess your life choices. One of those choices needing reassessment is the use of nir_shader_gather_info, a simple function that populates shader_info with useful metadata after scanning the shader. And here's a pop quiz that I'm sure everyone can pass with ease after reading this far.

How many shader locations are consumed by gl_ClipDistance?

Simple question, right? It's a variably-sized float[] array-vector with up to 8 members, so it consumes up to two locations. Right? No, that's a question, not a rhetorical-But you're using nir_shader_gather_info, and it sees gl_ClipDistance, okay, so how many slots do you expect it to add to your outputs_written bitmask? Is it 8? Or is it 2? Does anybody really know?

Regardless of what you thought, the answer is 8, and you'll get 8, and you'll be happy with 8. And if you're trying to use outputs_written for anything, and you see any of the other builtins within 8 slots of gl_ClipDistance being used, then you should be able to just figure it out that this is clipdistance playing pranks again. Right?

clipdistance-ohyou.png

"It's all fun and games until someone gets too deep into clipdistance" is a proverb oft-repeated among compiler developers. Personally, I went back and forth until I cobbled together something to sort of almost fix the problem, but I posed the issue to the community at large, and now we are having plans with headings and subheadings. You're welcome.

And that's the end of it, right?

Nope

The problem with going in and fixing anything in core Mesa is that you end up breaking everything else. So while I was off fixing Gallium compatibility passes, specifically lower_clip, I ended up breaking freedreno and v3d. Someday maybe we'll get to the bottom of that.

But I'm fast-forwarding, because while I was working on this…

What even is this anymore? Right, I was fixing Samuel's bug. The one about not using opt_varyings. So I had my variable generator functioning, and I had the compat passes working (for me), and CTS and piglit were both passing. Then I decided to try out nir_io_glsl_opt_varyings. Just a little. Just to see what happened.

I don't have any more jokes here. It didn't work good. A lot of things went boom-boom. There were some opt_varyings bugs like these, and some related bugs like this, and there was missing core NIR stuff for zink, and there were GLSL bugs, and also CTS was broken. Also a bunch of the earlier zink stacks of compiler patches were fixing bugs here.

But eventually, over weeks, it started working.

The Deepest Depths

Other than verifying everything still works, I haven't tested much. If you're feeling brave, try out the MR with dependencies (or wait for rebase) and tell me how the perf looks. So far, all I've seen is about a 6000% improvement across the board.

Finally, it's over.

Samuel, your bug is fixed. Never ask me for anything again.

04 Apr 2024 12:00am GMT

02 Apr 2024

Maira Canal: Linux 6.8: AMD HDR and Raspberry Pi 5

The Linux kernel 6.8 came out on March 10th, 2024, bringing brand-new features and plenty of performance improvements on different subsystems. As part of Igalia, I'm happy to have taken an active part in many of the features released in this version, and today I'm going to review some of them.

Linux 6.8 is packed with a lot of great features, performance optimizations, and new hardware support. In this release, we can check the Intel Xe DRM driver experimentally, further support for AMD Zen 5 and other upcoming AMD hardware, initial support for the Qualcomm Snapdragon 8 Gen 3 SoC, the Imagination PowerVR DRM kernel driver, support for the Nintendo NSO controllers, and much more.

Igalia is widely known for its contributions to Web Platforms, Chromium, and Mesa. But, we also make significant contributions to the Linux kernel. This release shows some of the great work that Igalia is putting into the kernel and strengthens our desire to keep working with this great community.

Let's take a deep dive into Igalia's major contributions to the 6.8 release:

AMD HDR & Color Management

You may have seen the release of a new Steam Deck last year, the Steam Deck OLED. What you may not know is that Igalia helped bring this product to life by putting some effort into the AMD driver-specific color management properties implementation. Melissa Wen, together with Joshua Ashton (Valve), and Harry Wentland (AMD), implemented several driver-specific properties to allow Gamescope to manage color features provided by the AMD hardware to fit HDR content and improve gamers' experience.

She has explained all features implemented in the AMD display kernel driver in two blog posts and a 2023 XDC talk:

Async Flip

André Almeida worked together with Simon Ser (SourceHut) to provide support for asynchronous page-flips in the atomic API. This feature targets users who want to present a new frame immediately, even if after missing a V-blank. This feature is particularly useful for applications with high frame rates, such as gaming.

Raspberry Pi 5

Raspberry Pi 5 was officially released in October 2023, and Igalia was ready to bring top-notch graphics support to it. Although we still can't use the RPi 5 with the mainline kernel, it is superb to see some pieces coming upstream. Iago Toral worked on implementing all the kernel support needed for the V3D 7.1.x driver.

With those kernel patches in place, by the time the RPi 5 was released it already shipped a fully compliant OpenGL ES 3.1 and Vulkan 1.2 driver implemented by Igalia.

GPU stats and CPU jobs for the Raspberry Pi 4/5

Apart from the release of the Raspberry Pi 5, Igalia keeps working on improving the whole Raspberry Pi environment. Together with José Maria "Chema" Casanova, I worked on implementing support for GPU stats on the V3D driver. This means that RPi 4/5 users can now see the GPU usage percentage, either per process or globally.
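
The per-process statistics come through the standard DRM fdinfo interface, so anything that can read /proc/<pid>/fdinfo/<fd> for a DRM file descriptor can consume them. Here is a minimal sketch of dumping them for our own process; the exact drm-engine-* key names are driver specific, so treat those as an assumption rather than V3D documentation.

/* Sketch: open a DRM node and dump the fdinfo stats the kernel exports
 * for that file descriptor. Per-process GPU usage shows up as
 * "drm-engine-<name>: <value> ns" lines (cumulative busy time). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDONLY);   /* assumed V3D node */
    if (fd < 0)
        return 1;

    char path[64], line[256];
    snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);

    FILE *f = fopen(path, "r");
    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);   /* drm-driver, drm-engine-*, ... */
    fclose(f);
    close(fd);
    return 0;
}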

I also worked, together with Melissa, on implementing CPU jobs for the V3D driver. As the Broadcom GPU isn't capable of performing some operations, the Vulkan driver uses the CPU to compensate for them. To avoid stalls in job submission, CPU jobs are now part of the kernel and can easily be synchronized with synchronization objects.

If you are curious about the CPU job implementation, you can check this blog post.
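
For readers who have not met them before, the "synchronization objects" here are the generic DRM syncobj primitive rather than anything V3D specific. A tiny hedged sketch of the libdrm side, creating a syncobj and waiting for whatever job eventually signals it:

/* Sketch: generic DRM syncobj usage. A submitted job (GPU or CPU) gets
 * a syncobj attached as its out-fence; a waiter blocks on it like this. */
#include <stdint.h>
#include <xf86drm.h>

static int make_syncobj(int fd, uint32_t *handle)
{
    return drmSyncobjCreate(fd, 0, handle);
}

static int wait_for_job(int fd, uint32_t handle, int64_t deadline_ns)
{
    /* deadline_ns is an absolute CLOCK_MONOTONIC timestamp; the
     * WAIT_FOR_SUBMIT flag also waits for a fence to be attached first. */
    return drmSyncobjWait(fd, &handle, 1, deadline_ns,
                          DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT, NULL);
}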

Other Contributions & Fixes

Sometimes we don't contribute a major feature to a release, but we can still help by improving documentation and sending fixes. André also contributed to this release by documenting the different AMD GPU reset methods, making them easier to understand for future users.

During Igalia's efforts to improve the general user experience on the Steam Deck, Guilherme G. Piccoli noticed a message in the kernel log and readily provided a fix for this PCI issue.

Outside of the Steam Deck world, we can also see some of Igalia's work on the Qualcomm Adreno GPUs. Although most of our Adreno-related work happens in user space, Danylo Piliaiev sent a couple of kernel fixes to the msm driver, fixing some hangs and some failing CTS tests.

We also had contributions from our 2023 Igalia CE student, Nia Espera. Nia's project was related to mobile Linux and she managed to write a couple of patches to the kernel in order to add support for the OnePlus 9 and OnePlus 9 Pro devices.

If you are a student interested in open source and would like to have a first exposure to the professional world, check if we have openings for the Igalia Coding Experience. I was a CE student myself and being mentored by an Igalian was an incredible experience.

Check the complete list of Igalia's contributions for the 6.8 release

Authored (57):

André Almeida (2)

Danylo Piliaiev (2)

Guilherme G. Piccoli (1)

Iago Toral Quiroga (4)

Maíra Canal (17)

Melissa Wen (27)

Nia Espera (4)

Signed-off-by (88):

André Almeida (4)

Danylo Piliaiev (2)

Guilherme G. Piccoli (1)

Iago Toral Quiroga (4)

Jose Maria Casanova Crespo (2)

Maíra Canal (28)

Melissa Wen (43)

Nia Espera (4)

Acked-by (4):

Jose Maria Casanova Crespo (2)

Maíra Canal (1)

Melissa Wen (1)

Reviewed-by (30):

André Almeida (1)

Christian Gmeiner (1)

Iago Toral Quiroga (20)

Maíra Canal (4)

Melissa Wen (4)

Tested-by (1):

Guilherme G. Piccoli (1)

02 Apr 2024 11:00am GMT

31 Mar 2024

feedplanet.freedesktop.org

Hari Rana: Coming Out as Trans

Vocabularies

Before I delve into my personal experience, allow me to define several key terms:

Backstory

Allow me to share a little backstory. I come from a neighborhood where being anti-LGBTQ+ was considered "normal" a decade ago. This outlook was quite common in the schools I attended, and I wouldn't be surprised if a significant portion of the people around here are still anti-LGBTQ+ today. Many individuals, including former friends and teachers, have expressed their opposition to LGBTQ+ in the past, which influenced my own view against the LGBTQ+ community at the time.

Due to my previous experiences and the environment I live(d) in, I tried really hard to avoid thinking about my sexuality and gender identity for almost a decade. Every time I thought about my sexuality and gender identity, I'd do whatever I could to distract myself. I kept forcing myself to be as masculine as possible. However, since we humans have a limit, I eventually reached a limit to the amount of thoughts I could suppress.

I always struggled with communicating and almost always felt lonely whenever I was around the majority of people, so I pretended to be "normal" and hid my true feelings. About 5 years ago, I began to spend most of my time online. I met people who are just like me, many of whom I'm still friends with 3-4 years later. At the time, despite my strong biases against LGBTQ+ from my surroundings, I naturally felt more comfortable within the community, far more than I did outside. I was able to express myself more freely and have people actually understand me. It was the only time I didn't feel the need to act masculine. However, despite all this, I was still in the mindset of suppressing my feelings. Truly an egg irl moment

Eventually, I was unable to hold my thoughts anymore, and everything exploded. All I could think about for a few months was my gender identity: my biases from my childhood environment often clashed with me questioning my own identity, and whether I really saw myself as a man. I just had these recurring thoughts and a lot of anxiety about where I was getting these thoughts from, and why.

Since then, my work performance got exponentially worse by the week. I quickly lost interest in my hobbies, and began to distance myself from communities and friends. I often lashed out at people because my mental health was getting worse. My sleep quality was also getting worse, which only worsened the situation. On top of that, I still had to hide my feelings, which continued to exhaust me. All I could think about for months was my gender identity.

After I slowly became comfortable with and accepting of my gender identity, I started having suicidal thoughts on a daily basis, which I was able to endure… until I reached a breaking point once again. I was having suicidal thoughts on a bi-hourly basis. It escalated to hourly, and finally almost 24/7. I obviously couldn't work anymore, nor could I do my hobbies. I needed to hide my pain because of my social anxiety. I also didn't have the courage to call the suicide hotline either. What happened was that I talked to many people, some of whom have encouraged and even helped me seek professional help.

However, that was all in the past. I feel much better and more comfortable with myself and the people I opened up to, and now I'm confident enough to share it publicly 😊

Coming Out‎ ‎🏳️‍⚧️

I identify as agender. My pronouns are any/all - I'll accept any pronouns. I don't think I have a preference, so feel free to call me whatever you want; whatever you think fits me best :)

I'm happy with agender because I feel disconnected from my own masculinity. I don't think I belong at either end of the spectrum (or even in between), so I'm pretty happy that there is something that best describes me.

Why the Need to Come Out Publicly?

So… why come out publicly? Why am I making a big deal out of this?

Simply put, I am really proud of and relieved by discovering myself. For so long, I tried to suppress my thoughts and force myself to be someone I fundamentally was not. While that never worked, I explored myself instead and discovered that I'm trans. However, I also wrote this article to explain how much living in a transphobic environment affected me, even before I discovered myself.

For me, displaying my gender identity is like displaying a username or profile picture. We choose a username and profile picture when possible to give a glimpse of who we are.

I chose "TheEvilSkeleton" as my username because I used to play Minecraft regularly when I was 10 years old. While I don't play Minecraft anymore, it helped me discover my passion: creating and improving things and working together - that's why I'm a programmer and contribute to software. I chose Chrome-chan as my profile picture because I think she is cute and I like cute things :3. I highly value my username and profile picture, the same way I now value my gender identity.

Am I Doing Better?

While I'm doing much better than before, I did go through a depressive episode that I'm still recovering from at the time of writing, and I'm still processing the discovery because of my childhood environment, but I certainly feel much better after discovering myself and coming out.

However, coming out won't magically heal the trauma I've experienced throughout my childhood environment. It won't make everyone around me accept who I am, or even make them feel comfortable around me. It won't drop the amount of harassment I receive online to zero - if anything, I write this with the expectation that I will be harassed and discriminated against more than ever.

There will be new challenges that I will have to face, but I still have to deal with the trauma, and I will have to deal with possible trauma in the future. The best thing I can do is train myself to be mentally resilient. I certainly feel much better coming out, but I'm still worried about the future. I sometimes wish I wasn't trans, because I'm genuinely terrified about the things people have gone through in the past, and are still going through right now.

I know I'm going to have to fight for my life now that I've come out publicly, because apparently the right to live as yourself is still controversial in 2024.

Seeking Help

Of course, I wasn't alone in my journey. What helped me get through it was talking to my friends and seeking help in other places. I came out to several of my friends in private. They were supportive and listened to me vent; they reassured me that there's nothing wrong with me, and congratulated me for discovering myself and coming out.

Some of my friends encouraged and helped me seek professional help at local clinics for my depression. I have gained more confidence in myself; I am now capable of calling clinics by myself, even when I'm nervous. If these suicidal thoughts escalate again, I will finally have the courage to call the suicide hotline.

If you're feeling anxious about something, don't hesitate to talk to your friends about it. Unless you know that they'll take it the wrong way and/or are currently dealing with personal issues, they will be more than happy to help.

I have messaged so many people in private and felt much better after talking. I've never felt so comforted by friends who try their best to be there for me. Some friends have listened without saying anything, while some others have shared their experiences with me. Both were extremely valuable to me, because sometimes I just want (and need) to be heard and understood.

If you're currently trying to suppress your thoughts and really trying to force yourself into the gender you were assigned at birth, like I was, the best advice I can give you is to give yourself time to explore yourself. It's perfectly fine to acknowledge that you're not cisgender (that is, if you're not). You might want to ask your trans friends to help you explore yourself. From experience, it's not worth forcing yourself to be someone you're not.

Closing Thoughts

I feel relieved about coming out, but to be honest, I'm still really worried about the future of my mental health. I really hope that everything will work out and that I'll be more mentally resilient.

I'm really happy that I had the courage to take the first steps, to go to clinics, to talk to people, to open up publicly. It's been really difficult for me to write and publish the article. I'm really grateful to have wonderful friends, and legitimately, I couldn't ask for better friends.

31 Mar 2024 12:00am GMT

28 Mar 2024

feedplanet.freedesktop.org

Christian Schaller: Fedora Workstation 40 – what are we working on

Fedora Workstation 40 Beta has just come out, so I thought I'd share a bit about some of the things we are currently working on for Fedora Workstation, and also some major changes coming in from the community.

Flatpak

Flatpaks have been a key part of our strategy for desktop applications for a while now, and we are working on a multitude of things to make Flatpaks an even stronger technology going forward. Christian Hergert is working on figuring out how applications that require system daemons will work with Flatpaks, using his own Sysprof project as the proof of concept application. The general idea here is to rely on the work that has happened in systemd around sysext/confext/portablectl, trying to figure out how we can get a system service installed from a Flatpak and the necessary bits wired up properly. The other part of this work, figuring out how to give applications permissions that today are handled with udev rules, is being worked on by Hubert Figuière based on earlier work by Georges Stavracas on behalf of the GNOME Foundation, thanks to the sponsorship from the Sovereign Tech Fund. So hopefully we will get both of these important issues resolved soon. Kalev Lember is working on polishing up the Flatpak support in Foreman (and Satellite) to ensure there are good tools for managing Flatpaks when you have a fleet of systems to manage, building on the work of Stephan Bergman. Finally, Jan Horak and Jan Grulich are working hard on polishing up the experience of using Firefox from a fully sandboxed Flatpak. This work is mainly about working with the upstream community to get some needed portals over the finish line and polish up some UI issues in Firefox, like this one.

Toolbx

Toolbx, our project for handling developer containers, is picking up pace, with Debarshi Ray currently working on getting full NVIDIA binary driver support for the containers. One of our main goals for Toolbx atm is making it a great tool for AI development, so getting the NVIDIA & CUDA support squared away is critical. Debarshi has also spent quite a lot of time cleaning up the Toolbx website, providing easier access to and updating the documentation there. We are also moving to the new Ptyxis (formerly Prompt) terminal application created by Christian Hergert in Fedora Workstation 40. Not only does this give us a great GTK4 terminal, we also believe we will be able to further integrate Toolbx and Ptyxis going forward, creating an even better user experience.

Nova

So as you probably know, we have been the core maintainers of the Nouveau project for years, keeping this open source upstream NVIDIA GPU driver alive. We plan to keep doing that, but the opportunities offered by the availability of the new GSP firmware for NVIDIA hardware mean we should now be able to offer a full featured and performant driver. But co-hosting both the old and the new way of doing things in the same upstream kernel driver has turned out to be counterproductive, so we are now looking to split the driver in two. For older pre-GSP NVIDIA hardware we will keep the old Nouveau driver around as is. For GSP-based hardware we are launching a new driver called Nova. It is important to note here that Nova is thus not a competitor to Nouveau, but a continuation of it. The idea is that the new driver will be primarily written in Rust, based on work already done in the community. We are also evaluating whether some of the existing Nouveau code should be copied into the new driver, since we already spent quite a bit of time trying to integrate GSP there. In the worst case, if we can't reuse code, we will use the lessons learned from Nouveau with GSP to implement the support in Nova more quickly. Contributing to this effort from our team at Red Hat are Danilo Krummrich, Dave Airlie, Lyude Paul, Abdiel Janulgue and Phillip Stanner.

Explicit Sync and VRR

Another exciting development that has been a priority for us is explicit sync, which is especially critical for the NVIDIA driver, but which might also provide performance improvements for other GPU architectures going forward. So a big thank you to Michel Dänzer, Olivier Fourdan, Carlos Garnacho, the NVIDIA folks, Simon Ser and the rest of the community for working on this. This work has just finished upstream, so we will look at backporting it into Fedora Workstation 40. Another major Fedora Workstation 40 feature is experimental support for Variable Refresh Rate, or VRR, in GNOME Shell. The feature was mostly developed by community member Dor Askayo, but Jonas Ådahl, Michel Dänzer, Carlos Garnacho and Sebastian Wick have all contributed with code reviews and fixes. In Fedora Workstation 40 you need to enable it using the command

gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']"

PipeWire

I already covered PipeWire in my post a week ago, but to quickly summarize here too: using PipeWire for video handling is now finally getting to the stage where it is actually happening. Both Firefox and OBS Studio now come with PipeWire support, and hopefully we can also get Chromium and Chrome to start taking a serious look at merging the patches for this soon. What's more, Wim spent time fixing FireWire FFADO bugs, so hopefully for our pro-audio community users this makes their FireWire equipment fully usable and performant with PipeWire. Wim did point out when I spoke to him, though, that the FFADO drivers had obviously never had any other consumer than JACK, so when he tried to allow for more functionality the drivers quickly broke down. Wim has therefore limited the feature set of the PipeWire FFADO module to be an exact match of how these drivers were being used by JACK. If the upstream kernel maintainer is able to fix the issues found by Wim, then we could look at providing a fuller feature set. In Fedora Workstation 40 the de-duplication support for V4L2 vs libcamera devices should work as soon as we update Wireplumber to the new 0.5 release.

To hear more about PipeWire and the latest developments be sure to check out this interview with Wim Taymans by the good folks over at Destination Linux.

Remote Desktop

Another major feature landing in Fedora Workstation 40 that Jonas Ådahl and Ray Strode have spent a lot of effort on is finalizing the remote desktop support for GNOME on Wayland. There has been support for remote connections to already logged-in sessions for a while, but with these updates you can do the login remotely too, so the session does not need to have been started on the remote machine beforehand. This work will also enable 3rd party solutions to do remote logins on Wayland systems, so while I am not at liberty to mention names, be on the lookout for more 3rd party Wayland remoting software becoming available this year.

This work is also important to help Anaconda with its Wayland transition, as remote graphical install is an important feature there. So what you should see is Anaconda using GNOME Kiosk mode and the GNOME remote support to handle this going forward, thus enabling a Wayland-native Anaconda.

HDR

Another feature we have been working on for a long time is HDR, or High Dynamic Range. We wanted to do it properly and also needed to work with a wide range of partners in the industry to make this happen. So over the last year we have been contributing to improving various standards around color handling and acceleration to prepare the ground, and working on and contributing to key libraries needed to, for instance, gather the needed information from GPUs and screens. Things are coming together now, and Jonas Ådahl and Sebastian Wick are going to focus on getting Mutter HDR capable. Once that work is done we are by no means finished, but it should put us close to at least being able to start running some simple use cases (like some fullscreen applications) while we work out the finer points needed for great support for running SDR and HDR applications side by side.

PyTorch

We want to make Fedora Workstation a great place to do AI development and testing. The first step in that effort is packaging up PyTorch and making sure it has working hardware acceleration out of the box. Tom Rix has been leading that effort on our end and you will see the first fruits of that labor in Fedora Workstation 40, where PyTorch should work with GPU acceleration on AMD hardware (ROCm) out of the box. We hope and expect to be able to provide the same for NVIDIA and Intel graphics eventually too, but this is definitely a step-by-step effort.

28 Mar 2024 6:56pm GMT

Tomeu Vizoso: Rockchip NPU update 2: MobileNetV1 is done

Progress

For the last couple of weeks I have kept chipping at a new userspace driver for the NPU in the Rockchip RK3588 SoC.

I am very happy to report that the work has gone really smoothly and I reached my first milestone: running the MobileNetV1 model with all convolutions accelerated by the NPU.

And it not only runs flawlessly, but at the same performance level as the blob.

It has been great having access to the register list as disclosed by Rockchip in their TRM, and to the NVDLA and ONNC documentation and source code. This has allowed for the work to proceed at a pace several times faster than with my previous driver for the VeriSilicon NPU, for which a lot of painstaking reverse engineering had to be done.

Photo by Julien Langlois, CC BY-SA 3.0

tomeu@arm-64:~/mesa$ TEFLON_DEBUG=verbose python3.10 classification.py -i hens.jpg -m mobilenet_v1_1.0_224_quant.tflite -l labels_mobilenet_quant_v1_224.txt -e libteflon.so
Loading external delegate from libteflon.so with args: {}
Teflon delegate: loaded rknpu driver

teflon: compiling graph: 89 tensors 27 operations
...
teflon: compiled graph, took 413 ms
teflon: invoked graph, took 11 ms
teflon: invoked graph, took 11 ms
teflon: invoked graph, took 11 ms
teflon: invoked graph, took 10 ms
teflon: invoked graph, took 10 ms
0.984314: hen
0.019608: cock
0.000000: toilet tissue
0.000000: sea cucumber
0.000000: wood rabbit
time: 10.776ms

Notice how nothing in the invocation refers to the specific driver that TensorFlow Lite is using; that is completely abstracted away by Mesa. Once all these bits are upstream and packaged by distros, one will be able to just download a model in INT8 quantization format and get accelerated inferences going fast, irrespective of the hardware.

Thanks to TL Lim of PINE64 for sending me a QuartzPro64 board to hack on.

Next steps

I want to go back and get my last work on performance for the VeriSilicon driver upstreamed, so it is packaged in distros sooner rather than later.

After that, I'm a bit torn between working further on the userspace driver and implementing more operations and control flow, or starting to write a kernel driver for mainline.

28 Mar 2024 7:47am GMT

17 Mar 2024

feedplanet.freedesktop.org

Simon Ser: Status update, March 2024

Hi! It's that time of the month once again, it seems…

We've finally released Sway 1.9! Note that it uses the new wlroots rendering API, but doesn't use the scene-graph API: we've left that for 1.10. We've also released wlroots 0.17.2 with a whole bunch of bug fixes. Special thanks to Simon Zeni for doing the backporting work!

In other Wayland news, the wlroots merge request to atomically apply changes to multiple outputs has been merged! In addition, another merge request to help compositors allocate the right kind of buffers during modesets has been merged. These two combined should help more multi-output setups on Intel GPUs light up correctly; such setups previously required a workaround (WLR_DRM_NO_MODIFIERS=1). Thanks to Kenny for helping with that work!

I also got around to writing a Sway patch to gracefully handle GPU resets. This should be good news for users of a particular GPU vendor which tends to be a bit trigger happy with resets! Sway will now survive and continue running instead of being frozen. Note, clients may still glitch, need a nudge to redraw, or freeze. A few wlroots patches were also required to get this to work.

With the help of Jean Thomas, Goguma (and pushgarden) has gained support for Apple Push Notification service (APNs). This means that Goguma iOS users can now enjoy instantaneous notifications! This is also important to prove that it's possible to design a standard (as an IRC extension) which doesn't hardcode any proprietary platform (and thus doesn't force each IRC server to have one codepath per platform), but still interoperates with these proprietary platforms (important for usability) and ensures that said proprietary platforms have minimal access to sensitive data (via end-to-end encryption between the IRC server and the IRC client).

It's now also possible to share links and files to Goguma. That is, when using another app (e.g. the gallery, your favorite fediverse client, and many others) and opening the share menu, Goguma will show up as an option. It will then ask which conversation to share the content with, and automatically upload any shared file.

No NPotM this time around sadly. To make up for it, I've implemented refresh tokens in sinwon, and made most of the remaining tests pass in go-mls.

See you next month!

17 Mar 2024 10:00pm GMT

16 Mar 2024

feedplanet.freedesktop.org

Tomeu Vizoso: Rockchip NPU update 1: A walk in the park?

During the past weeks I have paused work on the driver for the Vivante NPU and have started work on a new driver for Rockchip's own NPU IP, as used in SoCs such as the RK3588(S) and RK3568.

The version of the NPU in the RK3588 claims a performance of 6 TOPS across its 3 cores, though from what I have read, people are having trouble making use of more than one core in parallel when using the closed-source driver.

A nice walk in the park

Rockchip, like most other vendors of NPU IP, provides a GPLed kernel driver and pushes out their userspace driver in binary form. The kernel driver is pleasantly simple and relatively up-to-date with regard to its use of internal kernel APIs. The userspace stack, though, is notoriously buggy and difficult to use, with basic features still unimplemented and performance quite a bit below what the hardware should be able to achieve.

To be clear, this is on top of the usual problems related to closed-source drivers. I get the impression that Rockchip's NPU team is really understaffed.

Other people had already looked at reverse-engineering the HW so they could address the limitations and bugs in the closed source driver, and use it in situations not supported by Rockchip. I used information acquired by Pierre-Hugues Husson and Jasbir Matharu to get started, a big thanks to them!

After the initial environment was set up (I had to forward-port their kernel driver to v6.8), I wrote a simple library that can be loaded into the process with LD_PRELOAD and that, by overriding ioctl and other syscalls, lets me dump the buffers that the proprietary userspace driver sends to the hardware.
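
For the curious, the interposition part is the classic LD_PRELOAD trick; the sketch below shows only that mechanism (logging and forwarding ioctl() to the real libc), not the buffer decoding and dumping that the actual library does. Build it with something like gcc -shared -fPIC -o ioctl-shim.so ioctl-shim.c, then run the proprietary stack with LD_PRELOAD=./ioctl-shim.so (the file names here are made up for the example).

/* Sketch: override ioctl(), log the call, then forward it to the real
 * implementation looked up with RTLD_NEXT. Uses the glibc prototype. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>
#include <stdio.h>

int ioctl(int fd, unsigned long request, ...)
{
    static int (*real_ioctl)(int, unsigned long, ...);
    if (!real_ioctl)
        real_ioctl = (int (*)(int, unsigned long, ...))
                     dlsym(RTLD_NEXT, "ioctl");

    va_list ap;
    va_start(ap, request);
    void *arg = va_arg(ap, void *);
    va_end(ap);

    fprintf(stderr, "ioctl(fd=%d, request=0x%lx, arg=%p)\n",
            fd, request, arg);

    return real_ioctl(fd, request, arg);
}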

I started looking at a buffer that, according to the debug logs of the proprietary driver, contained register writes, and when looking at the register descriptions in the TRM, I saw that the hardware had to be closely based on NVIDIA's NVDLA open-source NPU IP.

With Rockchip's (terse) description of the registers, NVDLA's documentation and source code for both the hardware and the userspace driver, I have been able to make progress several times faster than I was able to when working on VeriSilicon's driver (for which I had zero documentation).

Right now I am at the stage at which I am able to correctly execute TensorFlow Lite's Conv2D and DepthwiseConv2D operations with different combinations of input dimensions, weight dimensions, strides and padding. Next is to support multiple output channels.

I'm currently using Rockchip's kernel, but as soon as I'm able to run object detection models with decent hardware utilization, I plan to start writing a new kernel driver for mainlining.

Rockchip's kernel driver has gems such as passing addresses in the kernel address space across the UAPI...

Tests run fast and reliably, even with high concurrency:

tomeu@arm-64:~/mesa$ TEFLON_TEST_DELEGATE=~/mesa/build/src/gallium/targets/teflon/libteflon.so TEFLON_TEST_DATA=src/gallium/targets/teflon/tests LD_LIBRARY_PATH=/home/tomeu/tflite-vx-delegate/build/_deps/tensorflow-build/ ~/.cargo/bin/gtest-runner run --gtest /home/tomeu/mesa/build/src/gallium/targets/teflon/test_teflon --output /tmp -j8 --tests-per-group 1 --baseline ~/mesa/src/gallium/drivers/rocket/ci/rocket-rk3588-fails.txt --flakes ~/mesa/src/gallium/drivers/rocket/ci/rocket-rk3588-flakes.txt --skips ~/mesa/src/gallium/drivers/rocket/ci/rocket-rk3588-skips.txt
Running gtest on 8 threads in 1-test groups
Pass: 0, Duration: 0
Pass: 139, Skip: 14, Duration: 2, Remaining: 2
Pass: 277, Skip: 22, Duration: 4, Remaining: 0
Pass: 316, Skip: 24, Duration: 4, Remaining: 0

You can find the source code in this branch.

16 Mar 2024 11:46am GMT

15 Mar 2024

feedplanet.freedesktop.org

Christian Schaller: PipeWire camera handling is now happening!

We hit a major milestone this week with the long-worked-on adoption of PipeWire camera support finally starting to land!

Not long ago Firefox was released with experimental PipeWire camera support thanks to the great work by Jan Grulich.

Then this week OBS Studio shipped with PipeWire camera support thanks to the great work of Georges Stavracas, who cleaned up the patches and pushed to get them merged based on earlier work by himself, Wim Taymans and Colulmbarius. This means we now have two major applications out there that can use PipeWire for camera handling, and thus two applications whose video streams can be interacted with through patchbay applications like Helvum and qpwgraph.
These applications are important and central enough that having them use PipeWire is in itself useful, but they will now also provide two examples of how to do it for application developers looking to add PipeWire camera support to their own applications; there is no better documentation than working code.

The PipeWire support is also paired with camera portal support. The use of the portal also means we are getting closer to being able to fully sandbox media applications in Flatpaks, which is an important goal in itself. Which reminds me: to test out the new PipeWire support, be sure to grab the official OBS Studio Flatpak from Flathub.

PipeWire camera handling with OBS Studio, Firefox and Helvum.

Let me explain what is going on in the screenshot above, as it is a lot. First of all, you see Helvum there on the right showing all the connections made through PipeWire, both the audio ones and, in yellow, the video ones. You can see how my Logitech BRIO camera is feeding a camera video stream into both OBS Studio and Firefox. You also see my Magewell HDMI capture card feeding a video stream into OBS Studio, and finally gnome-shell providing a screen capture feed that is being fed into OBS Studio. On the left, at the top, you see Firefox running their WebRTC test app capturing my video; just below that you see the OBS Studio image with the direct camera feed in the top left corner, the screencast of Firefox just below it, and finally the 'no signal' image from my HDMI capture card, since I had no HDMI device connected to it as I was testing this.

For those wondering, work is also underway to bring this to the Chromium and Google Chrome browsers, where Michael Olbrich from Pengutronix has been pushing to get patches written and merged. He did a talk about this work at FOSDEM last year, as you can see from these slides, with this patch being the last step to get this working there too.

The move to PipeWire also prepared us for the new generation of MIPI cameras being rolled out in new laptops and helps push work on supporting those cameras towards libcamera, the new library for dealing with the new generation of complex cameras. This of course ties in well with the work that Hans de Goede and Kate Hsuan have been doing recently, along with Bryan O'Donoghue from Linaro, on providing an open source driver for MIPI cameras, and of course the incredible work by Laurent Pinchart and Kieran Bingham from Ideas on Board on libcamera itself.

The PipeWire support is of course fresh, and I am sure we will find bugs and corner cases that need fixing as more people test out the functionality in both Firefox and OBS Studio; there are also some interface annoyances we are working to resolve. For instance, since PipeWire supports both V4L2 and libcamera as backends, you currently get double entries in your selection dialogs for most of your cameras. Wireplumber has implemented de-duplication code which will ensure that only the libcamera listing shows for cameras supported by both V4L2 and libcamera, but it is only part of the development version of Wireplumber and will thus land in Fedora Workstation 40, so until that is out you will have to deal with the duplicate options.

Camera selection dialog

We are also trying to figure out how to better deal with infrared cameras that are part of many modern webcams. Obviously you usually do not want to use an IR camera for your video calls, so we need to figure out the best way to identify them and ensure they are clearly marked and not used by default.

Another good recent PipeWire tidbit: with the PipeWire 1.0.4 release, PipeWire maintainer Wim Taymans also fixed up the FireWire FFADO support. The FFADO support had been in there for some time, but after seeing Venn Stone do some thorough tests and find issues, we decided it was time to bite the bullet and buy some second-hand FireWire hardware for Wim to be able to test and verify things himself.

Focusrite firewire device

Once the Focusrite device I bought landed at Wim's house, he got to work and cleaned up the FFADO support, making it both work and perform well.
For those unaware, FFADO is a way to use FireWire devices without going through ALSA and is popular among pro-audio folks because it gives lower latencies. FireWire is of course a relatively old technology at this point, but the audio equipment is still great and many audio engineers have a lot of these devices, so with this fixed you can plop a FireWire PCI card into your PC and suddenly all those old FireWire devices get a new lease on life on your Linux system. And you can buy these devices in places like eBay or Facebook Marketplace for a fraction of their original cost. In some sense this demonstrates the same strength of PipeWire as the libcamera support: in the libcamera case it gives Linux applications a way to smoothly transition to a new generation of hardware, and in the FireWire case it lets new applications keep using older hardware.

So all in all it's been a great few weeks for PipeWire and for Linux Audio AND Video. If you are an application maintainer, be sure to look at how you can add PipeWire camera support to your application, and of course get that application packaged up as a Flatpak for people using Fedora Workstation and other distributions to consume.

15 Mar 2024 4:30pm GMT