28 May 2020

feedplanet.freedesktop.org

Christian Schaller: Into the world of Robo vacuums and Robo mops

So this is a blog post not related to Fedora or Red Hat, but rather about my personal experience with getting a robo vacuum and a robo mop into the house.

So about two months ago my wife and I decided to get a robo vacuum while shopping at Costco (a US wholesaler outfit). So we brought home the iRobot Roomba 980. Over the next week we ended up also getting the newer iRobot Roomba i7+ and the iRobot Braava m6 mopping robot. Our dream was that we would never have to vacuum or mop again, instead leaving that to our new robots to handle. With two little kids, being able to cut that work from our todo list seemed like a dream come true.

I feel that whenever you get into a new technology it takes some time with your first product in that category to understand what questions to ask and what considerations to make. For instance, I feel a lot more informed and confident in my knowledge about electric cars having owned a Nissan Leaf for a few years now (enough to wish I had a Tesla instead, for instance :). I guess our experience with robot vacuums here is similar.

Anyway, if you are considering buying a robot vacuum or mop I think the first lesson we learned is that it is definitely not a magic solution. You have to prepare your house quite a bit before each run, from obvious things like tidying up anything on the floor, like the kids' legos, to discovering that certain furniture, like the IKEA Poang chairs, are mortal enemies of your robo vacuum. We had to put our chair on top of the sofa as the Roomba would get stuck on it every time we tried to vacuum the floor. Also the door mat in front of our entrance door kept having its corners sucked into the vacuum, getting it stuck. Anyway, our lesson learned is that vacuuming (or mopping) is not something we can do on an impulse or easily on a schedule, as it takes quite a bit of preparation. If you don't have small kids leaving random stuff all over the house all the time you might be able to just set the vacuum on a schedule, but for us that has turned out to be a big no :). So in practice we only vacuum at night now, when my wife and I have had time to prep the house after the kids have gone to bed.

It is worth noting that we have only kept one vacuum. We got the i7+ after we got the 980, after realizing that the 980 didn't have features like the smart map, which allows you to for instance vacuum specific rooms. The i7+ also had other niceties like self-emptying and it was supposed to be quieter (which is nice when you run it at night). However in our experience it also had a weaker vacuum, so we felt it left more crap on the floor than the older 980 model. So in the end we returned the i7+ in favour of the 980, just because we felt it did a better job at vacuuming. It is quite loud though, so we can hear it very loud and clear up on the second floor while trying to fall asleep. So if you need a quiet house to sleep, this setup is not for you.

Another lesson we learned is that the vacuums and mops do not work great in darkness, so we now have to leave the light on downstairs at night when we want to vacuum or mop the floor. We should be able to automate that using Google Home, so Google Home could turn on the lights, start the vacuum and then, once done, turn off the lights again. We haven't actually gotten around to testing that yet though.

As for the mop, I would say that it is not a replacement for mopping yourself, but it can reduce the frequency of you mopping yourself and thus help maintain a nice clean floor for longer after you have done a full manual mop yourself. Also the m6 is super sensitive to edges, which I assume is to avoid it trying to mop your rugs and mats, but it also means that it can not traverse even small thresholds. So for us, who have small thresholds between our kitchen area and the rest of the house, we have to carry the mop over the thresholds and mop the rest of the first floor as a separate action, which is a bit of an annoyance now that we are running these things at night. That said, the kitchen is the one room which needs mopping more regularly, so in some sense the current setup where the Roomba vacuums the whole first floor and the Braava mop mops just the kitchen is a workable solution for us. One nice feature here is that they can be set up to run in order, so the mop will only start once the vacuum is done (that feature is the main reason we haven't tested out other brands' mops, which might handle the threshold situation better).

So to conclude, would I recommend robot vacuums and robot mops to other parents with young kids? I would say yes, it has definitely helped us keep the house cleaner and nicer and let us spend less time cleaning the house. But it is not a miracle cure in any way or form; it still takes time and effort to prepare and set up the house, and sometimes you still need to do things yourself, especially the mopping, to get things really clean. As for the question of iRobot versus other brands I have no input as I haven't really tested any other brands. iRobot is a local company, so their vacuums are available in a lot of stores around me and I drive by their HQ on a regular basis, so that is the more or less random reason I ended up with their products as opposed to competing ones.

28 May 2020 4:37pm GMT

25 May 2020

feedplanet.freedesktop.org

Roman Gilg: Wrapland redone

The KWinFT project with its two major open source offerings KWinFT and Wrapland was announced one month ago. This made quite a few headlines back then, but I decided to keep quiet afterwards and push the project forward silently on the technical side.

Now I am pleased to announce the release of a beta version for the next stable release 5.19 in two weeks. The highlights of this release are a complete redesign of Wrapland's server library and two more projects joining KWinFT.

Wrapland Server library redesign

One of the goals of KWinFT is to facilitate large, disruptive changes to the internal structure and technological base of its open source offerings. As mentioned one month ago in the project announcement, these changes include pushing back the usage of Qt in lower-level libraries and instead making use of modern C++ to its full potential.

We achieved the first milestone on this route in an impressively short timeframe: the redesign of Wrapland's server library, improving the encapsulation of external libwayland types and providing template-enhanced meta-classes for easy extension with new functionality in the future.

This redesign work was organized on a separate branch and merged this weekend into master. In the end it included over 200 commits and 40'000 changed lines. Here I have to thank in particular Adrien Faveraux, who joined KWinFT shortly after its announcement and contributed several class refactors. Our combined work enabled us to deliver this major redesign already with the upcoming release.

Aside from the redesign I used this opportunity to add clang-based tools for static code analysis: clang-format and clang-tidy. In addition to our autotests, which run with and without sanitizers, Wrapland's CI pipelines now provide efficient means for handling contributions through GitLab merge requests and for checking back on the result after a merge. You can see a full pipeline with linters, static code analysis, project build and autotests passing in the article picture above, or check it out here directly in the project.
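If you want to run the same kind of checks locally before opening a merge request, the two tools boil down to invocations roughly like these (a sketch from my side rather than the exact CI configuration; the file paths and the build directory are placeholders):

$ # Check formatting without modifying files, failing on violations
$ clang-format --dry-run --Werror server/*.cpp server/*.h
$ # Run static analysis against the compile database generated by the build
$ clang-tidy -p build server/display.cpp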

New in KWinFT: Disman and KDisplay

With this release Disman and KDisplay join the KWinFT project. Disman is a fork of libkscreen and KDisplay one of KScreen. KScreen is the main UI in a KDE Plasma workspace for display configuration, and I was its main contributor and maintainer over the last few years.

Disman can be installed in parallel with libkscreen. For KDisplay, on the other hand, it is recommended to remove KScreen when KDisplay is installed, so that the two daemons do not both try to meddle with the display configuration at the same time. KDisplay can make use of plugins for KWinFT, KWin and wlroots, so you could also use KDisplay as a general replacement.

Forking libkscreen and KScreen to Disman and KDisplay was an unwanted move from my side because I would have liked to keep maintaining them inside KDE. But my efforts to integrate KWinFT with them were not welcomed by some members of the Plasma team. Form your own opinion by reading the discussion in the patch review.

I am not happy about this development, but I decided to make the best out of a bad situation. After forking and rebranding I directly created CI pipelines for both projects, which now also run linters, project builds and autotests on all merge requests and branch pushes. And I defined some more courageous goals for the two projects now that I have more control.

One would think after years of being the only person putting real work into KScreen I would have had this kind of freedom also inside KDE but that is not how KDE operates.

Does it need to be this way? What are arguments for and against it? That is a discussion for another article in the future.

Very next steps

There is an overall technical vision I am following with KWinFT: building a modern C++ framework for Wayland compositor creation. A framework that is built up from independent yet well interacting small libraries.

Take a look at this task for an overview. The first one of these libraries that we have now put work into was Wrapland. I plan for the next one to be the backend library that provides interfacing capabilities with the kernel or a host window system the compositor runs on, which in most cases means talking to the Direct Rendering Manager.

The work in Wrapland is not finished though. After the basic representation of Wayland objects has been improved we can push further by layering the server library like this task describes. The end goal here is to get rid of the Qt dependency and make it an optional facade only.

How to try out or contribute to KWinFT

You can try out KWinFT on Manjaro. At the moment you can install KWinFT and its dependencies on Manjaro's unstable images but it is planned to make this possible also in the stable images with the upcoming 5.19 stable release.

I explicitly recommend the Manjaro distribution nowadays to any user, from people new to Linux to experts. I have Manjaro running on several devices and I am very pleased with Manjaro's technology, its development pace and its community.

If you are an advanced user you can also use Arch Linux directly and install a KWinFT AUR package that builds KWinFT and its dependencies directly from Git. I hope a package of KWinFT's stable release will also soon be available from Arch's official repositories.

If you want to contribute to one of the KWinFT projects take a look at the open tasks and come join us in our Gitter channel. I am very happy that already several people joined the project who provide QA feedback and patches. There are also opportunities to work on DevOps, documentation and translations.

I am hoping KWinFT will be a welcoming place for everyone interested in high-quality open source graphics technology. A place with low entry barriers and many opportunities to learn and grow as an engineer.

25 May 2020 10:00am GMT

20 May 2020

feedplanet.freedesktop.org

Melissa Wen: Everyone makes a script

Meanwhile, in GSoC:

I took the second week of Community Bonding to make some improvements to my development environment. As I have reported before, I use a QEMU VM to develop kernel contributions. I was initially using an Arch VM for development; however, at the beginning of the year, I reconfigured it to use a Debian VM, since my host is a Debian installation - fewer context changes. In that move, some ends were left loose and I did some workarounds; well… better to round them off.

I also use kworkflow (KW) to ease most of the non-coding tasks included in day-to-day Linux kernel development. KW automates repetitive steps of a developer's life, such as compiling and installing my kernel modifications; finding information to format and send patches correctly; mounting or remotely accessing a VM, etc. During the time that preceded the GSoC project submission, I noticed that the feature of installing a kernel inside the VM was incomplete. At that time, I started to use the "remote" option as a palliative. Therefore, I spent the last days learning about more features and how to hack kworkflow to improve my development environment (and send the changes back to the kw project).

I started by sending a minor fix to an alert message:

kw: small issue on u/mount alert message

Then I expanded the "explore" feature - looking for a string in directory contents - by adding the GNU grep utility in addition to the already used git grep. I gathered many helpful suggestions for this patch, and I applied them, together with the reviews received, in a new version:

src: add grep utility to explore feature

Finally, after many hours of searching, reading and learning a little about guestfish, grub, initramfs-tools and bash, I could create the first proposal of code changes that enable kw to automate the build and install of a kernel in a VM:

add support for deployment in a debian-VM

The main barrier to this feature was figuring out how to update grub on the VM without running the update-grub command via ssh. First, I thought about adding a custom file with a new boot entry. Thinking and researching a little more, I realized that guestfish could solve the problem and, following this logic, I found a blog post describing how to run "update-grub" with guestfish. From that, I made some adaptations that produced the solution.
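For the curious, the core of that trick boils down to something like the following (a simplified sketch, not kw's actual code; the image path is just an example, and the VM must be powered off since guestfish modifies the disk image directly):

$ # Inspect the image, mount its filesystems (-i) and run update-grub
$ # inside the guest through its own shell - no boot and no ssh needed
$ guestfish --rw -a ~/vms/debian-dev.qcow2 -i sh 'update-grub'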

However, in addition to updating grub, the feature still lacked some steps to properly install the kernel on the VM. I figured out the missing code by visiting an old FLUSP tutorial that describes step by step how to compile and install the Linux kernel inside a VM. I also used the implementation of the "remote" mode of "kw deploy" to wrap things up.

Now I use kw to automatically compile and install a custom kernel on my development VM. So, time to sing: "Ooh, that's why I'm easy; I'm easy as Sunday morning!"

Maybe not now. It's time to learn more about IGT tests!

20 May 2020 6:00pm GMT

Dave Airlie (blogspot): DirectX on Linux - what it is/isn't

This morning I saw two things that were Microsoft and Linux graphics related.

https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/

a) DirectX on Linux for compute workloads
b) Linux GUI apps on Windows

At first I thought these were related, but it appears at least presently these are quite orthogonal projects.

First up, to clarify for the people who jump to insane conclusions:

The DX on Linux is a WSL2-only thing. Microsoft are not in any way bringing DX12 to Linux outside of the Windows environment. They are also in no way open sourcing any of the DX12 driver code. They are recompiling the DX12 userspace drivers (from GPU vendors) into Linux shared libraries, and running them on a kernel driver shim that transfers the kernel interface up to the closed source Windows kernel driver. This is in no way useful for having DX12 on Linux baremetal or anywhere other than in a WSL2 environment. It is not useful for Linux gaming.

Microsoft have submitted to the upstream kernel the shim driver to support this. This driver exposes their D3DKMT kernel interface from Windows over virtual channels into a Linux driver that provides an ioctl interface. The kernel drivers are still all running on the Windows side.

Now I read the Linux GUI apps bit and assumed that these things were the same; well, it turns out the DX12 stuff doesn't address presentation at all. It's currently only for compute/ML workloads using CUDA/DirectML. There isn't a way to put the results of DX12 rendering from the Linux guest applications onto the screen at all. The other project is a wayland/RDP integration server that connects Linux apps via wayland to an RDP client on the Windows display. Integrating that with DX12 will be a tricky project, and then integrating that upstream with the Linux stack is another step completely.

Now I'm sure this will be resolved, but it has certain implications for how the driver architecture works and how much of the rest of the Linux graphics ecosystem you have to interact with, and that means that the current driver might not be a great fit in the long run and upstreaming it prematurely might be a bad idea.

From my point of view the kernel shim driver doesn't really bring anything to Linux, it's just a tunnel for some binary data between a host windows kernel binary and a guest linux userspace binary. It doesn't enhance the Linux graphics ecosystem in any useful direction, and as such I'm questioning why we'd want this upstream at all.

20 May 2020 12:01am GMT

19 May 2020

feedplanet.freedesktop.org

Peter Hutterer: xisxwayland checks for Xwayland ... or not

One of the more common issues we encounter debugging things is that users don't always know whether they're running on a Wayland or X11 session. Which I guess is a good advertisement for how far some of the compositors have come. The question "are you running on Xorg or Wayland" thus comes up a lot and suggestions previously included things like "run xeyes", "grep xinput list", "check xrandr" and so on and so forth. None of those are particularly scriptable, so there's a new tool around now: xisxwayland.

Run without arguments it simply exits with exit code 0 if the X server is Xwayland, or 1 otherwise. Which means you can use it like this:


$ cat my-xorg-only-script.sh
#!/bin/bash

if xisxwayland; then
echo "This is an Xwayland server!";
exit 1
fi

...

Or, in the case where you have a human user (gasp!), you can ask them to run:


$ xisxwayland --verbose
Xwayland: YES

And even non-technical users should be able to interpret that.

Note that the script checks for Xwayland (hence the name) via the $DISPLAY environment variable, just like any X application. It does not check whether there's a Wayland compositor running but for most use-cases this doesn't matter anyway. For those where it matters you get to write your own script. Congratulations, I guess.
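If you do end up writing such a script, a minimal sketch (my own suggestion, not part of xisxwayland) is to check for the WAYLAND_DISPLAY environment variable that compositors export into the session:

$ cat am-i-on-wayland.sh
#!/bin/bash

if [ -n "$WAYLAND_DISPLAY" ]; then
    echo "Wayland compositor: YES"
else
    echo "Wayland compositor: NO (or not exported into this environment)"
fi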

19 May 2020 10:30am GMT

13 May 2020

feedplanet.freedesktop.org

Melissa Wen: I’m in - GSoC 2020 - X.Org Foundation

I submitted a project proposal to participate in this year's Google Summer of Code (GSoC). As I am curious about the DRM subsystem and have already started working on contributions to it, I was looking for an organization that supports this subsystem. So, I applied to the X.Org Foundation and proposed a project to Improve VKMS using IGT GPU Tools. Luckily, on May 4th, I received an e-mail from GSoC announcing that I was accepted! Happy happy happy!

Observing a conversation in the #dri-devel channel, I realized that my project was the only one accepted for X.Org. Wow! I have no idea why the organization has only one intern this year and, even more, why that one is me! Immediately after I received the result, Trevor Woerner congratulated me and kindly announced my project on his Twitter profile! It was fun to know that he enjoyed the things that I randomly posted on my blog, and it was so nice to see that he read what I wrote!

Advice? I'm not sure I have any, but I can report my submission experience

From time to time, someone appears on the communication channels of FLUSP (FLOSS@USP) asking how to participate in GSoC or Outreachy. They are usually trying to clarify the rules of participation and/or to obtain reports from former participants. In my opinion, we have in Brazil many highly qualified IT people who, for some reason, do not feel safe investing in a career abroad. They are very demanding with themselves. And I believe that groups like FLUSP can be a means of launching good developers internationally.

Given this contribution window, I consider it worthwhile to report some actions that I took in my application process.

First, take it seriously, and start as soon as possible.

About a year ago, I made my first efforts to understand the project and the Linux community. But that doesn't mean you need a year in advance for that. What I mean is that, for a project like Linux, I had to dedicate enough hours to studying, understanding, and trying to develop my first contributions. And I, in particular, did not feel skilled enough to propose anything to the community. Participating in GSoC was not my initial plan. What I wanted (and still want) is to become a highly skilled developer on the Linux kernel project.

Second, take your first contribution steps.

Many times I prepared the environment, started to work on some things, and then ended up blocked by all the doubts that arose when I dived into all these lines of code. Breathe… Better to start with the basics. If you have someone who can mentor you, one way is to work "with her/him." Find issues or code that s/he participates in, so you can start a conversation and get some guidance. I prefer a "peer-to-peer" introduction to a "broadcast" message to the community. And then, things fly.

When the organizations are announced…

Ask the community (or the developers you have already made a connection with) whether any of the organizations would welcome a project on the system you are interested in. You can propose something that you think is relevant, you can look for project suggestions within organizations, or you can (always) ask. Honestly, I'm not very good at asking. In the case of the X.Org Foundation, you can find tips here, and suggestions and potential mentors here and here.

Write your proposal and share!

As soon as you can.

In your proposal, you should introduce yourself and show what you already know so far, that is, present your portfolio. Also, try to be as transparent as possible about tasks, describe everything you already know about the issues, estimate time, and make a schedule. Finally, propose deliverables in addition to code (documentation, tutorials, blog posts). I think this content production will help your work and also make it easier to monitor your activities. Moreover, it is a way to spread the word and track development issues.

I wasn't fast enough in sharing my project, but I saw how my proposal evolved after I received the community's feedback. The "create and share" feature is available in the GSoC application platform using Google Docs. I think my internship work plan became stronger thanks to sharing my document and applying all the suggestions.

Approved, and now? I am the newest X.Org developer

After the welcome, Siqueira and Trevor also gave me some guidance, reading, and initial activities.

To better understand the X.Org organization and the system I will be working on in the coming months, I have been reading a set of introductory links (still in progress).

Hands on, it has already started!

Well, time to revisit the proposal, organize daily activities, and check the development environment. Before the GSoC results, I was working on some kworkflow issues, and so that's what I'm working on at the moment. I like to study in a "problem-oriented" way, so to dissect kw, I took some issues to try to solve. I believe I can report my progress in an upcoming post.

To the next!

13 May 2020 5:00pm GMT

08 May 2020

feedplanet.freedesktop.org

Peter Hutterer: Wayland doesn't support $BLAH

Gather round children, it's analogy time! First, some definitions:

And now for the definitions we need for our analogy:

And now for the analogy:

The following complaints are technically correct but otherwise rather pointless to make:

And much in the same style, the following complaints are technically correct but otherwise rather pointless to make:

In most cases you may encounter (online or on your setup), saying "Wayland doesn't support $BLAH" is like saying "HTTP doesn't support $BLAH". The problem isn't with Wayland itself, it's a missing piece or bug in the compositor you're using.

Likewise, saying "I don't like Wayland" is like saying "I don't like HTTP".The vast majority of users will have negative feelings towards the browser, not the transport protocol.

[1] Because they're implementations of a display server they can speak multiple protocols and some compositors can also be X11 window managers, much in the same way as you can switch between English and your native language(s).

[2] Because they're implementations of a web browser they can speak multiple protocols and some browsers can also be FTP file managers, much in the same way as... you get the point

08 May 2020 12:40am GMT

07 May 2020

feedplanet.freedesktop.org

Christian Schaller: GNOME is not the default for Fedora Workstation

We recently had a Fedora AMA where one of the questions asked was why GNOME is the default desktop for Fedora Workstation. In the AMA we answered why GNOME had been chosen for Fedora Workstation, but we didn't challenge the underlying assumption built into the way the question was asked, and the answer to that assumption is that it isn't the default. What I mean by this is that Fedora Workstation isn't a box of parts, where you have default options that can be replaced; it's a carefully procured and assembled operating system aimed at developers, sysadmins and makers in general. If you replace one or more parts of it, then it stops being Fedora Workstation and starts being a 'build your own operating system' OS. There is nothing wrong with wanting to build your own operating system, or finding it interesting to do so; I think a lot of us initially got into Linux due to enjoying doing that. And the Fedora project provides a lot of great infrastructure for people who want to build their own operating systems, by themselves or by teaming up with others, which is why Fedora has so many spins and variants available.
The Fedora Workstation project is something we made using those tools and it has been tested and developed as an integrated whole, not as a collection of interchangeable components. The Fedora Workstation project might of course replace certain parts with other parts over time, like how we are migrating from X.org to Wayland. At some point we are going to drop standalone X.org support and only support X applications through XWayland. But that is not the same as if each of our users individually did the same. And while it might be technically possible for a skilled user to still get things moved back onto X for some time after we make the formal deprecation, the fact is that you would no longer be using 'Fedora Workstation'. You would be using a homebrew OS that contains parts taken from Fedora Workstation.

So why am I making this distinction? To be crystal clear, it is not to hate on you for wanting to assemble your own OS; in fact we love having anyone with that passion as part of the Fedora community. I would of course love for you to share our vision and join the Fedora Workstation effort, but the same is true for all the other spins and variant communities we have within the Fedora community too. No, the reason is that we have a very specific goal of creating a stable and well-working experience for our users with Fedora Workstation, and one of the ways we achieve this is by having a tightly integrated operating system that we test and develop as a whole. Because that is the operating system we as the Fedora Workstation project want to make. We believe that doing anything else creates an impossible QA matrix, because if you tell people that 'hey, any part of this OS is replaceable and should still work' you have essentially created a testing matrix for yourself of infinite size. And while as software engineers I am sure many of us find experiments like 'wonder if I can get Fedora Workstation running on a BSD kernel' or 'I wonder if I can make it work if I replace glibc with Bionic' fun and interesting, I am equally sure we all also realize that once we do that we are in self-support territory and that Fedora Workstation, or any other OS you use as your starting point, can't be blamed if your system stops working very well. And replacing such a core thing as the desktop is no different from those other examples.

Having been in the game of trying to provide a high quality desktop experience both commercially in the form of RHEL Workstation and through our community efforts around Fedora Workstation, I have seen and experienced first hand the problems that the mindset of interchangeable desktops creates. For instance, before we switched to the Fedora Workstation branding and it was all just 'Fedora', I experienced reviewers complaining about missing features, features we had actually spent serious effort implementing, because the reviewer decided to review a different spin of Fedora than the GNOME one. Other cases I remember are of customers trying to fix a problem by switching desktops, only to discover that while the initial issue they wanted fixed got resolved by the switch, they now got a new batch of issues that was equally problematic for them. And we were left trying to figure out if we should try to fix the original problem, the new ones, or maybe the problems reported by users of a third desktop option. We also have had cases of users who, just like the reviewer mentioned earlier, assumed something was broken or missing because they were using a different desktop than the one where the feature was added. And at the same time, trying to add every feature everywhere would dilute our limited development resources so much that it made us move slowly and not have the resources to focus on getting ready for major changes in the hardware landscape, for instance.
So for RHEL we now only offer GNOME as the desktop, and the same is true in Fedora Workstation. That is not because we don't understand that people enjoy experimenting with other desktops, but because it allows us to work with our customers, users and hardware partners on fixing the issues they have with our operating system, because it is a clearly defined entity, and on adding the features they need going forward and properly supporting the hardware they are using, as opposed to spreading ourselves so thin that we just run around putting band-aids on the problems reported.
And in the longer run I actually believe this approach benefits those of you who want to build your own OS too, or use an OS built by another team around a different set of technologies, because while the improvements might come in a bit later for you, the work we now have the ability to undertake due to having a clear focus, like our work on adding HiDPI support, getting Wayland ready for desktop use or enabling Thunderbolt support in Linux, makes it a lot easier for these other projects to eventually add support for these things too.

Update: Adam Jackson's oft-quoted response to the old 'linux is about choice' meme is also required reading for anyone wanting a high quality operating system

07 May 2020 5:57pm GMT

04 May 2020

feedplanet.freedesktop.org

Bastien Nocera: Dual-GPU support: Launch on the discrete GPU automatically

*reality TV show deep voice guy*

In 2016, we added a way to launch apps on the discrete GPU.

*swoosh effects*

In 2019, we added a way for that to work with the NVidia drivers.

*explosions*

In 2020, we're adding a way for applications to launch automatically on the discrete GPU.

*fast cuts of loads of applications being launched and quiet*




Introducing the (badly-named-but-if-you-can-come-up-with-a-better-name-you're-ready-for-computers) "PrefersNonDefaultGPU" desktop entry key.

From the specification's website:

If true, the application prefers to be run on a more powerful discrete GPU if available, which we describe as "a GPU other than the default one" in this spec to avoid the need to define what a discrete GPU is and in which cases it might be considered more powerful than the default GPU. This key is only a hint and support might not be present depending on the implementation.

And support for that key is coming to GNOME Shell soon.

TL;DR

Add "PrefersNonDefaultGPU=true" to your application's .desktop file if it can benefit from being run on a more powerful GPU.

We've also added a switcherooctl command to recent versions of switcheroo-control so you can launch your apps on the right GPU from your scripts and tweaks.

04 May 2020 3:52pm GMT

28 Apr 2020

feedplanet.freedesktop.org

Christian Schaller: Fedora Workstation: Swamp draining for 6 years

As Fedora Workstation 32 was released today I ended up looking back at our efforts to drain the swamp over the last 6 years. In April of 2014 I wrote a blog post outlining our vision for the Fedora Workstation effort and what we wanted to achieve with it. I hadn't looked at that blog post in years, but it was interesting going back to it and realizing that while some of the details have changed it is still the vision we are pursuing today; to keep draining the swamp and make Fedora Workstation a top notch operating system for developers and makers in general. Which I guess is one of the hallmarks of a decent vision, that it allows for the details to change without invalidating it.

One of my pet peeves at the time with Linux as a desktop operating system was that so many of the so-called efforts to make Linux user friendly were essentially duct-taping over the problems, creating fragile solutions that often made it harder for us to really move forward. In the years since, we have addressed a lot of major swamp issues with our efforts around HiDPI & Bolt (getting ahead of hardware enablement for new monitors and Thunderbolt devices respectively), Flatpaks, GNOME Software and AppStream (making applications discoverable, deployable and maintainable), Wayland (making your desktop secure and future proof), LVFS and firmware handling (making them easily available for Linux users), the fingerprint reader standard (ensuring your hardware is fully supported), and coming up with ways to improve the lives of developers with improvements to the terminal or Fedora Toolbox, our developer pet container tool.

Working on these and other issues we realized early on that a model where hardware gets enabled in a reactive manner, in response to new laptops being sold, was never going to yield a good result for our users. As long as we followed that model people were bound to always hit issues with laptops as they came out and then have to deal with those issues for the first 6-12 months of their life. This is why I am so excited about our new partnership with Lenovo that we pre-announced on Friday, as it is both the culmination of our efforts over the last 6 years and the starting point of a new era in terms of how we work with hardware makers. So instead of us spending a ton of time trying to reverse engineer basic drivers we can now rely on our hardware partner and their component vendors providing that, and we can instead focus on what I call high level hardware enablement. Meaning that as we see new features coming into laptops and computers we can try to improve the infrastructure in the operating system to be able to take full advantage of said hardware, and we can do so in collaboration with the hardware makers, knowing that once we provide the infrastructure they will ensure drivers and similar are provided to fit into that infrastructure. Our work on fingerprint readers and Thunderbolt support, for instance, provides two great early examples of that.

Anyway, you are probably interested to know some of the new things coming in Fedora Workstation 32, so here are some of my personal highlights:

New lock screen

This is more of a cosmetic change, but one that every user will see upon logging into their Fedora system after a new install or upgrade. The new design features a faded version of your desktop background image and it should also feel smoother, as the password dialog now appears on the lock screen page as opposed to before, where it sort of replaced it. The dialog also tries to inform you more discreetly than before if you're trying to type in the password while the lock screen is on. A big thanks to Allan Day and the GNOME design team for their work here trying to polish this part of the user interface.

GNOME extension app

GNOME Shell extensions are little tweaks and additional features for the desktop that our users have gotten accustomed to and enjoy greatly. Extensions are also the technology that powers the GNOME Classic session, which provides those of our users who want it with a more traditional desktop experience. GNOME Shell extensions have gradually evolved in how we work with them since their inception, from something you install through your web browser to now being handled through GNOME Software. With Fedora Workstation 32 we are making the new GNOME Shell extensions management app available as the next step in the evolution of GNOME Shell extensions, making it simple to turn any given extension on or off and to quickly see which extensions you have installed.

GNOME Extensions handling app

Fedora Toolbox

Fedora Toolbox is our helper for making working with containers for development and testing as easy as it possibly can be. Debarshi Ray and Ondřej Míchal have been hard at work porting Fedora Toolbox from shell to Go for this release. For those wondering why we chose Go as the language, there were basically two reasons: one, we felt that the toolbox had gone as far as it could as a shell script, and two, Go is the language used by all the components we rely on and interact with in the container space, like buildah and podman. We also wanted to make it easy for developers on those projects to contribute by using the same language as they use in their projects.

Fedora Toolbox running on Fedora Workstation 32
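If you haven't tried it yet, the basic workflow is only a couple of commands (a rough sketch; see the toolbox documentation for all the options):

$ # Create a container based on your Fedora release, then step inside it
$ toolbox create
$ toolbox enter
$ # Inside the toolbox you can dnf install compilers, debuggers and so on
$ # without touching the host system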

Performance improvements

Another area that we always try to give some love is general performance improvement. For example, this time around Christian Hergert identified some really bad behavior of GNOME Shell when running on a system with very high I/O. On the face of it GNOME Shell didn't look like it should have been affected, but during some intensive debugging sessions Christian discovered that I/O was triggered by various API calls to do things like string translation. So he put together a set of patches to resolve the high I/O stalls and can now report that GNOME Shell keeps running smooth as silk, even under high disk I/O situations.

PipeWire

Wim Taymans keeps making great strides forward with PipeWire, our tool for creating a unified media handler for audio, pro audio and video. In Fedora Workstation 32 we will be shipping the 0.3 version, which has quite complete Jack support. In fact we are hoping to team up with the Fedora Jam team to finalize the Jack support during the Fedora 32 lifecycle by testing it extensively. We have a lot of Jack apps already working with PipeWire, including a series of important Jack apps that we have put into Flatpaks in Fedora, like Carla. While the support is there in PipeWire in Fedora 32 right now, there is some convenience work we still need to do, but we hope to get that pushed out by next week so that replacing Jack with PipeWire becomes very simple to both do and undo for testing purposes.

The PulseAudio support is the last piece that is still in progress. It works for simple music playback, but it is not a drop-in replacement for PulseAudio yet, so while we had hoped to encourage widespread testing in F32 we will aim at delaying that to F33 in order to polish the PulseAudio support more first. But once it is ready we will make it available for testing in a simple manner, just like the Jack support.

There has also been further work on the video side of PipeWire, adding support for zero-copy video capture. This has reduced the overhead of doing things like screen capturing significantly and should be a nice performance/resource usage improvement for everyone.

Firefox on Wayland

Martin Stransky and Jan Horak have been working hard to improve how Firefox runs and works when used as a Wayland native application, fixing a truckload of bigger and smaller bugs this cycle. We feel that we have turned the corner now in terms of the Wayland version being just as stable and good as the X11 one. In fact we could move beyond just fixing bugs to actually adding features this time around; for instance, Martin Stransky worked on WebGL HW acceleration support, enabling us to have that enabled by default now for the first time. We also made sure to take advantage of the PipeWire zero-copy support to improve your video conferencing applications running under Firefox, which turned out to be even more important than we expected considering Covid-19 has everyone working from home.

Looking forward

We spent a lot of time and energy over the last 6 years to get to where we are now, putting in place a lot of the basic building blocks needed to make Linux a great desktop operating system. And it feels great that just as we kick off the new line of Lenovo laptops running Fedora we are also entering a new phase of development where we can move beyond getting our basic infrastructure in place and really start taking advantage of it to rapidly improve the experience we are providing even more. A good example is the Firefox work mentioned above, where we finally could move on from 'make it work with Wayland and PipeWire' to 'let's take advantage of these new pieces to make Firefox on Linux better'. Another example here is that Adam Jackson is currently investigating how we can improve how Fedora Workstation performs for remote usage. This work includes looking at things like VNC and RDP and commercial offerings and figuring out how we can make our stack work better with such tools, on top of the improvements that PipeWire brings for such usecases.

There is some more heavy lifting needed before our next generation OS architecture, Silverblue, is ready to be our default offering, but it is improving by leaps and bounds each release and already has a loyal following. Personally I am very excited about the fact that we are quickly moving closer to the point where we can make it our default and through that offer features like bulletproof OS updates, factory resets and solid version rollbacks.

On the Flatpak side Owen Taylor and Alex Larsson are putting a lot of final touches on our Red Hat infrastructure. So for RHEL 8.2 we will finally be able to build Flatpaks in RHEL infrastructure and provide a runtime and SDK for our RHEL customers to use. But equally exciting is that we will be able to offer these to the community at large, meaning that we can offer a high quality Flatpak long term support runtime and SDK for ISVs that they can use to target both RHEL users and Fedora and other Linux distributions, in a similar vein to how the Red Hat UBI works. We will also be looking at ways to make getting access to these on Fedora very simple for developers, so that developing towards this runtime becomes quick and easy on your Fedora system. Alex and Owen are also working on an incremental updates feature to be shared between Kubernetes containers and OCI Flatpaks, making both technologies better and updates a lot smaller.

We are also looking at a host of other smaller improvements, many of them in collaboration with our friends at Lenovo, like lap detection (so you can be sure the laptop doesn't burn you), privacy features (like making it harder to read your screen from an angle) and far-field microphones. There are also things like Lennart's HomeD idea, which we will be looking at as a way to improve the end user experience.

So the future is looking bright and I hope to see many new faces in the Fedora community going forward, whether you download Fedora Workstation 32 to install on your own system or join us by buying a Fedora laptop from Lenovo this summer.

28 Apr 2020 3:46pm GMT

26 Apr 2020

feedplanet.freedesktop.org

David Rheinsberg: Inside Specs: ELF Segments and Sections

The ELF data format divides object files into segments and sections, which has long caused confusion. The terms segment and section can be used interchangeably in almost all cases in the English language ([1], [2]). What is often overlooked is that the ELF specification explicitly meant both to mean almost the same thing. They merely provide two views of the same data, but use different terms to allow referring to them more easily.

When we look at the defining specification (gABI: System V Application Binary Interface) we find this quote in the introduction:

Object files participate in program linking (building a program) and program execution (running a program). For convenience and efficiency, the object file format provides parallel views of a file's contents, reflecting the differing needs of those activities.

This is, in my opinion, a crucial detail often overlooked. The ELF data format explicitly provides two views of the same data. The difference between segments and sections is thus not what data they contain, but how they index the same data. The specification goes a step further:

A program header table tells the system how to create a process image. Files used to build a process image (execute a program) must have a program header table; relocatable files do not need one.

A section header table contains information describing the file's sections. Every section has an entry in the table; each entry gives information such as the section name, the section size, and so on. Files used during linking must have a section header table; other object files may or may not have one.

Keep in mind that the program header table is effectively a segment header table. Therefore, the specification explicitly says that these two data views do not have to be present in a specific file. Depending on the use case, the format allows for only segments or only sections.

To summarize, an ELF object file contains data and machine code of a program, which itself is divided into many parts. The ELF format then provides two different views of this same content: segments and sections. However, these are views of the data present in the file, they do not define the content, but merely index it.
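You can look at both views of the same binary side by side with GNU readelf, for example on /bin/ls:

$ # Segment view: the program header table, used to build the process image
$ readelf --segments /bin/ls
$ # Section view: the section header table, used during linking
$ readelf --sections /bin/ls

The segment view also prints a section-to-segment mapping, which illustrates nicely that both tables are merely different indexes into the same file contents.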

As a closing note, we must acknowledge how all this evolved over time, though. While the ELF specification provides this neat dual view, a lot of this freedom is not actually used in most ELF files. Instead, most files are effectively split into many small sections, and the segments merely provide a grouping of sequential sections in the file. Sections have become the tool that drives the data in ELF files, and segments have become a view of that data. But this is a purely artificial interpretation and is not rooted in the ELF data format.

26 Apr 2020 12:00am GMT

24 Apr 2020

feedplanet.freedesktop.org

Christian Schaller: A bold new chapter for Fedora Workstation

So you have probably seen the announcement that Lenovo are launching a set of Fedora Workstation based laptops. I am so happy and proud of this effort as it comes as the culmination of our hard work over the last 6 years to drain the swamp and make Linux a more viable desktop operating system.
I am also so happy and proud that Lenovo was willing to work with us on this effort, as they provide us with an incredible opportunity to reach both new and old Linux users around the globe with these systems, being the world's biggest laptop maker with the widest global reach. One important aspect of this is that Lenovo will provide these laptops through all their sales channels in all their markets. This means you can of course order them online through their website, but it also means companies can order them through Lenovo's business-to-business channels, and it means that in any country where Lenovo is present you can order them. So this is not a North America-only or Europe-only offering, this is truly a global offering.

There are a lot of people who have been involved in helping to make this happen, but special thanks go to Egbert Gracias from Lenovo, who was critical in making this happen, and also a special thanks to Alberto Ruiz, who spearheaded this effort from our side.

Our engineering team here at Red Hat has also been hard at work ensuring we can support these models very well, be that through bugfixes to kernel drivers or by polishing up things like the Linux fingerprint support. As we go forward we hope to build on this relationship to take Linux laptops to the next level, and I am also very happy to say that we have Jared Dominguez on our team now to help us develop better work practices and closer relationships with our hardware partners and original device manufacturers.


Also a special thanks to Jakub Steiner for putting together the little sizzle video above, it was supposed to be used at our booth at Red Hat Summit next week, but with that going virtual we repurposed it for this announcement.

24 Apr 2020 2:38pm GMT

23 Apr 2020

feedplanet.freedesktop.org

Alyssa Rosenzweig: From Bifrost to Panfrost - deep dive into the first render

glmark2-es2 -btexture

In Panfrost's infancy, community members Connor Abbott and Lyude Paul largely reverse-engineered Bifrost and built a proof-of-concept shader dis/assembler. Meanwhile, I focused on the Midgard architecture (Mali T600+), building an OpenGL driver alongside developers like Collaboran Tomeu Vizoso.

As Midgard support has grown - including initial GLES3 support - we have now turned our attention to building a Bifrost driver. We at Collabora got to work in late February, with Tomeu porting the Panfrost command stream, while I built up a new Bifrost compiler.

This week, we've reached our first major milestone: the first 3D renders on Bifrost, including basic texture support!

glmark2-es2 -bshading:shading=phong:model=horse and glmark2-es2 -bbuffer

The interface to a modern GPU has two components, the fixed-function command stream and the programmable instruction set architecture. The command stream controls the hardware, dispatching shaders and containing the state required by OpenGL or Vulkan. By contrast, the instruction set encodes the shaders themselves, as with any programmable architecture. Thus the GPU driver contains two major components, generating the command stream and compiling programs respectively.

From Midgard to Bifrost, there have been few changes to the command stream. After all, both architectures feature approximately the same OpenGL and Vulkan capabilities, and the fixed-function hardware has not required much driver-visible optimization. The largest changes involve the interfaces between the shaders and the command stream, including the titular shader descriptors. Indeed, squinting command stream traces from Midgard and Bifrost look similar - but the long tail of minor updates implies a nontrivial Panfrost port.

But the Bifrost instruction set, on the other hand? A rather different story.

Let's dive in.

Compiler

Bifrost's instruction set was redesigned completely from Midgard's, requiring us to build a free software compiler targeting Bifrost from scratch. Midgard's architecture is characterized as vector and VLIW (Very Long Instruction Word).

Vector and VLIW architectures promise high-performance in theory, and in theory, theory and practice are the same. In practice, these architectures are extremely difficult to compile efficiently for. Automatic vectorization is difficult; if a shader uses a 3-channel 32-bit vector (vec3), most likely the extra channel will go unused, wasting resources. Likewise, scheduling for VLIW architectures is difficult and slots often go unused, again wasting resources and preventing shaders from reaching peak architectural performance.

Bifrost, by contrast, is largely scalar: 32-bit operations work on a single channel at a time (while, as we will see, 16-bit operations are 2-channel vector).

This redesign promises better performance - and a redesign of Panfrost's compiler, too.

Bifrost Intermediate Representation

At the heart of a modern compiler is one or more Intermediate Representations (IRs). In Mesa, OpenGL shaders are parsed into GLSL IR - a tree IR expressing language-visible types - which is converted to NIR - a flat static single assignment IR enabling powerful optimizations. Backends convert NIR to a custom backend IR, whose design seals the fate of a compiler. A poor IR design can impede the growth and harm the performance of the entire compiler, while a well-designed IR enables complex features to be implemented naturally with simple, fast code. There is no one-size-fits-all IR; the design necessarily is built to make certain compiler passes (like algebraic optimization) simple at the expense of others (like register allocation), justifying the use of multiple IRs within a compiler. Further, IR design permeates every aspect of the compiler, so IR changes to a mature compiler are difficult. Both Intel and AMD GPU compilers have required ground-up rewrites to fix IR issues, so I was eager to get the Bifrost IR (BIR) right to begin.

An IR is simply a set of data structures representing a program. Typically backends use a "flat" IR: a list of blocks, where each block contains a list of instructions. But how do we store an instruction?

We could reuse the hardware's instruction format as-is, with abstract variables instead of registers. It's tempting, and for simple architectures, it can work. The initial NIR-to-BIR pass is a bit harder than with abstract instructions, but final code generation is trivial, since the packing is already done.

Unfortunately, real world instruction sets are irregular and as quirky as their compilers' developers. Further, they are tightly packed to be small instead of simple. For final assembly, we will always need to pack the machine formats, but with this design, we also need to unpack them. Worse, the machine irregularities will permeate every aspect of the compiler since they are now embedded into the IR. On Bifrost, for example, most operations have multiple unrelated encodings; this design would force much of the compiler to be duplicated.

So what if the IR is entirely machine-independent, compiling in the abstract and converting to the machine-specific form at the very end? Such IRs are helpful; in Mesa, the machine-independent NIR facilitates sharing of powerful optimizations across drivers. Unfortunately, some design traits really do affect our compiler. Is there a compromise?

Notice the first IR simplifies packing at the expense of the rest of the compiler, whereas the second simplifies NIR-to-BIR at the expense of machine-specific passes. All designs trade complexity in one part of the compiler for another. Hence a good IR simplifies the hardest compiler passes at the expense of the easiest. For us, scheduling and register allocation are NP-complete problems requiring complex heuristics, whereas NIR-to-BIR and packing are linear translations with straightforward rules. Ideally, our IR simplifies scheduling and register allocation, pushing the complexity into NIR-to-BIR and packing. For Bifrost, this yields one golden rule:

A single BIR instruction corresponds to a single Bifrost instruction.

While single instructions may move during scheduling and be rewired by register allocation, the operation is unaffected. On the other hand, within an instruction:

BIR instructions are purely abstract without packing.

By delaying packing, we simplify manipulation. So NIR-to-BIR is complicated by the one-to-many mapping of NIR to Bifrost opcodes with special functions; meanwhile, final code generation is complicated by the pattern matching required to infer packing from abstract instructions. But by pushing this complexity to the edges, the optimizations in between are greatly simplified.

But speaking of IR mistakes, there is one other common issue…

16-bit Support

One oversight I made in designing the Midgard IR - an oversight shared by the IRs of many other compilers in Mesa - is often assuming instructions operate on 32-bit data only. In OpenGL with older Mesa versions, this assumption was true, as the default float and integer types are 32-bit. However, the assumption is problematic for OpenCL, where 8-, 16-, and 64-bit types are first class. Even for OpenGL, it is suboptimal. While the specification mandates minimum precision requirements for operations, fulfillable with 32-bit arithmetic, for shaders using mediump precision qualifiers we may use 16-bit arithmetic instead. About a month ago, Mesa landed support for optimizing mediump fragment shaders to use 16-bit arithmetic, so for Bifrost, we want to make sure we can take advantage of these optimizations.

The benefit of reduced precision is two-fold. First, shader computations need to be stored in registers, but space in the register file is scarce, so we need to conserve it. If a shader runs out of registers, it spills to main memory, which is slow, so by using 16-bit types instead of 32-bit, we can reduce spilling. Second, although Bifrost is scalar for 32-bit, it is 2-channel vector for 16-bit. As mentioned, automatic vectorization is difficult, but if shaders are vectorized to begin with, the compiler can take advantage of this.

As an example, to add the 32-bit vector in R0-R3 to the one in R4-R7, we need code like:

ADD.f32 R0, R0, R4
ADD.f32 R1, R1, R5
ADD.f32 R2, R2, R6
ADD.f32 R3, R3, R7

But in 16-bit mode with vectors R0-R1 and R2-R3:

ADD.v2f16 R0, R0, R2
ADD.v2f16 R1, R1, R3

Notice both register usage and instruction count are halved. How do we support this in Mesa? Mesa pipes 16-bitness through NIR into our backend, so we must ensure types are respected across our backend compiler. To do so, we include types explicitly in our backend intermediate representation, which the NIR-to-BIR pass simply passes through from NIR. Certain backend optimizations have to be careful to respect these types, whereas others work with little change provided the IR is well-designed. Scheduling is mostly unaffected. But where are there major changes?
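
Extending the earlier sketch with explicit type tags (again illustrative names, not the actual BIR fields), NIR-to-BIR copies a type for every destination and source straight from NIR, and later passes check those tags before rewriting anything:

/* Illustrative only: explicit type tags on every IR value, so 16-bit and
 * 32-bit operations cannot be confused by later passes. */

#include <stdbool.h>

enum bir_type { BIR_F16, BIR_F32, BIR_I16, BIR_I32 /* OpenCL adds 8/64-bit */ };

struct bir_typed_instr {
    unsigned op;
    unsigned dest;
    enum bir_type dest_type;      /* copied straight through from NIR */
    unsigned src[3];
    enum bir_type src_type[3];
};

/* Example guard an optimization might use: only rewrite a source when the
 * types agree, so a 32-bit value is never silently used where 16 bits are
 * expected. */
static inline bool bir_types_match(enum bir_type a, enum bir_type b)
{
    return a == b;
}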

Enter register allocation.

Register allocation

A fundamental problem every backend compiler faces is register allocation, mapping abstract IR variables to concrete machine registers:

0 = load uniform #0
1 = load attribute #0
2 = add 0, 1
3 = mul 2, 2

\(\rightarrow\)

R0 = load uniform #0
R1 = load attribute #0
R0 = add R0, R1
R0 = mul R0, R0

Traditionally, register allocation is modeled by a graph, where each IR variable is represented by a node. Any variables that are simultaneously live, in the sense that both of their values will be read later in the program, are said to interfere since they require separate registers to avoid clobbering each other's data. Interference is represented by edges. Then the problem reduces to graph colouring, finding colours (registers) for each node (variable) that don't coincide with the colours (registers) of any nodes connected by edges (interfering variables). Initially, Panfrost used a graph colouring approach.
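
As a toy illustration of the colouring step (a simplified sketch, not Panfrost's actual allocator), a greedy colouring over an interference matrix looks like this; run on the four-variable program above, where only variables 0 and 1 interfere, it produces exactly the allocation shown:

#include <stdbool.h>

#define MAX_VARS 64
#define NUM_REGS 64

/* interference[i][j] is true when variables i and j are live at the same
 * time and therefore need different registers. */
static bool interference[MAX_VARS][MAX_VARS];

/* Greedy colouring: give each variable the lowest register not already
 * taken by an interfering, earlier-coloured variable.  Returns -1 when no
 * register is free, i.e. we would have to spill. */
static int colour(unsigned num_vars, int reg_of[MAX_VARS])
{
    for (unsigned i = 0; i < num_vars; ++i) {
        bool used[NUM_REGS] = { false };

        for (unsigned j = 0; j < i; ++j)
            if (interference[i][j])
                used[reg_of[j]] = true;

        reg_of[i] = -1;
        for (unsigned r = 0; r < NUM_REGS; ++r) {
            if (!used[r]) {
                reg_of[i] = (int)r;
                break;
            }
        }

        if (reg_of[i] < 0)
            return -1; /* out of registers: spill */
    }

    return 0;
}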

However, the algorithm above is difficult to adapt to irregular vector architectures. One alternative approach is to model the register file explicitly, modeling interference as constraints and using a constraint solver to allocate registers. For the above program, letting \(k_i\) denote the register allocated to variable \(i\), there is a single constraint on the registers, \(k_0 \ne k_1\), since \((0, 1)\) is the only pair of interfering nodes, yielding an optimal valid solution \(k_0 = 0, k_1 = 1, k_2 = 0, k_3 = 0\), corresponding to the allocation above.

As-is, this approach is mathematically equivalent to graph colouring. However, unlike graph colouring, it extends easily to model vectorization, enabling per-component liveness tracking, so some components of a vector are allocated to a register while others are reused for another value. It also extends easily to vectors of varying type sizes, crucial for 16-bit support, whereas the corresponding generalization for graph colouring is much more complicated.
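
One way to picture that extension (again just a sketch of the modelling idea, not the real allocator): allocate in 16-bit units, so a 32-bit value occupies two aligned units and a 16-bit value occupies one, and interference becomes an overlap check on unit ranges rather than a simple register inequality:

#include <stdbool.h>

/* Sketch of the modelling idea: registers R0-R63 become 16-bit units
 * 0-127, so 16-bit values can share half a register and vector values
 * can be constrained to consecutive units. */
struct alloc_var {
    unsigned size_units;   /* 1 for a 16-bit value, 2 for 32-bit          */
    unsigned align_units;  /* 2 for 32-bit values (register-aligned)      */
    unsigned unit;         /* assigned start unit                         */
};

/* Two interfering variables clash when their unit ranges overlap; this
 * replaces the simple "different register" constraint. */
static bool units_overlap(const struct alloc_var *a, const struct alloc_var *b)
{
    return a->unit < b->unit + b->size_units &&
           b->unit < a->unit + a->size_units;
}

In this model a 16-bit value can sit in the unused half of a 32-bit register, and a contiguous vector is just a variable spanning several consecutive units, which is exactly the flexibility described above.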

This work was originally conducted in October for the Midgard compiler, but the approach works just as well for Bifrost. Although Bifrost is conceptually "almost" scalar, there are enough corner cases where we dip into vector structures that a vector-aware register allocator is useful. In particular, 16-bit instructions involve subdividing 32-bit registers, and vectorized input/output requires contiguous registers; both are easily modeled with linear-style constraints.

Packing

The final stage of a backend compiler is packing (final code generation), taking the scheduled, register-allocated IR and assembling a final binary. Compared to Midgard, packing for Bifrost is incredibly complicated. Why?

Vectorized programs for vector architectures can be smaller than equivalent programs for scalar architectures. The above instruction sequence to add a 4-channel vector corresponds to just a single instruction on Midgard:

VADD.FADD R0.xyzw, R0.xyzw, R1.xyzw

We would like to minimize program size, since accesses to main memory are slow and increasing the size of the instruction cache is expensive. By minimizing size, smaller caches can be used with better efficiency. Unfortunately, naively scalarizing the architecture by a factor of 4 would appear to inflate program size by 4x, requiring a 4x larger cache for the same performance.

We can do better than simply duplicating instructions. First, by simplifying the vector semantics (since we know most code will be scalar or small contiguous vectors), we eliminate vector write masks and swizzles. But this may not be good enough.

Bifrost goes further, saving instruction bits in any way possible, since small savings in instruction encoding accumulate quickly in complex shaders. For example, Bifrost separates medium-latency register file accesses from low-latency "port" accesses, loading registers into ports ahead of the instruction. There are 64 registers (32 bits each), requiring 6 bits to index a register number. The structure to specify which registers should be loaded looks like:

unsigned port_0 : 5;
unsigned port_1 : 6;

We have 6 bits to encode port 1, but only 5 bits for port 0. Does that mean port 0 can only load registers R0-R31, instead of the full range?

Actually, no - if port 0 is smaller than port 1, the port numbers are as you would expect. But Bifrost has a trick: if port 0 is larger than port 1, then the hardware subtracts each stored port number from 63 to get the actual port number. In effect, the ordering of the ports is used as an implicit extra bit, storing 12 bits of port data in 11 bits on the wire. (What if the port numbers are equal? Then just reuse the same port!)
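
Decoding that rule looks roughly like this (a sketch following the description above, with illustrative names rather than the actual Mesa code):

/* Encoded register-access fields as above: 5 bits for port 0, 6 bits for
 * port 1 (names illustrative). */
struct ports {
    unsigned port_0 : 5;
    unsigned port_1 : 6;
};

/* The ordering of the two fields acts as an implicit extra bit: if port 0
 * is not larger than port 1, the values are literal; otherwise both real
 * register numbers are 63 minus the stored value. */
static void decode_ports(struct ports p, unsigned *reg_0, unsigned *reg_1)
{
    if (p.port_0 <= p.port_1) {
        *reg_0 = p.port_0;
        *reg_1 = p.port_1;
    } else {
        *reg_0 = 63 - p.port_0;
        *reg_1 = 63 - p.port_1;
    }
}

Note how this answers the earlier question: in the flipped case, 63 minus a 5-bit value lands in the range 32-63, so port 0 can reach the upper half of the register file after all.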

Similar tricks permeate the architecture, a win for code size but a loss for compiler packing complexity. The good news is that our compiler's packing code is self-contained and unit tested independent of the rest of the compiler.

Conclusion

Putting it all together, we have the beginnings of a Bifrost compiler, sufficient for the screenshots above. Next will be adding support for more complex instructions and scheduling to support more complex shaders.

Architecturally, Bifrost turns Midgard on its head, ideally bringing performance improvements but rather complicating the driver. While Bifrost support in Panfrost is still early, it's progressing rapidly. The compiler code required for the screenshots above is all upstreamed, and the command stream code is working its way upstream. So stay tuned, and happy hacking.

Originally posted on Collabora's blog

23 Apr 2020 4:00am GMT

15 Apr 2020

feedplanet.freedesktop.org

Roman Gilg: The KWinFT project

I am pleased to announce the KWinFT project and with it the first public release of its major open source offerings KWinFT and Wrapland, drop-in replacements for KDE's window manager KWin and its accompanying KWayland library.

The KWinFT project was founded by me at the beginning of this year with the goal of accelerating development significantly in comparison to KWin. Classic KWin can only be moved with caution, since many people rely on it in their daily computing and there are just as many other stakeholders. In this respect, at least for some time, I anticipated being able to push KWinFT forward in a much more dynamic way.

Over time I refined this goal though and defined additional objectives to supplement the initial vision and ensure its longevity. One of these is, in my view, now equally important: to provide a sane, modern, well-organized development process. KWinFT users won't notice this directly, but they will hopefully benefit from it indirectly, since it enables the initial goal of rapid development while retaining the overall stability of the product.

What is in there for you

If you are primarily a consumer of Linux graphics technology this announcement may not seem especially exciting for you. Right now and in the near future the focus of KWinFT is inwards: to formulate and establish great structures for its development process, multiplying all later development efforts.

Examples are continuous integration with different code linters, scheduled and automatic builds and tests, as well as policies to increase developers' effectiveness. This will hopefully accelerate KWinFT's progress in many ways in the future.

But there are already some experimental features in the first release that you might want to look out for.

How to get KWinFT not later than now

Does this sound interesting to you? You are in luck! The first official release is already available and if you use Manjaro Linux with its unstable branch enabled, you can even try it out right now by installing the kwinft package.

And you may switch back to classic KWin in no time by installing the kwin package again. Your dependencies will be updated in both directions without further ado.

I hope KWinFT and Wrapland will soon become available in other distributions as well. But at this moment I must thank the Manjaro team for making this happen on very short notice.

Here I also want to thank Ilya Bizyaev for testing my builds from time to time in the last few weeks and giving me direct feedback when something needed to be fixed.

Future directions

For the rest of this article let me outline the strategy I will follow in the future with the KWinFT project. I will expand on these goals in upcoming blog posts as time permits.

An optimized development process

I already mentioned that defining and maintaining a healthy development process in KWinFT is an absolute focus objective for me.

This is the basis on which any future development effort can pick up momentum, or, if neglected, will be held back. And that was a huge problem with KWin in the last year; I can say with certainty that I was not the only developer who suffered because of it.

I tried to improve the situation in the past inside KWin but the larger an organization and the more numerous the stakeholders become the more difficult it is for any form of change to manifest.

In open source software development we have the amazing advantage of being able to circumvent this blockade by rebooting a project in a fresh organisational paradigm: what we call a fork. This has as many risks and challenges as one could expect, so such a decision should not be taken lightly and needs a lot of conviction, preparation and dedication.

I won't go into any more detail here, but I plan to write more articles on what I see as current deficits in KDE's organisation of development processes, how I plan to make sure such issues won't plague KWinFT, and in which ways these solutions could be at least partly adopted in KDE as well in order to improve the situation there too.

And while I don't have much positive to say about the current state of KDE right now, don't forget that KDE is an organisation which has withstood the test of time, with a history reaching back more than 20 years. An organisation that has had a positive impact on many people. Such an organisation must not be slated but fostered and enhanced.

KWinFT with focus on Wayland

The project is called KWinFT because its primary offering right now is the window manager KWinFT, a fork of KWin. The strategy I will follow for this open source offering is centered around developer focus.

The time and resources of open source developers are not limitless, and the window compositor is a central building block in any desktop, making it a natural point of contention.

One solution is permanently trying to make everyone happy through compromise and consensus. That is the normal pattern in large organisations. Dynamic progress is the opposite of that, instead featuring a trial-and-error approach and the sad reality that sometimes corners must be cut and hearts broken to achieve a greater good.

Both these patterns can be valid approaches in different times and contexts. KDE naturally can only employ the first one; KWinFT is destined to employ the second one.

A major focus for the overall KWinFT project, and in particular its window manager, will be Wayland. That doesn't mean making it just about usable at whatever the cost, but putting KWinFT's Wayland session on a rock-solid foundation, reworking it as often as needed, grinding it out until it is right, without any compromise.

And boy, does it need that! To say it bluntly, even at the risk of getting cited out of context: in 2020, KWin and KWayland are still a mess.

Sure, the superficial impression improved over time, but there are many deep and fundamental flaws in the architecture that require one or, better, several developers, and project lifetimes of not days or weeks, but months. Let me skip the details for now and instead go directly to the biggest offender.

Wrapland in modern C++ and without Qt

Wrapland was forked from KWayland alongside KWinFT. Of course I already knew KWayland's architecture, but there is knowing and there is understanding. What I have additionally learned about KWayland's internals in the last few months was shocking. And with the current vision that I follow for Wrapland, I would not call Wrapland a fork anymore, rather a reboot.

The very first issue ticket I opened in Wrapland was somewhat of a gamble back at that time but in hindsight quite visionary. The issue asks to remove Wrapland's Qt dependency. When I opened this ticket I wasn't aware of all the puzzle pieces; I couldn't be. But now, two months later, I am more convinced of that goal than ever before.

A C++ library that wraps the C-style libwayland library is useful: a C++ library in conformance with the C++ Core Guidelines, leveraging the most current C++ standard. That means in particular no Qt concepts like their moc, no raw pointers everywhere, and no prevailing use of dynamic over static inheritance.

KWinFT and potentially many other applications, for example from games with nested composition up to the UIs of large industrial machinery, could make great use of a well designed, unit tested, battle-hardened C++ Wayland library that employs modern C++ features for type and memory safety to their full extent. And that only covers Wrapland's server part. Although clients often use a complete toolkit for their windowing system integration, it is not hard to envision use cases where more direct access is needed and a C++ library is preferred.

The main advantage of leveraging Qt, in comparison, would be the possibility of adding QML bindings. This can be useful, as some interesting applications built on QtWayland's server part prove. But it is of minuscule value in KWinFT's use case. And what the compositor that Wrapland is written for does not make use of cannot be a development objective of this library in the foreseeable future.

I am currently rewriting the server part of Wrapland in this spirit. You can check out the overview ticket that I created for planning the rewrite and the current prototype I am working on. Note that the development is still ongoing on a fundamental level and there might be more iterations necessary before a final structure manifests.

While the remodel of the server part is certainly exciting and I do plan something similar for the client part, that project will need to wait a bit longer. For now I "only" reworked most of the client library to not leak memory in every second place. This allows unit tests to run robustly on the GitLab CI, for merge requests and on push. This rework, which also contained fixes for the server part, resulted in a massive merge with 40 commits and over 6000 changed lines.

A beacon of modern technologies

Leveraging modern language features of C++ is one objective, but a far more important one for this project lies in the domain KWinFT was really created for: computer graphics and their optimal presentation and organisation for the user.

But here I declare just a single goal: KWinFT must be at the top of every major development in this domain.

Too often in the past KWin was sidelined, or rather sidelined itself, when new technology emerged, only trying to catch up later on. The state of Wayland on Plasma in 2020 is testament to that. In contrast, KWinFT shall be open to new developments in the larger community and, if manpower permits, spearhead such developments itself, not necessarily as a maverick but in concert with the many other great single- and multi-developer projects on freedesktop.org and beyond. This leads to the final founding principle of KWinFT.

Open to other communities and technologies

A major goal I already pursued last year as a KWin developer, and that I want to expand upon with KWinFT, is my commitment to building and maintaining meaningful relations with other open source communities.

Meaningful here means, on one side, on a personal level, like when I attended the X.Org Developer and Wine conferences on two consecutive weekends in October last year in Canada, and on the way back home the Gnome Shell meetup in the Netherlands.

But meaningful also means working together, being open to the technologies other projects provide, and trying to increase interoperability instead of locking yourself into your own technology stack.

Of course in this respect the primary address that comes to mind is freedesktop.org with the Wayland and X11 projects. But also working together with teams developing other compositors can be very rewarding.

For example in Wrapland I recently added a client implementation of wlroots' output management protocol. This will allow users of wlroots-based compositors in the future to use KScreen for configuring their outputs.

I want to expand upon that by sharing more protocols and more tools with fellow compositor developers. How about an internal wlroots-based compositor for Wrapland's autotests? This would double-check not only Wrapland's protocol implementations but also wlroots' ones at the same time. If you are interested in designing such a solution in greater detail, check out Wrapland's contributing guideline and get in touch.

15 Apr 2020 11:00pm GMT

07 Apr 2020

feedplanet.freedesktop.org

Robert Foss: Speed up `git log --graph` by 18x

$ time git lg
git lg  13.34s user 0.87s system 84% cpu 16.845 total

# True by default as of git v2.24
git config --global core.commitGraph true
git config --global gc.writeCommitGraph true

# Command added in git v2.20
git commit-graph write

$ time git lg
git lg  0.72s user 0.14s system 74% cpu 1.154 total

This is a speed-up of ~18x compared to the run before the commit-graph file was generated.

The way this works is that the commit-graph file, stored in the .git/objects/info directory, records the commit graph structure along with some extra metadata that speeds up graph walks.

07 Apr 2020 1:06pm GMT

04 Apr 2020

feedplanet.freedesktop.org

Peter Hutterer: High resolution wheel scrolling in the desktop stack

This is a follow up from the kernel support for high-resolution wheel scrolling, which you totally forgot about because it's already more than a year in the past and seriously, who has the attention span these days to remember this. Anyway, I finally found time and motivation to pick this up again and I started lining up the pieces like cans, for it only to be shot down by the commentary of strangers on the internet. The Wayland merge request lists the various pieces (libinput, wayland, weston, mutter, gtk and Xwayland) but for the impatient there's also a Fedora 32 COPR. For all you weirdos inexplicably not running the latest Fedora, well, you'll have to compile this yourself, just like I did.

Let's recap: in v5.0 the kernel added new axes REL_WHEEL_HI_RES and REL_HWHEEL_HI_RES for all devices. On devices that actually support high-resolution wheel scrolling (Logitech and Microsoft mice, primarily) you'll get multiple hires events before the now-legacy REL_WHEEL events. On all other devices those two are in sync.
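
If you want to watch these axes yourself, a tiny evdev reader is enough. A minimal sketch (the device path is a placeholder; point it at your mouse's event node, and note you need read permission on it):

/* Print legacy and hi-res wheel events from an evdev node.
 * Requires a kernel >= 5.0 for the HI_RES axes to be reported. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

#ifndef REL_WHEEL_HI_RES
#define REL_WHEEL_HI_RES 0x0b   /* defined in kernel headers since v5.0 */
#endif

int main(void)
{
    /* Placeholder path: replace with your mouse's event node. */
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type != EV_REL)
            continue;
        if (ev.code == REL_WHEEL)
            printf("legacy wheel: %d\n", ev.value);
        else if (ev.code == REL_WHEEL_HI_RES)
            printf("hi-res wheel: %d\n", ev.value);
    }

    close(fd);
    return 0;
}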

Integrating this into the userspace stack was a bit of a mess at first, but I think the solution is good enough, even if it has a rather verbose explanation on how to handle it. The actual patches to integrate ended up being relatively simple. So let's see why it's a bit weird:

When Wayland started, back in WhoahReallyThatLongAgo, scrolling was specified as the wl_pointer.axis event with a value in pixels. This works fine for touchpads, not so much for wheels. The early versions of Weston decreed that one wheel click was 10 pixels [1] and, perhaps surprisingly, the world kept on turning. When libinput was forked from Weston an early change was that wheel events would have two values - degrees of movement and click count ("discrete steps"). The wayland protocol was expanded to include the discrete steps as wl_pointer.axis_discrete as well. Then backwards compatibility reared its ugly head and Mutter, Weston, GTK all basically said: one discrete step equals 10 pixels so we multiply the discrete value by 10 and, perhaps surprisingly, the world kept on turning.

This worked out well enough for a few years but with high-resolution wheels we ran into a problem. Discrete steps are integers, so we can't send partial values. And the protocol is defined in a way that any tweaking of the behaviour would result in broken clients which, perhaps surprisingly, is a Bad Thing. This led to the current proposal of separate events: LIBINPUT_EVENT_POINTER_AXIS_WHEEL and, for Wayland, the wl_pointer.axis_v120 event linked to above. These events are (like the kernel events) a parallel event stream to the previous events and effectively replace the LIBINPUT_EVENT_POINTER_AXIS and Wayland wl_pointer.axis/axis_discrete pair for wheel events (not so for touchpad or button scrolling though).

The compositor side of things is relatively simple: take the events from libinput and pass the hires ones on as v120 events and the lowres ones as v120 events with a value of zero. The client side takes the v120 events and uses them over wl_pointer.axis/axis_discrete, unless one is zero, in which case you can discard all axis events in that wl_pointer.frame. Since most client implementations already have support for smooth scrolling (because, well, touchpads do exist) it's relatively simple to integrate - the new events just feed into the smooth scrolling code. And since you already have to do wheel emulation for that (because, well, old clients exist), wheel emulation is handled easily too.
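
The core of the client-side logic is small. Here is a sketch of the idea with hypothetical function names (not the actual wayland-client API), assuming the v120 convention where a value of 120 corresponds to one logical wheel click:

/* Hypothetical client-side handling of hi-res wheel values: feed the v120
 * delta into smooth scrolling directly, and emulate legacy wheel clicks by
 * accumulating until a full 120 is reached. */
#include <stdio.h>

static int accumulated_v120;

static void handle_wheel_v120(int value)
{
    /* Smooth scrolling consumes the fractional movement directly. */
    printf("scroll by %f of a click\n", value / 120.0);

    /* Wheel emulation for code paths that still want whole clicks. */
    accumulated_v120 += value;
    while (accumulated_v120 >= 120) {
        printf("emit one click (positive direction)\n");
        accumulated_v120 -= 120;
    }
    while (accumulated_v120 <= -120) {
        printf("emit one click (negative direction)\n");
        accumulated_v120 += 120;
    }
}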

All that to provide buttery smooth [2] wheel scrolling. Or not, if your hardware doesn't support it. In which case, well, live with the warm fuzzy feeling that someone else has a better user experience now. Or soon, anyway.

[1] with, I suspect, the scientific measurement of "yeah, that seems about alright"
[2] like butter out of a fridge, so still chunky but at least less so than before

04 Apr 2020 4:00am GMT