23 Mar 2017

Simon McVittie: GTK hackfest 2017: D-Bus communication with containers

At the GTK hackfest in London (which accidentally became mostly a Flatpak hackfest) I've mainly been looking into how to make D-Bus work better for app container technologies like Flatpak and Snap.

The initial motivating use cases are:

At the moment, Flatpak runs a D-Bus proxy for each app instance that has access to D-Bus, connects to the appropriate bus on the app's behalf, and passes messages through. That proxy is in a container similar to the actual app instance, but not actually the same container; it is trusted to not pass messages through that it shouldn't pass through. The app-identification mechanism works in practice, but is Flatpak-specific, and has a known race condition due to process ID reuse and limitations in the metadata that the Linux kernel maintains for AF_UNIX sockets. In practice the use of X11 rather than Wayland in current systems is a much larger loophole in the container than this race condition, but we want to do better in future.

Meanwhile, Snap does its sandboxing with AppArmor, on kernels where it is enabled both at compile-time (Ubuntu, openSUSE, Debian, Debian derivatives like Tails) and at runtime (Ubuntu, openSUSE and Tails, but not Debian by default). Ubuntu's kernel has extra AppArmor features that haven't yet gone upstream, some of which provide reliable app identification via LSM labels, which dbus-daemon can learn by querying its AF_UNIX socket. However, other kernels like the ones in openSUSE and Debian don't have those. The access-control (AppArmor mediation) is implemented in upstream dbus-daemon, but again doesn't work portably, and is not sufficiently fine-grained or flexible to do some of the things we'll likely want to do, particularly in dconf.

After a lot of discussion with dconf maintainer Allison Lortie and Flatpak maintainer Alexander Larsson, I think I have a plan for fixing this.

This is all subject to change: see fd.o #100344 for the latest ideas.

Identity model

Each user (uid) has some uncontained processes, plus 0 or more containers.

The uncontained processes include dbus-daemon itself, desktop environment components such as gnome-session and gnome-shell, the container managers like Flatpak and Snap, and so on. They have the user's full privileges, and in particular they are allowed to do privileged things on the user's session bus (like running dbus-monitor), and act with the user's full privileges on the system bus. In generic information security jargon, they are the trusted computing base; in AppArmor jargon, they are unconfined.

The containers are Flatpak apps, or Snap apps, or other app-container technologies like Firejail and AppImage (if they adopt this mechanism, which I hope they will), or even a mixture (different app-container technologies can coexist on a single system). They are containers (or container instances) and not "apps", because in principle, you could install com.example.MyApp 1.0, run it, and while it's still running, upgrade to com.example.MyApp 2.0 and run that; you'd have two containers for the same app, perhaps with different permissions.

Each container has a container type, which is a reversed DNS name like org.flatpak or io.snapcraft representing the container technology, and an app identifier, an arbitrary non-empty string whose meaning is defined by the container technology. For Flatpak, that string would be another reversed DNS name like com.example.MyGreatApp; for Snap, as far as I can tell it would look like example-my-great-app.

The container technology can also put arbitrary metadata on the D-Bus representation of a container, again defined and namespaced by the container technology. For instance, Flatpak would use some serialization of the same fields that go in the Flatpak metadata file at the moment.

Finally, the container has an opaque container identifier identifying a particular container instance. For example, launching com.example.MyApp twice (maybe different versions or with different command-line options to flatpak run) might result in two containers with different privileges, so they need to have different container identifiers.

Contained server sockets

App-container managers like Flatpak and Snap would create an AF_UNIX socket inside the container, bind() it to an address that will be made available to the contained processes, and listen(), but not accept() any new connections. Instead, they would fd-pass the new socket to the dbus-daemon by calling a new method, and the dbus-daemon would proceed to accept() connections after the app-container manager has signalled that it has called both bind() and listen(). (See fd.o #100344 for full details.)
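
As a rough sketch (in Python for brevity), the container manager's side might look something like the following. The Containers1 interface and AddContainerServer method names are placeholders I've made up for illustration, not the real API, which is still being designed in fd.o #100344:

import dbus
import socket

# Create the listening socket that contained processes will see. The
# manager calls bind() and listen(), but never accept().
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind("/run/user/1000/app/com.example.MyGreatApp/bus")  # illustrative path
sock.listen(64)

# Fd-pass the listening socket to dbus-daemon, together with the
# container's identity and metadata; dbus-daemon does the accept()ing.
bus = dbus.SessionBus()
dbus_obj = bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus')
instance_id = dbus_obj.AddContainerServer(       # hypothetical method
    'org.flatpak',                               # container type
    'com.example.MyGreatApp',                    # app identifier
    {},                                          # technology-specific metadata
    dbus.types.UnixFd(sock.fileno()),
    dbus_interface='org.freedesktop.DBus.Containers1')  # hypothetical interface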

Processes inside the container must not be allowed to contact the AF_UNIX socket used by the wider, uncontained system - if they could, the dbus-daemon wouldn't be able to distinguish between them and uncontained processes and we'd be back where we started. Instead, they should have the new socket bind-mounted into their container's XDG_RUNTIME_DIR and connect to that, or have the new socket set as their DBUS_SESSION_BUS_ADDRESS and be prevented from connecting to the uncontained socket in some other way. Those familiar with the kdbus proposals a while ago might recognise this as being quite similar to kdbus' concept of endpoints, and I'm considering reusing that name.

Along with the socket, the container manager would pass in the container's identity and metadata, and the method would return a unique, opaque identifier for this particular container instance. The basic fields (container technology, technology-specific app ID, container ID) should probably be added to the result of GetConnectionCredentials(), and there should be a new API call to get all of those plus the arbitrary technology-specific metadata.
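
To illustrate the intended shape of the result: only the first two keys below exist in today's GetConnectionCredentials() results (along with LinuxSecurityLabel where available); the three container-related key names are my invention for the example, not the final API.

creds = {
    'UnixUserID': 1000,                                  # exists today
    'ProcessID': 12345,                                  # exists today
    'ContainerTechnology': 'org.flatpak',                # hypothetical name
    'ContainerAppIdentifier': 'com.example.MyGreatApp',  # hypothetical name
    'ContainerInstanceID': 'instance-0',                 # hypothetical, opaque
}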

When a process from a container connects to the contained server socket, every message that it sends should also have the container instance ID in a new header field. This is OK even though dbus-daemon does not (in general) forbid sender-specified future header fields, because any dbus-daemon that supported this new feature would guarantee to set that header field correctly, the existing Flatpak D-Bus proxy already filters out unknown header fields, and adding this header field is only ever a reduction in privilege.

The reason for using the sender's container instance ID (as opposed to the sender's unique name) is so that services like dconf can treat multiple unique bus names as belonging to the same equivalence class of contained processes: instead of having to look up the container metadata once per unique name, dconf can look it up once per container instance, the first time it sees a new identifier in a header field. For the second and subsequent unique names in the container, dconf knows that the container metadata and permissions are identical to those it already saw.
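
A hedged sketch of that caching pattern, where the header accessor and the metadata lookup are stand-ins for APIs that do not exist yet:

# Treat every unique name from one container instance as one equivalence
# class: one metadata lookup per instance, not per unique name.
container_cache = {}

def credentials_for(message):
    instance_id = get_container_instance_header(message)  # hypothetical accessor
    if instance_id is None:
        return None  # uncontained sender: full privileges
    if instance_id not in container_cache:
        # Only the first message from a new container instance costs a lookup.
        container_cache[instance_id] = get_container_metadata(instance_id)  # hypothetical call
    return container_cache[instance_id]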

Access control

In principle, we could have the new identification feature without adding any new access control, by keeping Flatpak's proxies. However, in the short term that would mean we'd be adding new API to set up a socket for a container without any access control, and having to keep the proxies anyway, which doesn't seem great; in the longer term, I think we'd find ourselves adding a second new API to set up a socket for a container with new access control. So we might as well bite the bullet and go for the version with access control immediately.

In principle, we could also avoid the need for new access control by ensuring that each service that will serve contained clients does its own. However, that makes it really hard to send broadcasts and not have them unintentionally leak information to contained clients - we would need to do something more like kdbus' approach to multicast, where services know who has subscribed to their multicast signals, and that is just not how dbus-daemon works at the moment. If we're going to have access control for broadcasts, it might as well also cover unicast.

The plan is that messages from containers to the outside world will be mediated by a new access control mechanism, in parallel with dbus-daemon's current support for firewall-style rules in the XML bus configuration, AppArmor mediation, and SELinux mediation. A message would only be allowed through if the XML configuration, the new container access control mechanism, and the LSM (if any) all agree it should be allowed.

By default, processes in a container can send broadcast signals, and send method calls and unicast signals to other processes in the same container. They can also receive method calls from outside the container (so that interfaces like org.freedesktop.Application can work), and send exactly one reply to each of those method calls. They cannot own bus names, communicate with other containers, or send file descriptors (which reduces the scope for denial of service).

Obviously, that's not going to be enough for a lot of contained apps, so we need a way to add more access. I'm intending this to be purely additive (start by denying everything except what is always allowed, then add new rules), not a mixture of adding and removing access like the current XML policy language.
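
As a toy model of that additive behaviour (the rule shape here is my assumption for illustration, not the actual design):

import collections

# Deny by default; rules can only ever add access. None acts as a wildcard.
Rule = collections.namedtuple('Rule', ['destination', 'interface'])

def matches(rule, destination, interface):
    return (rule.destination in (None, destination) and
            rule.interface in (None, interface))

def message_allowed(rules, destination, interface):
    # Purely additive: a message passes if at least one rule allows it,
    # so removing a rule can only ever remove access, never grant it.
    return any(matches(r, destination, interface) for r in rules)

# e.g. the near-allow-everything rule that sockets=session-bus might install:
assert message_allowed([Rule(None, None)],
                       'org.example.Service', 'org.example.Iface')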

There are two ways we've identified for rules to be added:

Initially, many contained apps would work in the first way (and in particular sockets=session-bus would add a rule that allows almost everything), while over time we'll probably want to head towards recommending more use of the second.

Related topics

Access control on the system bus

We talked about the possibility of using a very similar ruleset to control access to the system bus, as an alternative to the XML rules found in /etc/dbus-1/system.d and /usr/share/dbus-1/system.d. We didn't really come to a conclusion here.

Allison had the useful insight that the XML rules are acting like a firewall: they're something that is placed in front of potentially-broken services, and not part of the services themselves (which, as with firewalls like ufw, makes it seem rather odd when the services themselves install rules). D-Bus system services already have total control over what requests they will accept from D-Bus peers, and if they rely on the XML rules to mediate that access, they're essentially rejecting that responsibility and hoping the dbus-daemon will protect them. The D-Bus maintainers would much prefer it if system services took responsibility for their own access control (with or without using polkit), because fundamentally the system service is always going to understand its domain and its intended security model better than the dbus-daemon can.

Analogously, when a network service listens on all addresses and accepts requests from elsewhere on the LAN, we sometimes work around that by protecting it with a firewall, but the optimal resolution is to get that network service fixed to do proper authentication and access control instead.

For system services, we continue to recommend essentially this "firewall" configuration, filling in the ${} variables as appropriate:

<busconfig>
    <policy user="${the daemon uid under which the service runs}">
        <allow own="${the service's bus name}"/>
    </policy>
    <policy context="default">
        <allow send_destination="${the service's bus name}"/>
    </policy>
</busconfig>

We discussed the possibility of moving towards a model where the daemon uid to be allowed is written in the .service file, together with an opt-in to "modern D-Bus access control" that makes the "firewall" unnecessary; after some flag day when all significant system services follow that pattern, dbus-daemon would even have the option of no longer applying the "firewall" (moving to an allow-by-default model) and just refusing to activate system services that have not opted in to being safe to use without it. However, the "firewall" also protects system bus clients, and services like Avahi that are not bus-activatable, against unintended access, which is harder to solve via that approach; so this is going to take more thought.

For system services' clients that follow the "agent" pattern (BlueZ, polkit, NetworkManager, Geoclue), the correct "firewall" configuration is more complicated. At some point I'll try to write up a best-practice for these.

New header fields for the system bus

At the moment, it's harder than it needs to be to provide non-trivial access control on the system bus, because on receiving a method call, a service has to remember what was in the method call, then call GetConnectionCredentials() to find out who sent it, then only process the actual request when it has the information necessary to do access control.
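
With dbus-python, that round trip looks roughly like the following; GetConnectionCredentials() and its UnixUserID key are real, while the policy check and process() are stand-ins:

import dbus

bus = dbus.SystemBus()

def handle_method_call(sender, request):
    # The service has to hold on to the request while it asks the
    # dbus-daemon who sent it...
    creds = bus.call_blocking(
        'org.freedesktop.DBus', '/org/freedesktop/DBus',
        'org.freedesktop.DBus', 'GetConnectionCredentials',
        's', (sender,))
    # ...and only then can it do access control and act on the request.
    if creds.get('UnixUserID') != 0:      # stand-in policy: root only
        raise PermissionError('access denied')
    return process(request)               # process() is a stand-in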

Allison and I had hoped to resolve this by adding new D-Bus message header fields with the user ID, the LSM label, and other interesting facts for access control. These could be "opt-in" to avoid increasing message sizes for no reason: in particular, it is not typically useful for session services to receive the user ID, because only one user ID is allowed to connect to the session bus anyway.

Unfortunately, the dbus-daemon currently lets unknown fields through without modification. With hindsight this seems an unwise design choice, because header fields are a finite resource (there are 255 possible header fields) and are defined by the D-Bus Specification. The only field that can currently be trusted is the sender's unique name, because the dbus-daemon sets that field, overwriting the value in the original message (if any).

To make it safe to rely on the new fields, we would have to make the dbus-daemon filter out all unknown header fields, and introduce a mechanism for the service to check (during connection to the bus) whether the dbus-daemon is sufficiently new that it does so. If connected to an older dbus-daemon, the service would not be able to rely on the new fields being true, so it would have to ignore the new fields and treat them as unset. The specification is sufficiently vague that making new dbus-daemons filter out unknown header fields is a valid change (it just says that "Header fields with an unknown or unexpected field code must be ignored", without specifying who must ignore them, so having the dbus-daemon delete those fields seems spec-compliant).

This all seemed fine when we discussed it in person; but GDBus already has accessors for arbitrary header fields by numeric ID, and I'm concerned that this might make it too easy for a system service to be accidentally insecure: it would be natural (but wrong!) for an implementor to assume that if g_dbus_message_get_header (message, G_DBUS_MESSAGE_HEADER_FIELD_SENDER_UID) returned non-NULL, then that was guaranteed to be the correct, valid sender uid. As a result, fd.o #100317 might have to be abandoned. I think more thought is needed on that one.

Unrelated topics

As happens at any good meeting, we took the opportunity of high-bandwidth discussion to cover many useful things and several useless ones. Other discussions that I got into during the hackfest included, in no particular order:

More notes are available from the GNOME wiki.

Acknowledgements

The GTK hackfest was organised by GNOME and hosted by Red Hat and Endless. My attendance was sponsored by Collabora. Thanks to all the sponsors and organisers, and the developers and organisations who attended.

23 Mar 2017 6:07pm GMT

22 Mar 2017

Christian Schaller: Another media codec on the way!

One of the things we are working hard on currently is ensuring you have the codecs you need available in Fedora Workstation. Our main avenue for doing this is looking at the various codecs out there and trying to determine if the intellectual property situation allows us to start shipping all or parts of the technologies involved. This was how we were able to start shipping mp3 playback support in Fedora Workstation 25. Of course, in cases where this is obviously not possible, we have things like the agreement with our friends at Cisco allowing us to offer H264 support using their licensed codec, which is how OpenH264 started being available in Fedora Workstation 24.

As you might imagine, clearing a codec for shipping is a slow and labour-intensive process, with lawyers and engineers spending a lot of time reviewing stuff to figure out what can be shipped when and how. I am hoping to have more announcements like this coming out during the course of the year.

So I am very happy to announce today that we are now working on packaging the codec known as AC3 (also known as A52) for Fedora Workstation 26. The name AC3 might not be very well known to you, but AC3 is part of a set of technologies developed by Dolby and marketed as Dolby Surround. This means that if you have video files with surround sound audio, it is most likely something we can play back with an AC3 decoder. AC3/A52 is also used for surround sound TV broadcasts in the US, and it is the audio format used by some Sony and Panasonic video cameras.

We will be offering AC3 playback in Fedora Workstation 26, and we are looking into options for offering an encoder. To be clear, there is nothing stopping us from offering an encoder apart from finding an implementation that we can package and ship with Fedora with a reasonable amount of effort. The best-known open source implementation we know about is the one found in ffmpeg/libav, but extracting a single codec to ship from ffmpeg or libav is a lot of work and not something we currently have the resources to do. We found another implementation called aften, but that one seems to have been unmaintained for years; we will still look at it to see if it could be used.
But if you are interested in AC3 encoding support, we would love it if someone started working on a standalone AC3 encoder we could ship, be that by picking up maintainership of Aften, splitting out AC3 encoding from libav or ffmpeg, or writing something new.

If you want to learn more about AC3 the best place to look is probably the Wikipedia page for Dolby Digital or the a52 ATSC audio standard document for more of a technical deep dive.

22 Mar 2017 5:02pm GMT

20 Mar 2017

Eric Anholt: This week in vc4 2017-03-20: HDMI Audio, ARM CLCD, glamor

The biggest news this week is that I've landed the VC4 HDMI audio driver from Boris in drm-misc-next. The alsa-lib maintainers took the patch today for the automatic configuration file, so now we just need to get that backported to various distributions and we'll have native HDMI audio in 4.12.

My main project, though, was cleaning up the old ARM CLCD controller driver. I've cut the driver from 3500 lines of code to 1000, and successfully tested it with VC4 doing dmabuf sharing to it. It was surprising how easy the dmabuf part actually was, once I had display working.

Still to do for display on this platform is to get CLCD merged, get my kmscube patches merged (letting you do kmscube on a separate display vs gpu platform), and clean up my xserver patches. The xserver changes are to let you have a 3D GPU with no display connectors as the protocol screen and the display controller as its sink. We had infrastructure for most of this for doing PRIME in laptops, except that laptop GPUs all have some display connectors, even if they're not actually connected to anything.

I also had a little time to look into glamor bugs. I found a little one-liner to fix dashed lines (think selection rectangles in the GIMP), and debugged a report of how zero-width lines are broken. I'm afraid for VC4 we're going to need to disable X's zero-width line acceleration. VC4 hardware doesn't implement the diamond-exit rule, and actually has some workarounds in it for the way people naively *try* to draw lines in GL that thoroughly breaks X11.

In non-vc4 news, the SDHOST driver is now merged upstream, so it should be in 4.12. It won't quite be enabled in 4.12, due to the absurd merge policies of the arm-soc world, but at least the Fedora and Debian kernel maintainers should have a much easier time with Pi3 support at that point.

20 Mar 2017 6:39pm GMT

Dave Airlie: how close to conformant is radv?

I spent some time staring into the results of the VK-GL-CTS test suite on radv, which contains the Vulkan 1.0 conformance tests.

In order to be conformant you have to pass all the tests on the mustpass list for the Vulkan version you want to conform to, from the branch of the test suite for that version.

The latest CTS tests for 1.0 is the vulkan-cts-1.0.2 branch, and the mustpass list is in external/vulkancts/mustpass/1.0.2/vk-default.txt

Using some WIP radv patches in my github radv-wip-conform branch and the 1.0.2 test suite, today's results are on my Tonga GPU:

Test run totals:
Passed: 82551/150950 (54.7%)
Failed: 0/150950 (0.0%)
Not supported: 68397/150950 (45.3%)
Warnings: 2/150950 (0.0%)

That is pretty conformant (in fact it would pass as-is). However I need to clean up the patches in the branch and maybe figure out how to do some bits properly without hacks (particularly some semaphore wait tweaks), but that is most of the work done.

Thanks again to Bas and all other radv contributors.

20 Mar 2017 7:26am GMT

17 Mar 2017

Peter Hutterer: A simple house-moving tip: use tape to mark empty cupboards

When you've emptied a cupboard, put masking tape across it, ideally in a colour that's highly visible. This way you immediately know which ones are finished and which ones still need attention. You won't keep opening the cupboard a million times to check, and after the move it takes merely seconds to undo.

17 Mar 2017 2:00am GMT

14 Mar 2017

Keith Packard: Valve

Consulting for Valve in my spare time

Valve Software has asked me to help work on a couple of Linux graphics issues, so I'll be doing a bit of consulting for them in my spare time. It should be an interesting diversion from my day job working for Hewlett Packard Enterprise on Memory Driven Computing and other fun things.

First thing on my plate is helping support head-mounted displays better by getting the window system out of the way. I spent some time talking with Dave Airlie and Eric Anholt about how this might work and have started on the kernel side of that. A brief synopsis is that we'll split off some of the output resources from the window system and hand them to the HMD compositor to perform mode setting and page flips.

After that, I'll be working out how to improve frame timing reporting back to games from a composited desktop under X. Right now, a game running on X with a compositing manager can't tell when each frame was shown, nor accurately predict when a new frame will be shown. This makes smooth animation rather difficult.

14 Mar 2017 7:10pm GMT

Eric Anholt: This month in vc4 (2017-03-13): docs, porting, tiled display

It's been a while since I've posted a status update. Here are a few things that have been going on.

VC4 now generates HTML documentation from the comments in the kernel code. It's just a start, but I'm hoping to add more technical details of how the system works here. I don't think doxygen-style documentation of individual functions would be very useful (if you're calling vc4 functions, you're editing the vc4 driver, unlike library code), so I haven't pursued that.

I've managed to get permission to port my 3D driver to another platform, the 911360 enterprise phone that's already partially upstreamed. It's got a VC4 V3D in it (slightly newer than the one in Raspberry Pi), but an ARM CLCD controller for the display. The 3D came up pretty easily, but for display I've been resurrecting Tom Cooksey's old CLCD KMS driver that never got merged. After 3+ years of DRM core improvements, we get to delete half of the driver he wrote and just use core helpers instead. Completing this work will require that I get the dma-buf fencing to work right in vc4, which is something that Android cares about.

Spurred on by a report from one of the Processing developers, I've also been looking at register allocation troubles again. I've found one good opportunity for reducing register pressure in Processing by delaying FS/VS input loads until they're actually used, and landed and then reverted one patch trying to accomplish it. I also took a look at DEQP's register allocation failures again, and fixed a bunch of its testcases with a little scheduling fix.

I've also started on a fix to allow arbitrary amounts of CMA memory to be used for vc4. The 256MB CMA limit today is to make sure that the tile state buffer and tile alloc/overflow buffers are in the same 256MB region due to a HW addressing bug. If we allocate a single buffer that can contain tile state, tile alloc, and overflow all at once within a 256MB area, then all the other buffers in the system (texture contents, particularly) can go anywhere in physical address space. The set top box group did some testing, and found that of all of their workloads, 16MB was enough to get any job completed, and 28MB was enough to get bin/render parallelism on any job. The downside is that a job that's too big ends up just getting rejected, but we'll surely run out of CMA for all your textures before someone hits the bin limit by having too many vertices per pixel.

Eben recently pointed out to me that the HVS can in fact scan out tiled buffers, which I had thought it wasn't able to. Having our window system buffers be linear means that when we try to texture from it (your compositing manager's drawing and uncomposited window movement for example) we have to first do a copy to the tiled format. The downside is that, in my measurements, we lose up to 1.4% of system performance (as measured by memcpy bandwidth tests with the 7" LCD panel for display) due to the HVS being inefficient at reading from tiled buffers (since it no longer gets to do large burst reads). However, this is probably a small price to pay for the massive improvement we get in graphics operations, and that cost could be reduced by X noticing that the display is idle and swapping to a linear buffer instead. Enabling tiled scanout is currently waiting for the new buffer modifiers patches to land, which Ben Widawsky at Intel has been working on.

There's no update on HDMI audio merging -- we're still waiting for any response from the ALSA maintainers, despite having submitted the patch twice and pinged on IRC.

14 Mar 2017 12:33am GMT

02 Mar 2017

Julien Danjou: Python never gives up: the tenacity library

A couple of years ago, I wrote about the Python retrying library. This library was designed to retry the execution of a task when a failure occurred.

Over the last few years, I started to spread usage of this library in various projects, such as Gnocchi. Unfortunately, it became very hard to contribute and send patches to the upstream retrying project. I spent several months trying to work with the original author, but after a while, I had to come to the conclusion that I would be unable to fix bugs and enhance it at the pace I would like to. Therefore, I had to make a difficult decision and decided to fork the library.

Here comes tenacity

I picked a new name and rewrote parts of the API of retrying that were not working correctly or were too complicated. I also fixed bugs with the help of Joshua, and named this new library tenacity. It works in the same manner as retrying does, except that it is written in a more functional way and offers some nifty new features.

Basic usage

The basic usage is to use it as a decorator:

import tenacity

@tenacity.retry
def do_something_and_retry_on_any_exception():
    pass


This will make the function do_something_and_retry_on_any_exception be called over and over again until it stops raising an exception. It would have been hard to design anything simpler. Obviously, this is a pretty rare case, as one usually wants to e.g. wait some time between retries. For that, tenacity offers a wide range of waiting methods:

import tenacity

@tenacity.retry(wait=tenacity.wait_fixed(1))
def do_something_and_retry():
    do_something()


Or a simple exponential back-off method can be used instead:

import tenacity

@tenacity.retry(wait=tenacity.wait_exponential())
def do_something_and_retry():
    do_something()


Combination

What is especially interesting with tenacity, is that you can easily combine several methods. For example, you can combine tenacity.wait.wait_random with tenacity.wait.wait_fixed to wait a number of seconds defined in an interval:

import tenacity

@tenacity.retry(wait=tenacity.wait_fixed(10) + tenacity.wait_random(0, 3))
def do_something_and_retry():
    do_something()


This will make the function being retried wait randomly between 10 and 13 seconds before trying again.

tenacity offers more customization, such as retrying on certain exceptions only. For example, you can retry every second, but only if the exception raised by do_something is an instance of IOError, e.g. a network communication error.

import tenacity

@tenacity.retry(wait=tenacity.wait_fixed(1),
                retry=tenacity.retry_if_exception_type(IOError))
def do_something_and_retry():
    do_something()


You can easily combine several conditions by using the | or & binary operators. In the following example, they are used to make the code retry if an IOError exception is raised, or if no result is returned. A stop condition is also added with the stop keyword argument: it allows specifying a stop condition unrelated to the function's result or exception, such as a maximum number of attempts or a delay.

import tenacity

@tenacity.retry(wait=tenacity.wait_fixed(1),
                stop=tenacity.stop_after_delay(60),
                retry=(tenacity.retry_if_exception_type(IOError) |
                       tenacity.retry_if_result(lambda result: result is None)))
def do_something_and_retry():
    do_something()


The functional approach of tenacity makes it easy and clean to combine a lot of conditions for various use cases with simple binary operators.

Standalone usage

tenacity can also be used without a decorator by using the Retrying object, which implements its main behaviour, and using its call method. This allows you to call any function with different retry conditions, or to retry any piece of code that does not use the decorator at all - such as code from an external library.

import tenacity

r = tenacity.Retrying(
    wait=tenacity.wait_fixed(1),
    retry=tenacity.retry_if_exception_type(IOError))
r.call(do_something)


This also allows you to re-use that object without creating a new one each time, saving some memory!

I hope you'll like it and will find some use for it. Feel free to fork it, report bugs or ask for new features on its GitHub!

02 Mar 2017 3:01pm GMT

01 Mar 2017

Christian Schaller: 2016 in review

I started writing this blog entry at the end of January, but kept delaying publishing it while waiting for some cool updates we are working on. But today I decided that instead of pushing the 2016 review back any further, I should just do this as two separate blog entries. So here is my Fedora Workstation 2016 summary :)

We did two major releases of Fedora Workstation, namely 24 and 25, each taking us steps closer to realising our vision for the future of the Linux Desktop. I am really happy that we finally managed to default to Wayland in Fedora Workstation 25. As Jonathan Corbet of LWN so well put it: "That said, it's worth pointing out that the move to Wayland is a huge transition; we are moving away from a display manager that has been in place since before Linus Torvalds got his first computer".
Successfully replacing the X11 system that has been in use since 1987 is no small feat, and we have to remember that many tried over the years and failed. So a big Thank You to Kristian Høgsberg for his incredible work getting Wayland off the ground and building consensus around it in the community. I am always full of admiration for those who manage to carry these kinds of efforts from their first line of code to a place where a vibrant and dynamic community can form around them.

And while we certainly have some issues left to resolve, I think the launch of Wayland in Fedora Workstation 25 was so strong that we managed to keep and accelerate the momentum needed to leave the orbit of X11 and have it truly take on a life of its own.
Because we have succeeded not just in forming a community around Wayland, but in getting the existing Linux graphics driver community to partake in the move, we managed to get the major desktop toolkits to partake in the move, and I believe we have managed to get the community at large to partake in the move. And we needed all three of those to join us for this transition to have a chance of succeeding. If it had only been us at Red Hat and in the Fedora community who cared and contributed, this would have gone nowhere, but this was truly one of those efforts that pulled together almost everyone in the wider Linux community, and it showcased what is possible when such a wide coalition of people gets together. So while for instance we don't ship an Enlightenment spin of Fedora (interested parties would be encouraged to do so though), we did value and appreciate the work they were doing around Wayland, simply because the bigger the community, the more development and bug fixing you will see on the shared infrastructure.

A part of the Wayland effort was the new input library Peter Hutterer put out, called libinput. That library allowed us to clean up our input stack and also share the input code between X and Wayland. A lot of credit goes to Peter and Benjamin Tissoires for their work here, as this transition, like the later Wayland transition, succeeded without causing a huge amount of pain for our users.

And this is also our approach for Flatpak which for us forms a crucial tandem with Wayland and the future of the Linux desktop. To ensure the project is managed in a way that is open and transparent to all and allows for different groups to adapt it to their specific usecases. And so far it is looking good, with early adoption and trials from the IVI community, traditional Linux distributions, device makers like Endless and platforms such as Steam. Each of these using the technologies or looking to use them in slightly different ways, but still all collaborating on pushing the shared technologies forward.

We managed to make some good steps forward in our effort to drain the swamp of Desktop Linux land (the only unfortunate thing here is a certain Trump deciding to cybersquat on the 'drain the swamp' mantra) by adding H264 and mp3 support to Fedora Workstation. And while the H264 support still needs some work to support more profiles (which we unfortunately did not get to in 2016), we have other codec-related work underway which I think will help move the needle on this issue even further. The work needed on OpenH264 is not forgotten, but Wim Taymans ended up doing a lot more multimedia plumbing work for our container-based future than originally planned. I am looking forward to sharing more details of where his work is going these days though, as it could bring another group of makers into the world of mainstream desktop Linux when it's ready.

Another piece of swamp draining that happened was around the Linux Firmware Service, which went from strength to strength in 2016. We had new vendors sign up throughout the year and while not all of those efforts are public I do expect that by the end of 2017 we will have most major hardware vendors offering firmware through the service. And not only system firmware updates but things like Logitech mice and keyboards will also be available.

Of course the firmware update service also has a client part, and GNOME Software truly became a powerhouse for driving change during 2016, being the catalyst not only for the firmware update service, but also for Linux applications providing good metadata in a standardized manner. The AppStream format required by GNOME Software has become the de-facto standard. And speaking of GNOME Software, the distribution upgrade functionality we added in Fedora 24 and improved in Fedora 25 has become pretty flawless. It is always possible to improve of course, but the biggest problem I heard of was a versioning issue caused by us pushing the mp3 decoding support into Fedora at the very last minute, thus not giving 3rd party repositories a reasonable chance to update their packaging to account for it. Lesson learnt for going forward :)

These are of course just a small subset of the things we accomplished in 2016, but I was really happy to see the great reception Fedora 25 got last year, with a lot of major news sites giving it stellar reviews and also making it their distribution of the year. The growth curves in terms of adoption we are seeing for Fedora Workstation are a great encouragement for the team and help us validate that we are on the right track with setting our development priorities. My hope for 2017 is that even more of you will decide to join our effort and switch to Fedora, and that 2017 will be the year of Fedora Workstation! On that note, the very positive reception to the Fedora Media Writer that we introduced as the default download for Fedora Workstation 25 was great to see. Being able to have one simple tool to use regardless of which operating system you come to us from simplifies so much, both in terms of communication on our end and in lowering the threshold of adoption on the end user side.

01 Mar 2017 5:47pm GMT

27 Feb 2017

Dave Airlie: radv + steamvr

If anyone wants to run SteamVR on top of radv, the code is all public now.

https://github.com/airlied/mesa/tree/radv-wip-steamvr

The external memory code will be going upstream to master once I clean it up a bit, the semaphore hack is waiting on kernel changes, and the NIR shader hack is waiting on a new SteamVR build that removes the bad use of SPIR-V.

I've run Serious SAM TFE in VR mode on this branch.

27 Feb 2017 7:54pm GMT

22 Feb 2017

Julien Danjou: Scalable metrics storage: Gnocchi on Amazon Web Services

As I wrote a few weeks ago in my post about Gnocchi 3.1 being released, one of the new features available in this version is the S3 driver. Today I would like to show you how easy it is to use it and store millions of metrics in the simple, durable and massively scalable object storage provided by Amazon Web Services.

Installation

The installation of Gnocchi for this use case is not different than the standard installation procedure described in the documentation. Simply install Gnocchi from PyPI using the following command:

$ pip install gnocchi[s3,postgresql] gnocchiclient

This will install Gnocchi with the dependencies for the S3 and PostgreSQL drivers and the command-line interface to talk with Gnocchi.

Configuring Amazon RDS

Since you need a SQL database for the indexer, the easiest way to get started is to create a database on Amazon RDS. You can create a managed PostgreSQL database instance in just a few clicks.

Once you're on the homepage of Amazon RDS, pick PostgreSQL as a database:

You can then configure your PostgreSQL instance: I've picked a dev/test instance with the basic options available within the RDS Free Tier, but you can pick whatever you think is needed for your production use. Set a username and a password and note them for later: we'll need them to configure Gnocchi.

The next step is to configure the database in detail. Just set the database name to "gnocchi" and leave the other options at their default values (I'm lazy).

After a few minutes, your instance should be created and running. Note down the endpoint. In this case, my instance is gnocchi.cywagbaxpert.us-east-1.rds.amazonaws.com.

Configuring Gnocchi for S3 access

In order to give Gnocchi an access to S3, you need to create access keys. The easiest way to create them is to go to IAM in your AWS console, pick a user with S3 access and click on the big gray button named "Create access key".

Once you do that, you'll get the access key id and secret access key. Note them down, we will need these later.

Creating gnocchi.conf

Now it is time to create the gnocchi.conf file. You can place it in /etc/gnocchi if you want to deploy it system-wide, or in any other directory and add the --config-file option to each Gnocchi command.

Here are the values that you should retrieve and write in the configuration file:

Your gnocchi.conf file should then look like that:

[indexer]
url = postgresql://gnocchi:gn0cch1rul3z@gnocchi.cywagbaxpert.us-east-1.rds.amazonaws.com:5432/gnocchi

[storage]
driver = s3
s3_endpoint_url = https://s3-eu-west-1.amazonaws.com
s3_region_name = eu-west-1
s3_access_key_id = <your access key id>
s3_secret_access_key = <your secret access key>


Once that's done, you can run gnocchi-upgrade in order to initialize Gnocchi indexer (PostgreSQL) and storage (S3):

$ gnocchi-upgrade --config-file gnocchi.conf
2017-02-07 15:35:52.491 3660 INFO gnocchi.cli [-] Upgrading indexer 
2017-02-07 15:36:04.127 3660 INFO gnocchi.cli [-] Upgrading storage 

Then you can run the API endpoint using the test endpoint gnocchi-api and specifying its default port 8041:

$ gnocchi-api --port 8041 -- --config-file gnocchi.conf
2017-02-07 15:53:06.823 6290 INFO gnocchi.rest.app [-] WSGI config used: /Users/jd/Source/gnocchi/gnocchi/rest/api-paste.ini
********************************************************************************
STARTING test server gnocchi.rest.app.build_wsgi_app
Available at http://127.0.0.1:8041/
DANGER! For testing only, do not use in production
********************************************************************************

The best way to run Gnocchi API is to use uwsgi as documented, but in this case, using the testing daemon gnocchi-api is good enough.

Finally, in another terminal, you can start the gnocchi-metricd daemon that will process metrics in background:

$ gnocchi-metricd --config-file gnocchi.conf
2017-02-07 15:52:41.416 6262 INFO gnocchi.cli [-] 0 measurements bundles across 0 metrics wait to be processed.

Once everything is running, you can use Gnocchi's client to query it and check that everything is OK. The backlog should be empty at this stage, obviously.

$ gnocchi status
+-----------------------------------------------------+-------+
| Field                                               | Value |
+-----------------------------------------------------+-------+
| storage/number of metric having measures to process | 0     |
| storage/total number of measures to process         | 0     |
+-----------------------------------------------------+-------+

Gnocchi is ready to be used!

$ # Create a generic resource "foobar" with a metric named "visitor"
$ gnocchi resource create foobar -n visitor
+-----------------------+-----------------------------------------------+
| Field                 | Value                                         |
+-----------------------+-----------------------------------------------+
| created_by_project_id |                                               |
| created_by_user_id    | admin                                         |
| creator               | admin                                         |
| ended_at              | None                                          |
| id                    | b4d568e4-7af1-5aec-ac3f-9c09fa3685a9          |
| metrics               | visitor: 05f45876-1a69-4a64-8575-03eea5b79407 |
| original_resource_id  | foobar                                        |
| project_id            | None                                          |
| revision_end          | None                                          |
| revision_start        | 2017-02-07T14:54:54.417447+00:00              |
| started_at            | 2017-02-07T14:54:54.417414+00:00              |
| type                  | generic                                       |
| user_id               | None                                          |
+-----------------------+-----------------------------------------------+

# Send the number of visitors at 2 different timestamps
$ gnocchi measures add --resource-id foobar -m 2017-02-07T15:56@23 visitor
$ gnocchi measures add --resource-id foobar -m 2017-02-07T15:57@42 visitor

# Check the average number of visitors
# (the --refresh option is given to make sure the measures are processed)
$ gnocchi measures show --resource-id foobar visitor --refresh
+---------------------------+-------------+-------+
| timestamp                 | granularity | value |
+---------------------------+-------------+-------+
| 2017-02-07T15:55:00+00:00 |       300.0 |  32.5 |
+---------------------------+-------------+-------+

# Now show the minimum number of visitors
$ gnocchi measures show --aggregation min --resource-id foobar visitor
+---------------------------+-------------+-------+
| timestamp                 | granularity | value |
+---------------------------+-------------+-------+
| 2017-02-07T15:55:00+00:00 |       300.0 |  23.0 |
+---------------------------+-------------+-------+

And voilà! You're ready to store millions of metrics and measures on your Amazon Web Services cloud platform. I hope you'll enjoy it, and feel free to ask any questions in the comment section or by reaching out to me directly!

22 Feb 2017 10:30am GMT

16 Feb 2017

Julien Danjou: Sending your collectd metrics to Gnocchi

Knowing that collectd is a daemon that collects system and application metrics and that Gnocchi is a scalable timeseries database, it sounds like a good idea to combine them. Cherry on the cake: you can easily draw charts using Grafana.

While it's true that Gnocchi is well integrated with OpenStack, as it comes from this ecosystem, it actually works standalone by default. Starting with the 3.1 version, it is now easy to send metrics to Gnocchi using collectd.

Installation

What we'll need to install to accomplish this task is:

How you install them does not really matter. If they are packaged by your operating system, go ahead. For Gnocchi and collectd-gnocchi, you can also use pip:

# pip install gnocchi[file,postgresql]
[…]
Successfully installed gnocchi-3.1.0
# pip install collectd-gnocchi
Collecting collectd-gnocchi
  Using cached collectd-gnocchi-1.0.1.tar.gz
[…]
Installing collected packages: collectd-gnocchi
  Running setup.py install for collectd-gnocchi ... done
Successfully installed collectd-gnocchi-1.0.1

The installation procedure for Gnocchi is detailed in the documentation. Among other things, it explains which flavors are available - here I picked PostgreSQL and the file driver to store the metrics.

Configuration

Gnocchi

Gnocchi is simple to configure and is, again, documented. The default configuration file is /etc/gnocchi/gnocchi.conf - you can generate it with gnocchi-config-generator if needed. It is also possible to specify another configuration file by appending the --config-file option to any command line.

In Gnocchi's configuration file, you need to set the indexer.url configuration option to point an existing PostgreSQL database and set storage.file_basepath to an existing directory to store your metrics (the default is /var/lib/gnocchi). That gives something like:

[indexer]
url = postgresql://root:p4assw0rd@localhost/gnocchi

[storage]
file_basepath = /var/lib/gnocchi


Once done, just run the gnocchi-upgrade command to initialize the index and storage.

collectd

Collectd provides a default configuration file that loads a bunch of plugins by default, which will meter all sorts of metrics on your computer. You can check the documentation online to see how to disable or enable plugins.

As the collectd-gnocchi plugin is written in Python, you'll need to enable the Python plugin and load the collectd-gnocchi module:

LoadPlugin python

<Plugin python>
Import "collectd_gnocchi"
<Module collectd_gnocchi>
endpoint "http://localhost:8041"
</Module>
</Plugin>


That is enough to enable the storage of metrics in Gnocchi.

Running the daemons

Once everything is configured, you can launch gnocchi-metricd and the gnocchi-api daemon:

$ gnocchi-metricd
2017-01-26 15:22:49.018 15971 INFO gnocchi.cli [-] 0 measurements bundles
across 0 metrics wait to be processed.
[…]
# In another terminal
$ gnocchi-api --port 8041
[…]
STARTING test server gnocchi.rest.app.build_wsgi_app
Available at http://127.0.0.1:8041/
[…]

It's not recommended to run Gnocchi using gnocchi-api (as written in the documentation): using uwsgi is a better option. However, for rapid testing, the gnocchi-api daemon is good enough.

Once that's done, you can start collectd:

$ collectd
# Or to run in foreground with a different configuration file:
# $ collectd -C collectd.conf -f

If you have any problem launching collectd, check syslog for more information: there might be an issue loading a module or plugin.

If no errors are printed, then everything's working fine and you should soon see gnocchi-api printing some requests such as:

127.0.0.1 - - [26/Jan/2017 15:27:03] "POST /v1/resource/collectd HTTP/1.1" 409 113
127.0.0.1 - - [26/Jan/2017 15:27:03] "POST /v1/batch/resources/metrics/measures?create_metrics=True HTTP/1.1" 400 91

Enjoying the result

Once everything runs, you can access your newly created resources and metrics by using the gnocchiclient. It should have been installed as a dependency of collectd_gnocchi, but you can also install it manually using pip install gnocchiclient.

If you need to specify a different endpoint you can use the --endpoint option (which defaults to http://localhost:8041). Do not hesitate to check the --help option for more information.

$ gnocchi resource list --details
+---------------+----------+------------+---------+----------------------+---------------+----------+----------------+--------------+---------+-----------+
| id            | type     | project_id | user_id | original_resource_id | started_at    | ended_at | revision_start | revision_end | creator | host      |
+---------------+----------+------------+---------+----------------------+---------------+----------+----------------+--------------+---------+-----------+
| dd245138-00c7 | collectd | None       | None    | dd245138-00c7-5bdc-  | 2017-01-26T14 | None     | 2017-01-26T14: | None         | admin   | localhost |
| -5bdc-94f8-26 |          |            |         | 94f8-263e236812f7    | :21:02.297466 |          | 21:02.297483+0 |              |         |           |
| 3e236812f7    |          |            |         |                      | +00:00        |          | 0:00           |              |         |           |
+---------------+----------+------------+---------+----------------------+---------------+----------+----------------+--------------+---------+-----------+
$ gnocchi resource show collectd:localhost
+-----------------------+-----------------------------------------------------------------------+
| Field                 | Value                                                                 |
+-----------------------+-----------------------------------------------------------------------+
| created_by_project_id |                                                                       |
| created_by_user_id    | admin                                                                 |
| creator               | admin                                                                 |
| ended_at              | None                                                                  |
| host                  | localhost                                                             |
| id                    | dd245138-00c7-5bdc-94f8-263e236812f7                                  |
| metrics               | interface-en0@if_errors-0: 5d60f224-2e9e-4247-b415-64d567cf5866       |
|                       | interface-en0@if_errors-1: 1df8b08b-555a-4cab-9186-f9b79a814b03       |
|                       | interface-en0@if_octets-0: 491b7517-7219-4a04-bdb6-934d3bacb482       |
|                       | interface-en0@if_octets-1: 8b5264b8-03f3-4aba-a7f8-3cd4b559e162       |
|                       | interface-en0@if_packets-0: 12efc12b-2538-45e7-aa66-f8b9960b5fa3      |
|                       | interface-en0@if_packets-1: 39377ff7-06e8-454a-a22a-942c8f2bca56      |
|                       | interface-en1@if_errors-0: c3c7e9fc-f486-4d0c-9d36-55cea855596a       |
|                       | interface-en1@if_errors-1: a90f1bec-3a60-4f58-a1d1-b3c09dce4359       |
|                       | interface-en1@if_octets-0: c1ee8c75-95bf-4096-8055-8c0c4ec8cd47       |
|                       | interface-en1@if_octets-1: cbb90a94-e133-4deb-ac10-3f37770e32f0       |
|                       | interface-en1@if_packets-0: ac93b1b9-da71-4876-96aa-76067b35c6c9      |
|                       | interface-en1@if_packets-1: 2f8528b2-12ae-4c4d-bec7-8cc987e7487b      |
|                       | interface-en2@if_errors-0: ddcf7203-4c49-400b-9320-9d3e0a63c6d5       |
|                       | interface-en2@if_errors-1: b249ea42-01ad-4742-9452-2c834010df71       |
|                       | interface-en2@if_octets-0: 8c23013a-604e-40bf-a07a-e2dc4fc5cbd7       |
|                       | interface-en2@if_octets-1: 806c1452-0607-4b56-b184-c4ffd48f52c0       |
|                       | interface-en2@if_packets-0: c5bc6103-6313-4b8b-997d-01930d1d8af4      |
|                       | interface-en2@if_packets-1: 478ae87e-e56b-44e4-83b0-ed28d99ed280      |
|                       | load@load-0: 5db2248d-2dca-401e-b2e2-bbaee23b623e                     |
|                       | load@load-1: 6f74ac93-78fd-4a74-a47e-d2add487a30f                     |
|                       | load@load-2: 1897aca1-356e-4791-907f-512e516992b5                     |
|                       | memory@memory-active-0: 83944a85-9c84-4fe4-b471-1a6cf8dce858          |
|                       | memory@memory-free-0: 0ccc7cfa-26a5-4441-a15f-9ebb2aa82c6d            |
|                       | memory@memory-inactive-0: 63736026-94c4-47c5-8d6f-a9d89d65025b        |
|                       | memory@memory-wired-0: b7217fd6-2cdc-4efd-b1a8-a1edd52eaa2e           |
| original_resource_id  | dd245138-00c7-5bdc-94f8-263e236812f7                                  |
| project_id            | None                                                                  |
| revision_end          | None                                                                  |
| revision_start        | 2017-01-26T14:21:02.297483+00:00                                      |
| started_at            | 2017-01-26T14:21:02.297466+00:00                                      |
| type                  | collectd                                                              |
| user_id               | None                                                                  |
+-----------------------+-----------------------------------------------------------------------+
% gnocchi metric show -r collectd:localhost load@load-0
+------------------------------------+-----------------------------------------------------------------------+
| Field                              | Value                                                                 |
+------------------------------------+-----------------------------------------------------------------------+
| archive_policy/aggregation_methods | min, std, sum, median, mean, 95pct, count, max                        |
| archive_policy/back_window         | 0                                                                     |
| archive_policy/definition          | - timespan: 1:00:00, granularity: 0:05:00, points: 12                 |
|                                    | - timespan: 1 day, 0:00:00, granularity: 1:00:00, points: 24          |
|                                    | - timespan: 30 days, 0:00:00, granularity: 1 day, 0:00:00, points: 30 |
| archive_policy/name                | low                                                                   |
| created_by_project_id              |                                                                       |
| created_by_user_id                 | admin                                                                 |
| creator                            | admin                                                                 |
| id                                 | 5db2248d-2dca-401e-b2e2-bbaee23b623e                                  |
| name                               | load@load-0                                                           |
| resource/created_by_project_id     |                                                                       |
| resource/created_by_user_id        | admin                                                                 |
| resource/creator                   | admin                                                                 |
| resource/ended_at                  | None                                                                  |
| resource/id                        | dd245138-00c7-5bdc-94f8-263e236812f7                                  |
| resource/original_resource_id      | dd245138-00c7-5bdc-94f8-263e236812f7                                  |
| resource/project_id                | None                                                                  |
| resource/revision_end              | None                                                                  |
| resource/revision_start            | 2017-01-26T14:21:02.297483+00:00                                      |
| resource/started_at                | 2017-01-26T14:21:02.297466+00:00                                      |
| resource/type                      | collectd                                                              |
| resource/user_id                   | None                                                                  |
| unit                               | None                                                                  |
+------------------------------------+-----------------------------------------------------------------------+
$ gnocchi measures show -r collectd:localhost load@load-0
+---------------------------+-------------+--------------------+
| timestamp                 | granularity |              value |
+---------------------------+-------------+--------------------+
| 2017-01-26T00:00:00+00:00 |     86400.0 | 3.2705004391254193 |
| 2017-01-26T15:00:00+00:00 |      3600.0 | 3.2705004391254193 |
| 2017-01-26T15:00:00+00:00 |       300.0 | 2.6022800611413044 |
| 2017-01-26T15:05:00+00:00 |       300.0 |  3.561742940080275 |
| 2017-01-26T15:10:00+00:00 |       300.0 | 2.5605337960379466 |
| 2017-01-26T15:15:00+00:00 |       300.0 |  3.837517851142473 |
| 2017-01-26T15:20:00+00:00 |       300.0 | 3.9625948392427883 |
| 2017-01-26T15:25:00+00:00 |       300.0 | 3.2690042162698414 |
+---------------------------+-------------+--------------------+

As you can see, the command line works smoothly and can show you any kind of metric reported by collectd. In this case collectd was just running on my laptop, but it's easy enough to point collectd on thousands of hosts at the same Gnocchi installation and query them all this way.
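For example, instead of inspecting hosts one at a time, Gnocchi can compute an aggregate of a metric across all resources of a type in one call. Here's a sketch, assuming a reasonably recent gnocchiclient (the exact query syntax may vary between versions):

$ gnocchi measures aggregation -m load@load-0 \
      --resource-type collectd \
      --query 'type=collectd' \
      --aggregation mean

This would return a single time series of the mean load@load-0 across every collectd resource Gnocchi knows about.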

Bonus: charting with Grafana

Grafana, a charting tool, has a plugin for Gnocchi, as detailed in the documentation. Once it is installed, you can configure Grafana to point at Gnocchi this way:

[Screenshot: Grafana configuration screen]
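Depending on your deployment, if the Grafana plugin queries the Gnocchi API directly from the browser, the API has to accept cross-origin requests. As far as I know, that means enabling CORS in gnocchi.conf along these lines (the origin below is a placeholder; use your own Grafana URL):

[cors]
allowed_origin = http://grafana.example.com:3000
allow_credentials = true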

You can then create a new dashboard by filling in the forms as you wish. Here is another screenshot showing a nice example:

[Screenshot: charts of my laptop's load average]

I hope everything is clear and easy enough. If you have any questions, feel free to write something in the comments section!

16 Feb 2017 8:53am GMT

13 Feb 2017

feedplanet.freedesktop.org

Eric Anholt: These weeks in vc4 (2017-02-13): HDMI audio, DSI, 3D stability

Probably the biggest news of the last two weeks is that Boris's native HDMI audio driver is now on the mailing list for review. I'm hoping that we can get this merged for 4.12 (4.10 is about to be released, so we're too late for 4.11). We've tested stereo audio so far but not compressed audio (though I think it should Just Work), and >2-channel audio should be a relatively small amount of work from here. The next step on HDMI audio is to write the alsa-lib configuration snippets necessary to hide the weird details of HDMI audio (stereo IEC958 frames required) so that sound playback works normally for all existing userspace; Boris should still have a bit of time to work on that.

I've also landed the vc4 part of the DSI driver in the drm-misc-next tree, along with a fixup. Working with the new drm-misc-next trees for vc4 submission is delightful -- once I get a review, I can just push code to the tree and it will magically (through Daniel Vetter's pull requests and some behind-the-scenes tools) go upstream at the appropriate time. I am delighted with the work that Daniel has been doing to make the DRM subsystem more welcoming of infrequent contributors and to reduce the burden of participating in the Linux kernel process for all of us.

In 3D land, the biggest news is that I've fixed a kernel oops (producing a desktop lockup) when CMA returns out of memory. Unfortunately, VC4 doesn't have an MMU, so we require that memory allocations for graphics be contiguous (using the Contiguous Memory Allocator), and to make things worse we have a limit of 256MB for CMA due to an addressing bug in the V3D part, so CMA returning out of memory is a serious and unfortunately frequent problem. I had a bug where a failed CMA allocation would be put into the BO cache, so if you asked for a BO of the same size soon enough, you'd get the bad BO back and crash. I've been unable to construct a good minimal testcase for it, but the patch is on the mailing list and in the rpi-4.4.y tree now.

I've also fixed a bug in the downstream tree's "firmware KMS" mode (where I use the closed source firmware for display, but open source 3D) where fullscreen 3D rendering would get a single frame displayed and then freeze.

In userspace, I've fixed a bit of multisample rendering (copies of MSAA depth buffers) that gets more of our regression tests passing, and worked on some potential rasterization problems: the texwrap regression tests are failing due to texture filtering troubles, and I'm not sure whether we're actually doing anything wrong, because we're right at the cutoff for how accurate the filtering has to be.

Coming up, I'm looking at fixing a cursor-update bug that Michael Zoran has found, and fixing up the DSI panel driver so that it can hopefully get into 4.12. Also, after a recent discussion with Eben, I've realized that we can actually scan out tiled buffers from the GPU, so we may get big performance wins for un-composited X if I can have Mesa coordinate with the kernel on producing tiled buffers.

13 Feb 2017 10:19pm GMT

10 Feb 2017

feedplanet.freedesktop.org

Peter Hutterer: libinput knows about internal and external touchpads

libinput has a couple of features that 'automagically' work on touchpads, such as disable-while-typing, lid-switch-triggered disabling of the touchpad, and disabling the touchpad when an external mouse is plugged in [1]. But not all of these features make sense on all touchpads. For example, an Apple Magic Trackpad doesn't need disable-while-typing because, unless you have a creative arrangement of input devices [2], the touchpad won't be where your palm is likely to hit it. Likewise, a Logitech T650 connected over a Unifying receiver shouldn't get disabled when the laptop lid closes.

For this to work, libinput has some code to figure out whether a touchpad is internal or external. Initially we detected this in libinput itself, but eventually moved the decision to the ID_INPUT_TOUCHPAD_INTEGRATION property now set by udev's hwdb (systemd 231 and later). Having it in the hwdb makes it quite trivial to override locally where the current rules are insufficient, as sketched below (and until the hwdb is fixed upstream, thanks for filing a bug). We still have the fallback code, though, in case the tag is missing. On a sufficiently modern distribution, udevadm info /sys/class/input/event4 for your touchpad device node should show something like ID_INPUT_TOUCHPAD_INTEGRATION=internal.
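As a sketch of such a local override (the match key below is hypothetical; copy the exact key format for your device from systemd's own hwdb/70-touchpad.hwdb), you drop a file into /etc/udev/hwdb.d/ and rebuild the hwdb:

# /etc/udev/hwdb.d/99-local-touchpad.hwdb
# hypothetical match key; check 70-touchpad.hwdb for the real format
touchpad:usb:v046dp4101*
 ID_INPUT_TOUCHPAD_INTEGRATION=external

$ sudo systemd-hwdb update
$ sudo udevadm trigger /dev/input/event4

Afterwards, udevadm info on the device node should show the new property value.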

So for any feature that libinput adds for touchpads, we only enable it where it makes sense. That's why your external touchpad doesn't trigger disable-while-typing or the lid switch.

[1] ok, I admit, this is something we should've left to the client, but now we have the feature.
[2] yes, I'm sure there's at least one person out there that uses the touchpad upside down in front of the keyboard and is now angry that libinput doesn't allow arbitrary rotation of the device combined with configurable dwt. I think of you every night I cry myself to sleep.

10 Feb 2017 12:27am GMT

06 Feb 2017

feedplanet.freedesktop.org

Julien Danjou: FOSDEM 2017, recap

Last weekend, I was in Brussels, Belgium for the 2017 edition of FOSDEM, one of the greatest open source developer conferences.

This year, I decided to propose a talk about Gnocchi, which was accepted in the Python devroom. The track was very well organized (thanks to Stéphane Wirtel) and I was able to present Gnocchi to a room full of Python developers!

I explained why we created Gnocchi and how we did it, and finished by briefly showing how to use it with the command-line interface or in a Python application using the SDK.

You can check the slides and the video of the talk.

06 Feb 2017 10:54am GMT

03 Feb 2017

feedplanet.freedesktop.org

Daniel Vetter: LCA Hobart: Maintainers Don't Scale

It seems that a rift in spacetime sucked away the video of my LCA talk, but the awesome NextDayVideo team managed to pull it back out. The writeup and slides are still available as well.

03 Feb 2017 12:00am GMT