28 Apr 2016

Daniel Vetter: X.Org Foundation Election Results

Two questions were up for voting: 4 seats on the Board of Directors, and approval of the amended By-Laws to join SPI.

Congratulations to our reelected and new board members Egbert Eich, Alex Deucher, Keith Packard and Bryce Harrington. Thanks a lot to Lucas Stach for running. And also big thanks to our outgoing board member Matt Dew, who stepped down for personal reasons.

On the bylaw changes and merging with SPI, 61 out of 65 active members voted, with 54 voting yes, 4 voting no and 3 abstaining. That means we're well past the 2/3 quorum required for bylaw changes, and everything's green now to proceed with the plan to join SPI!

28 Apr 2016 6:37am GMT

27 Apr 2016

Nicolai Hähnle: Using LLVM bugpoint to hunt radeonsi compiler crashes

Shaders can be huge, and tracking down compiler crashes (or asserts) in LLVM with a giant shader isn't a lot of fun. Luckily, LLVM has a tool called Bugpoint. It takes a given piece of LLVM IR and tries a bunch of simplifications such as removing instructions or basic blocks, while checking that a given condition is still satisfied. Make the given condition something like "llc asserts with message X", and you have a very useful tool for reducing test cases. Unfortunately, its documentation isn't the greatest, so let me briefly dump how I have used it in the past.

I have a little script called run_llc.sh that looks like this:


#!/bin/bash

# Wrapper for bugpoint -compile-custom: exit non-zero when llc still prints
# the error/assertion message we are hunting, so that bugpoint knows the
# reduced IR still reproduces the problem.
if ! llc -mtriple=amdgcn-- -verify-machineinstrs "$@" 2>&1 | grep "error message here"; then
    exit 0    # message not found: this reduction no longer shows the bug
else
    exit $?   # message found: non-zero exit tells bugpoint the bug is still present
fi

When I encounter a compiler assertion, I first make sure to collect the offending shader from our driver using R600_DEBUG=ps,vs,gs,tcs,tes and extract it into a file like bug.ll. (In very rare cases, one may need the preoptir option in R600_DEBUG.) Then I edit run_llc.sh with the correct error message and run


bugpoint -compile-custom -compile-command ./run_llc.sh bug.ll

It'll churn for some time and produce a hopefully much smaller .bc file on which one can use the usual tools, such as llc, opt, and llvm-dis.

Occasionally, it can be useful to run the result through opt -instnamer or to simplify it further by hand, but usually, bugpoint provides a good starting point.
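For example, a quick sketch of what that follow-up usually looks like for me (the reduced file name is the one bugpoint typically writes, and the intermediate file names are made up):

llvm-dis bugpoint-reduced-simplified.bc -o reduced.ll          # back to readable IR
opt -instnamer -S reduced.ll -o reduced-named.ll               # give values readable names
llc -mtriple=amdgcn-- -verify-machineinstrs reduced-named.ll   # confirm the crash still reproduces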

27 Apr 2016 11:11pm GMT

26 Apr 2016

Matthias Klumpp: Why are AppStream metainfo files XML data?

This is a question raised quite often, most recently in a blogpost by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also create an article to link to…).

There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapting existing entries to new needs.

Take this example XML line for defining an icon for an application:

<icon type="cached">foobar.png</icon>

and now the equivalent YAML:

Icons:
  cached: foobar.png

Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:

<icon type="cached" width="128" height="128">foobar.png</icon>

This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible.

This looks different in the YAML file. The "foobar.png" is a plain string, and parsers will expect a string as the value of the cached key, while we would need a dictionary there to include the additional width/height information:

Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128

The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:

Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128

Less than ideal.

While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents so that the risk of breaking compatibility later is minimized, keeping the format future-proof is far easier with XML than with YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this use case, since we cannot easily do transitions with thousands of independent upstream projects, and need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build time, keeping a clean, minimal XML file, or beforehand, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):

<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>

This would become something like this in YAML:

Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering

Looks manageable, right? Now, AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. This looks like this in XML:

<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
    <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
    <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
    <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
    <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
    ....
  </ul>
</description>

Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the realistic choices boil down to embedding the markup in a single translatable string in one form or another. In either case, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in YAML, translators would need to translate the whole description again, which is inconvenient.

On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.
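For XML, by contrast, the existing Gettext-based tooling already covers this. Roughly, the workflow looks like the following sketch (written from memory, so the exact itstool flags may differ - check its documentation; the file names are made up):

itstool -o org.example.App.pot org.example.App.appdata.xml    # extract translatable strings
# ... translators work on .po files, which are compiled to .mo ...
itstool -j org.example.App.appdata.xml -o out/org.example.App.appdata.xml de.mo fr.mo    # merge translations back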

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. At the moment, we have a clear story: write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity and also has the problems mentioned above.
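For an upstream project, that whole story boils down to something like this sketch (the file name is made up, and I'm assuming the appstreamcli tool is available for validation):

appstreamcli validate org.example.FooBar.appdata.xml    # check the file before shipping it
install -Dm644 org.example.FooBar.appdata.xml /usr/share/metainfo/org.example.FooBar.appdata.xml    # install to the canonical location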

I wanted to add YAML as a format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn't worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too, and keep XML and YAML in sync and updated). Don't get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability it is not the ideal choice.

So yeah, XML isn't fun to write by hand. But for this case, XML is a good choice.

26 Apr 2016 4:20pm GMT

Matthias Klumpp: A GNOME Software Hackfest report

Two weeks ago the GNOME Software hackfest took place in London, and I was there! I only just found the time to blog about it, but better late than never 😉 .

Arriving in London and finding the Red Hat offices

After being stuck in trains for the weekend, but fortunately arriving at the airport in time, I finally made it to London with quite some delay due to the slow bus transfer from Stansted Airport. After finding the hotel, the next issue was to find food and a place which accepted my credit card, which was surprisingly hard - in defence of London I must say, though, that it was a Sunday, 7 p.m., and my card is somewhat special (in Canada, it managed to crash some card readers, so they needed a hard reset). While searching for food, I also stumbled upon the Red Hat offices where the hackfest would start the next day. My hotel, the office and the Tower Bridge were really close together, which was awesome! The last time I had been to London was in 2008, and only for a day, so being that close to the city center was great. The hackfest didn't leave any time to visit the city much, but by being close to the center, one could hardly avoid the "London experience" 😉 .

Cool people working on great stuff

That's basically the summary of the hackfest 😉 . It was awesome to meet Richard Hughes again - we hadn't seen each other in person since 2011, even though we work on lots of stuff together. This was especially important, since we managed to sort out quite some disagreements we had over stuff - Richard even almost managed to make me give in to adding <kudos/> to the AppStream spec, something which I was pretty much against supporting (it didn't make it yet, but I am no longer against the idea of having that - the remaining issues are solvable).

Meeting Iain Lane again (after FOSDEM) was also very nice, and seeing other people I had only worked with over IRC or bug reports (e.g. William, Kalev, …) was great. Also, lots of "new" people were there, like the guys from Endless, who build their low-budget computer for developing/emerging countries on top of GNOME and Linux technologies. It's pretty cool stuff they do, you should check out their website! (They also build their distribution on top of Debian, which is even more awesome, and something I didn't know before - because many Endless people I met before were associated with GNOME or Fedora, I kind of implicitly assumed the system was based on Fedora 😛 .)

The incarnation of GNOME Software used by Endless looks pretty different from what the normal GNOME user sees, since it's adjusted for a different audience and input method. But it looks great, and is a good example of how versatile GS already is! And for upstream GNOME, we've seen some pretty great mockups done by Endless too - I hope those will make it into production somehow.

Ironically, a "snapstore" was close to the office ;-)

XdgApp and sandboxing of apps was also a big topic, aside from Ubuntu and Endless integration. Fortunately, Alexander Larsson was also there to answer all the sandboxing and XdgApp-questions.

I used the time to follow up on a conversation with Alexander that we started at FOSDEM this year, about the Limba vs. XdgApp bundling issue. While we are in line on the sandboxing approach, the way software is distributed is implemented differently in Limba and XdgApp, and it is bad to have too many bundling systems around (it doesn't make for a good story where we can just tell developers "ship as this bundling format, and it will be supported everywhere"). Talking with Alex about this was very nice, and I think there is a way out of the too-many-solutions dilemma, at least for Limba and XdgApp - I will blog about that separately soon.

On the Ubuntu side, a lot of bugs and issues were squashed and changes upstreamed to GNOME, and people were generally doing their best to reduce Richard's bus-factor on the project a little 😉 .

I mainly worked on AppStream issues, finishing up the last pieces of appstream-generator and running it against some sample package sets (and later that week against the whole Debian archive). I also started to implement support for showing AppStream issues in the Debian PTS (this work is not finished yet). I also managed to solve a few bugs in the old DEP-11 generator and prepare another release for Ubuntu.

We also enjoyed some good Japanese food, and some incredibly great, but also suddenly very expensive Indian food (but that's a different story 😉 ).

The most important thing for me though was to get together with people actually using AppStream metadata in software centers and also in more specialized places. This yielded some useful findings, e.g. that localized screenshots are not something weird, but actually a wanted feature of Endless for their curated AppStore. So localized screenshots will be part of the next AppStream spec. Also, there seems to be a general need to ship curation information for software centers somehow (which apps are featured? How are they styled? Are there special banners for some featured apps, "app of the day" features, etc.?). This problem hasn't been solved yet, since it's highly implementation-specific, and AppStream should be distro-agnostic. But it is something we might be able to address in a generic way sooner or later (I need to talk to people at KDE and Elementary about it).

In summary…

It was a great event! Going to conferences and hackfests always makes me feel like it moves projects leaps ahead, even if you do little coding. Sorting out issues together with people you see in person (rather than communicating with them via text messages or video chat) is IMHO always the most productive way to move forward (yeah, unless you do this every week, but I think you get my point 😀 ).

For me, being the only (and youngest ^^) developer at the hackfest who was not employed by any company in the FLOSS business, the hackfest was also motivation to continue investing spare time into working on these projects.

So, the only thing left to do is a huge shout-out of "THANK YOU" to the Ubuntu Community Fund - and therefore the Ubuntu community - for sponsoring me! You rock! Also huge thanks to Canonical for organizing the sponsorship really quickly, so I didn't get into trouble with paying for my flights.

Laney and attente on the Millennium Bridge after we walked the distance between Red Hat and Canonical's offices.

To worried KDE people: No, I didn't leave the blue side - I just generally work on cross-desktop stuff, and would like all desktops to work as well as possible 😉

26 Apr 2016 7:50am GMT

24 Apr 2016

Daniel Vetter: Should the X.org Foundation join SPI? Vote Now!



In case you missed it: Please vote now on https://members.x.org/login.php!

24 Apr 2016 9:14am GMT

20 Apr 2016

Daniel Vetter: X.org Foundation Election - Vote Now!

It's election season in X.org land, and it matters: Besides new board seats we're also voting on bylaw changes and whether to join SPI or not.

Personally, and as the secretary of the board, I'm very much in favour of joining SPI. It will allow us to offload all the boring bits of running a foundation, and those are also all the bits we tend to struggle with. And that would give the board more time to do things that actually matter and help the community. And all that for a really reasonable price - running our own legal entity isn't free, and not really worth it for our small budget, which mostly consists of travel sponsoring and the occasional internship.

Since bylaw changes need a qualified supermajority of all members, every vote counts, and not voting essentially means voting no. Hence please vote, and please vote even when you don't want to join - this is our second attempt and I'd really like to see a clear verdict from our members, one way or the other.

Thanks.

Voting closes by Apr 26 23:59 UTC, but please don't cut it too close - it's a computer that decides when it's over ...

20 Apr 2016 7:24am GMT

18 Apr 2016

Christian Schaller: Fedora Workstation Phase 1 – Homestretch

When we set off to do Fedora Workstation we had some clear ideas about where we wanted to take it, but we also realized there was a lot of cleaning up needed in our stack to make our vision viable. The biggest change we felt was needed to enable us was the move towards using application bundles as the primary shipping method for applications, as opposed to the fine-grained package universe that RPMs represent. That said, we also saw the many advantages the packages brought in terms of easing security updates and allowing people to fine-tune their system, so we didn't want to throw the proverbial baby out with the bathwater. So we started investigating the various technologies out there, as we were of course not alone in thinking about these things. Unfortunately nothing clearly fit the bill of what we wanted, and trying to modify for instance Docker to be a good technology for running desktop applications turned out to be nonviable. So we tasked Alex Larsson with designing and creating what today is known as xdg-app. Our requirements list looked something like this (in random order):

a) Easy bundling of needed libraries
b) A runtime system to reduce the application sizes to something more manageable and ease providing security updates.
c) A system designed to be managed by a desktop session as opposed to managed by sysadmin style tools
d) A security model that would let us gradually move towards sandboxing applications and alleviate the downsides of library bundling
e) An ability to reliably offer online updates of applications
f) Reuse as much of the technology created by others as possible to lower maintenance overhead
g) Design it in a way that makes supporting the applications cross multiple distributions possible and easy
h) Provide a top notch developer story so that this becomes a big positive for application developers and not another pain point.

As we investigated what we needed, other requirements became obvious, like the need to migrate from X to Wayland in order to build a modern composited windowing system that renders using GL, instead of an aging one with a rendering interface that is for the most part no longer used, and to be able to provide the level of system security we wanted. There was also the need to design new systems like Pinos for handling video, to add new functionality to PulseAudio for dealing with sandboxed applications, and to create libinput to have great input handling in Wayland while also letting us share the input subsystem between X and Wayland. And of course we wanted our new packaging system tightly integrated into GNOME Software so that installing, updating and running these applications became smooth and transparent to the user.

This would be a big undertaking and it turned out to be an even bigger effort than we initially thought, as there was a lot of swamp draining needed here, and I am really happy that we have a team capable of pulling these things off. For instance there are not really many other people in the Linux community besides Peter Hutterer who could have created libinput, and without libinput there is no way Wayland would be a viable alternative to X (and without libinput neither would Mir, which is a bit ironic for a system that was created because they couldn't wait for Wayland :).

So going back to the title of this blog entry, I feel that we are now finally exiting what I think of as Phase 1 of our development roadmap, even if we never formally designated it as such. For instance we initially hoped to have Wayland feature complete in a Fedora 22 timeframe, but it has taken us extra time to get all the smaller details right, so instead we now have what we consider the first feature-complete Wayland support ready with Fedora Workstation 24. And if things go as we expect and hope, that should become our default starting from Fedora Workstation 25. The X Window session will be shipping and available for a long time yet, I am sure, but not defaulting to it will mark a clear line in the sand for where the development focus is going forward.

At the same time xdg-app has started to come together nicely over the last few months, with a lot of projects experimenting with it and bugs and issues being quickly resolved. We are also taking major steps towards bringing xdg-app into the mainstream, with Alex now working on making xdg-apps OCI compliant, basically meaning that xdg-apps conform to the Open Container Initiative requirements defined by opencontainers.org. Our expectation is that the next xdg-app development release will include the needed bits to be OCI compliant. Another nice milestone for xdg-app was that it recently got added to Debian, meaning that xdg-apps should be more easily runnable in both Fedora and its downstreams and in Debian and its downstreams.

Another major piece of engineering that is coming to a close is moving major applications such as Firefox, LibreOffice and Eclipse to GTK3. This was needed to get these applications able to run natively on Wayland, but it also enabled us to make them work nicely for HiDPI. This has also played into how GTK3 has positioned itself: as a toolkit dedicated to pushing the Linux desktop forward and helping it quickly adapt to changes in the technology landscape. This is why GTK3 is the toolkit that has been leading the charge on things like HiDPI support and Wayland support. At the same time we know some of the changes in GTK3 have been painful for application developers, especially the changes in how theming works, but with the bulk of the migration to using CSS for theming now complete, we expect that even for applications that use GTK3 in 'weird ways' like Firefox, LibreOffice and Eclipse, things should be stable.

Another piece of the puzzle we have wanted to improve is the general Linux hardware story. So since Red Hat joined Khronos last year, the Red Hat Graphics team, with Dave Airlie and Adam Jackson leading the charge, has been able to participate in preparing the launch of Vulkan through doing review, and we even managed to sneak in a bit of code contribution by Adam Jackson, ensuring that there was a vendor-neutral Vulkan loader available so that we didn't end up in a situation where every vendor had to provide their own.

We have also been contributing to the vendor-neutral OpenGL dispatcher. The dispatcher is basically a layer that routes an application's OpenGL rendering to the correct implementation, so if you have a system with a discrete GPU you can for instance control which of your two GPUs handles a certain application or game. Adam Jackson has also been collaborating closely with NVidia on getting such a dispatch system complete for OpenGL, so that the age-old problem of the Mesa OpenGL library and the proprietary NVidia OpenGL library conflicting can finally be resolved. NVidia has of course handled the part in their driver and they were also the ones designing this, but Adam has been working on getting the Mesa parts completed. We think that this will make the GPU story on Linux a lot nicer going forward. There are still a few missing pieces before we have the discrete graphics card scenario handled in a smooth way, but we are getting there quickly.

The other thing we have been working on in terms of hardware support, which is still ongoing, is improving the Red Hat certification process to cover more desktop hardware. You might ask what that has to do with Fedora Workstation, but it actually is a quite efficient way of improving the quality of Linux support for desktop hardware in general, as most of the major vendors submit some of their laptops and desktops to Red Hat for certification. So the more issues the Red Hat certification process can detect, the better Linux support on such hardware can become.

Another major effort where we have managed to cover a lot of our goals and targets is GNOME Software. Since the inception of Fedora Workstation we have taken that tool and added functionality like UEFI firmware updates, codec and font handling, GNOME Extensions handling, system upgrades, xdg-app handling, user reviews, improved application metadata, improved handling of 3rd party repositories and improved general performance with the move from yum to hawkey. And I think that the software store has become a crucial part of what you expect of a desktop these days, with things like the Google Play store, the Apple Store and the Microsoft store to some degree defining their respective products more than the heuristics of the shell of Android, iPhone, MacOS or Windows. And I take it as a clear recognition of the great work Richard Hughes has done with GNOME Software that this week there is a special GNOME Software hackfest in London with participants from Fedora/Red Hat, Ubuntu/Canonical, Codethink and Endless.

So I am very happy with where we are at, and I want to say thank you to all long-term Fedora users who have been with us through the years, and also say thank you and welcome to all the new Fedora Workstation users who have seen all the cool stuff we have been doing and decided to join us over the last two years; seeing the strong growth in our user base during this time has been a great source of joy for us and a verification that we are on the right track.

I am also very happy about how the culmination of these efforts will be on display with the upcoming Fedora Workstation 24 release! Of course it also means it is time for the Fedora Workstation Working group to start planning what Phase 2 of reaching the Fedora Workstation vision should be :)

18 Apr 2016 3:06pm GMT

Peter Hutterer: libinput and graphics tablet pad support

When we released graphics tablet support in libinput earlier this year, only tablet tools were supported. So while you could use the pen normally, the buttons, rings and strips on the physical tablet itself (the "pad") weren't detected by libinput and did not work. I have now merged the patches for pad support into libinput.

The reason for the delay was simple: we wanted to get it right [1]. Pads have a couple of properties that tools don't have, and since we always considered pads to be different from pens we initially focused on a more generic interface (the "buttonset" interface) to accommodate those. After some coding, we have now arrived at a tablet pad-specific interface instead. This post is a high-level overview of the new tablet pad interface and how we intend it to be used.

The basic sign that a pad is present is when a device has the tablet pad capability. Unlike tools, pads don't have proximity events, they are always considered in proximity and it is up to the compositor to handle the focus accordingly. In most cases, this means tying it to the keyboard focus. Usually a pad is available as soon as a tablet is plugged in, but note that the Wacom ExpressKey Remote (EKR) is a separate, wireless device and may be connected after the physical pad. It is up to the compositor to link the EKR with the correct tablet (if there is more than one).

Pads have three sources of events: buttons, rings and strips. Rings and strips are touch-sensitive surfaces and provide absolute values - rings in degrees, strips in normalized [0.0, 1.0] coordinates. Similar to pointer axis sources we provide a source notification. If that source is "finger", then we send a terminating out-of-range event so that the caller can trigger things like kinetic scrolling.

Buttons on a pad are ... different. libinput usually re-uses the Linux kernel's include/input.h [2] event codes for buttons and keys. But for the pad we decided to use plain sequential button numbering, starting at index 0. So rather than a semantic code like BTN_LEFT, you'd simply get a button 0 event. The reasoning behind this is a caveat in the kernel evdev API: event codes have semantic meaning (e.g. BTN_LEFT), but buttons on a tablet pad don't have those meanings. There are some generic event ranges (e.g. BTN_0 through to BTN_9) and the Wacom tablets use those, but once you have more than 10 buttons you leak into other ranges. The ranges are simply too narrow, so we end up with seemingly different buttons even though all buttons are effectively the same. libinput's pad support undoes that split and combines the buttons into a simple sequential range, leaving any semantic mapping of buttons to the caller. Together with libwacom, which describes the location of the buttons, a caller can get a relatively good idea of what the layout looks like.
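If you want to see those events on a real device, the debugging tool that ships with libinput is the quickest way. A rough sketch (tool name and option as I remember them for libinput of this era, and the event node is just an example - adjust both for your system):

sudo libinput-debug-events --device /dev/input/event5    # prints pad button/ring/strip events as libinput sees them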

Mode switching is a commonly expected feature on tablets. One button is designated as the mode switch button and toggles all other buttons between the available modes. On the Intuos Pro series tablets, that button is usually the button inside the ring. Button mapping and thus mode switching is however a feature we leave up to the caller; if you're working on a compositor, you will have to implement mode switching there.

Other than that, pad support is relatively simple and straightforward and should not cause any big troubles.

[1] or at least less wrong than in the past
[2] They're actually linux/input-event-codes.h in recent kernels

18 Apr 2016 7:14am GMT

16 Apr 2016

Tollef Fog Heen: Blog moved, new tech

I moved my blog around a bit and it appears that static pages are now in favour, so I switched to that, by way of Hugo. CSS and such needs more tweaking, but it'll do for now.

As part of this, RSS feeds and such changed; if you want to subscribe to this (very seldom updated) blog, use https://err.no/personal/blog/index.xml

16 Apr 2016 8:42pm GMT

Matthias Klumpp: Introducing AppStream-Generator

Since mid-2015 we have been using the dep11-generator in Debian to build AppStream metadata about the software components available in the distribution.

Getting rid of dep11-generator

Unfortunately, the old Python-based dep11-generator hit some hard limits pretty soon. For example, using multiprocessing with Python was a pain, since it resulted in some very hard-to-track bugs. Also, the multiprocessing approach (as opposed to multithreading) made it impossible to use the underlying LMDB database properly (it was basically closed and reopened in each forked-off process, since pickling the Python LMDB object caused some really funny bugs, which usually manifested themselves in the application hanging forever without any information on what was going on). In addition to that, the Python-based generator forced me to maintain two implementations of the AppStream YAML spec, one in C and one in Python, which consumed quite some time. There were also some other issues (e.g. no unit tests) in the implementation, which made me think about rewriting the generator.

Adventures in Go / Rust / D

Since I didn't want to write this new piece of software in C (or basically, writing it in C was my last option 😉 ), I explored Go and Rust for this purpose and also did a small prototype in the D programming language, when I was starting to feel really adventurous. And while I never intended to write the new generator in D (I was pretty fixated on Go…), this is what happened. The strong points for D for this particular project were its close relation to C (and ease of using existing C code), its super-flat learning curve for someone who knows and likes C and C++ and its pretty powerful implementations of the concurrent and parallel programming paradigms. That being said, not all is great in D and there are some pretty dark spots too, mainly when it comes to the standard library and compilers. I will dive into my experiences with D in a separate blogpost.

What good to expect from appstream-generator?

So, what can the new appstream-generator do for you? Basically, the same as the old dep11-generator: It will extract metadata from a distribution's package archive, download and resize screenshots, search for icons and size them properly and generate reports in JSON and HTML of found metadata and issues.

LibAppStream-based parsing, generation of YAML or XML, multi-distro support, …

As opposed to the old generator, the new generator utilizes the metadata parsers and writers of libappstream. This allows it to return the extracted metadata as AppStream YAML (for Debian) or XML (everyone else). It is also written in a distribution-agnostic way, so if someone wants to use it in a different distribution than Debian, this is possible now. It just requires a very small distribution-specific backend to be written; all of the details of the metadata extraction are abstracted away (just two interfaces need to be implemented). While I do not expect anyone except Debian to use this in the near future (most distros have found a solution to generate metadata already), the frontend-backend split is a much cleaner design than what was available in the previous code. It also allows unit-testing the code properly, without providing a Debian archive in the testsuite.

Feature Flags, Optipng, …

The new generator also allows enabling and disabling certain sets of features in a standardized way. E.g. Ubuntu uses a language-pack system for translations, which Debian doesn't use. Features like this can be implemented as separate, disableable modules in the generator. We use this at the moment to e.g. allow descriptions from packages to be used as AppStream descriptions, or for running optipng on the generated PNG images and icons.

No more Contents file dependency

Another issue the old generator had was that it used the Contents file from the Debian archive to find matching icons for an application. We could never be sure whether the contents listed in the Contents file actually matched the contents of the package we were currently dealing with. What made things worse is that at Ubuntu, the archive software is only updating the Contents file daily (while the generator might run multiple times a day), which has led to software being ignored in the metadata, because icons could not yet be found. Even on Debian, with its quickly-updated Contents file, we could immediately see the effects of an out-of-date Contents file when updating it failed once. In the new generator, we read the contents of each package ourselves now and store them in an LMDB database, bypassing the Contents file and removing the whole class of problems resulting from missing or wrong contents data.

It can't all be good, right?

That is true; the new generator also has some known issues:

Large amounts of RAM required

The better speed of the new generator comes at the cost of holding more stuff in RAM. Much more. When processing data from 5 architectures initially on Debian, the amount of required RAM might lie above 4GB, with the OOM killer sometimes being quicker than the garbage collector… That being said, on subsequent runs the amount of required memory is much lower. Still, this is something I am working on to improve.

What are symbolic links?

To be faster, the appstream-generator will read the md5sums file in .deb packages instead of extracting the payload archive and reading its contents. Since the md5sums file does not list symbolic links, symlinks basically don't exist for the new generator. This is a problem for software symlinking icons or even .desktop files around, like e.g. LibreOffice does.

I am still investigating how widespread the use of symlinks for icons and .desktop files is, but it looks like fixing packages (making them move the files rather than symlinking them) might be a better approach than investing additional computing power to find symlinks or even switching back to parsing the Contents file. Input on this is welcome!
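If you want to check whether one of your packages is affected, a quick sketch like this (the package file name is just an example) lists symlinked .desktop files and icons in a .deb:

dpkg-deb --fsys-tarfile some-package.deb | tar -tvf - | grep '^l' | grep -E '\.desktop|icons'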

Deploying asgen

I finished the last pieces of the appstream-generator (together with doing lots of other cool things and talking to great people) at the GNOME Software Hackfest in London last week (detailed blogposts about things that happened there will follow - many thanks once again to the Ubuntu community for sponsoring my attendance!).

As of today, the new generator is running on the Debian infrastructure. If bigger issues are found, we can still roll back to the old code. I decided to deploy this sooner rather than later, so we can get some good testing done before the Stretch release. Please report any issues you may find!

16 Apr 2016 6:45am GMT

13 Apr 2016

Julien Danjou: Gnocchi 2.1 release

A little less than 2 months after our latest major release, here is the new minor version of Gnocchi, stamped 2.1.0. It was a smooth release, but with one major feature implemented by my fellow fantastic developer Mehdi Abaakouk: the ability to create resource types dynamically.

Resource types REST API

This new version of Gnocchi offers the long-awaited ability to create resource types dynamically. What does that mean? Well, until version 2.0, the resources that you were able to create in Gnocchi had a particular type that was defined in the code: instance, volume, SNMP host, Swift account, etc. All of them were tied to OpenStack, since it was our primary use case.

Now, the API allows creating resource types dynamically. This means you can create your own custom types to describe your own architecture. You can then exploit the same features that were offered before: history of your resources, searching through them, associating metrics, etc.!
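As a rough sketch of what this looks like against the REST API - the endpoint, port and payload below are my reconstruction from the description above rather than something taken from the release notes, so check the Gnocchi documentation for the exact format:

curl -X POST http://localhost:8041/v1/resource_type \
     -H "Content-Type: application/json" \
     -d '{"name": "rack_server", "attributes": {"rack": {"type": "string", "required": true}}}'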

Performance improvements

We did some profiling and benchmarking on Gnocchi, and with the help of my fellow developer Gordon Chung, improved the metric performance.

The API speed improved a bit, and I've measured the Gnocchi API endpoint as being able to ingest up to 190k measures/s with only one node (the same as used in my previous benchmark) using uwsgi - a 50% improvement. The time required to compute aggregations on new measures is now also metered and displayed in the gnocchi-metricd log in debug mode. Handy to have an idea of how fast your measures are processed.

Ceph backend optimization

The Ceph back-end has been improved again by Mehdi. We're now relying on OMAP rather than xattr for finer grained control and better performance.

We already have a few new features being prepared for our next release, so stay tuned! And if you have any suggestion, feel free to say a word.

13 Apr 2016 3:58pm GMT

Keith Packard: x.org-election

X.org Election Time - Vote Now

It's more important than usual to actually get your vote in - we're asking the membership to vote on changes to the X.org bylaws that are necessary for X.org to become an SPI affiliate project, instead of continuing on as a separate organization. While I'm in favor of this transition, as I think it will provide much needed legal and financial help, the real reason we need everyone to vote is that we need ⅔ of the membership to cast ballots for the vote to be valid. Last time, we didn't reach that value, so even though we had a majority voting in favor of the change, it didn't take effect. If you aren't in favor of this change, I'd still encourage you to vote as I'd like to get a valid result, no matter the outcome.

Of course, we're also electing four members to the board. I'm happy to note that there are five candidates running for the four available seats, which shows that there are enough people willing to help serve the X.org community in this fashion.

13 Apr 2016 6:51am GMT

08 Apr 2016

Julien Danjou: Pifpaf, or how to run any daemon briefly

There are a lot of situations where you end up needing a piece of software deployed temporarily. This can happen when testing something manually, when running a script or when launching a test suite.

Indeed, many applications need to use and interconnect with external software: an RDBMS (PostgreSQL, MySQL…), a cache (memcached, Redis…) or any other external component. This tends to make running a piece of software (or its test suite) more difficult. If you want to rely on this component being installed and deployed, you end up needing a full environment, set up and properly configured, to run your tests - which is discouraging.

The different OpenStack projects I work on ended up pretty soon spawning some of their back-ends temporarily to run their tests. Some of those unit tests somehow became entirely what you would call functional or integration tests. But that's just a name. In the end, what we ended up doing is testing that the software was really working. And there's no better way of doing that than talking to a real PostgreSQL instance rather than mocking every call.

Pifpaf to the rescue

To solve that issue, I created a new tool, named Pifpaf. Pifpaf makes it easy to run any daemon in a test mode for a brief moment, before making it disappear completely. It's pretty easy to install as it is available on PyPI:

$ pip install pifpaf
Collecting pifpaf
[]
Installing collected packages: pifpaf
Successfully installed pifpaf-0.0.7


You can then use it to run any of the listed daemons:

$ pifpaf list
+---------------+
| Daemons |
+---------------+
| redis |
| postgresql |
| mongodb |
| zookeeper |
| aodh |
| influxdb |
| ceph |
| elasticsearch |
| etcd |
| mysql |
| memcached |
| rabbitmq |
| gnocchi |
+---------------+


Pifpaf accepts any shell command line to execute after its arguments:

$ pifpaf run postgresql -- psql
Expanded display is used automatically.
Line style is unicode.
SET
psql (9.5.2)
Type "help" for help.

template1=# \l
List of databases
Name │ Owner │ Encoding │ Collate │ Ctype │ Access privileges
───────────┼───────┼──────────┼─────────────┼─────────────┼───────────────────
postgres │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │
template0 │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │ =c/jd ↵
│ │ │ │ │ jd=CTc/jd
template1 │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │ =c/jd ↵
│ │ │ │ │ jd=CTc/jd
(3 rows)

template1=# create database foobar;
CREATE DATABASE
template1=# \l
List of databases
Name │ Owner │ Encoding │ Collate │ Ctype │ Access privileges
───────────┼───────┼──────────┼─────────────┼─────────────┼───────────────────
foobar │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │
postgres │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │
template0 │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │ =c/jd ↵
│ │ │ │ │ jd=CTc/jd
template1 │ jd │ UTF8 │ en_US.UTF-8 │ en_US.UTF-8 │ =c/jd ↵
│ │ │ │ │ jd=CTc/jd
(4 rows)

template1=# \q


What pifpaf does is run the different commands needed to create a new PostgreSQL cluster and then run PostgreSQL on a temporary port for you. So your psql session actually connects to a temporary PostgreSQL server that is trashed as soon as you quit psql. And all of that in less than 10 seconds, without the use of any virtualization or container technology!
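The same trick works for one-off commands too; a quick sketch along the same lines (not from the original post):

$ pifpaf run postgresql -- psql -c "SELECT version();"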

You can see what it does in detail using the debug mode:

$ pifpaf --debug run mysql $SHELL
DEBUG: pifpaf.drivers: executing: ['mysqld', '--initialize-insecure', '--datadir=/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpkut9bg']
DEBUG: pifpaf.drivers: executing: ['mysqld', '--datadir=/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpkut9bg', '--pid-file=/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpkut9bg/mysql.pid', '--socket=/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpkut9bg/mysql.socket', '--skip-networking', '--skip-grant-tables']
DEBUG: pifpaf.drivers: executing: ['mysql', '--no-defaults', '-S', '/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpkut9bg/mysql.socket', '-e', 'CREATE DATABASE test;']
[]
$ exit
[]
DEBUG: pifpaf.drivers: mysqld output: 2016-04-08T08:52:04.202143Z 0 [Note] InnoDB: Starting shutdown...


Pifpaf also supports my pet project Gnocchi, so you can run and try that timeseries database in a snap:

$ pifpaf run gnocchi $SHELL
$ gnocchi metric create
+------------------------------------+-----------------------------------------------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------------------------------------------+
| archive_policy/aggregation_methods | std, count, 95pct, min, max, sum, median, mean |
| archive_policy/back_window | 0 |
| archive_policy/definition | - points: 12, granularity: 0:05:00, timespan: 1:00:00 |
| | - points: 24, granularity: 1:00:00, timespan: 1 day, 0:00:00 |
| | - points: 30, granularity: 1 day, 0:00:00, timespan: 30 days, 0:00:00 |
| archive_policy/name | low |
| created_by_project_id | admin |
| created_by_user_id | admin |
| id | ff825d33-c8c8-46d4-b696-4b1e8f84a871 |
| name | None |
| resource/id | None |
+------------------------------------+-----------------------------------------------------------------------+
$ exit


And it takes less than 10 seconds to launch Gnocchi on my laptop using pifpaf. I'm then able to play with the gnocchi command line tool. It's by far faster than using OpenStack devstack to deploy the software and everything it needs.

Using pifpaf with your test suite

We leverage Pifpaf in several of our OpenStack telemetry-related projects now, and even in tooz. For example, to run unit/functional tests with a memcached server available, a tox.ini file should look like this:

[testenv:py27-memcached]
commands = pifpaf run memcached -- python setup.py testr


The tests can then use the environment variable PIFPAF_MEMCACHED_PORT to connect to memcached and run tests using it. As soon as the tests are finished, memcached is killed by pifpaf and the temporary data are trashed.
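You can see the variable for yourself with a quick sketch like this (not from the original post):

$ pifpaf run memcached -- sh -c 'echo "memcached is listening on port $PIFPAF_MEMCACHED_PORT"'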

We have moved a few OpenStack projects to using Pifpaf already, and I'm planning to make use of it in a few more. My fellow developer Mehdi Abaakouk added support for RabbitMQ in Pifpaf and, using Pifpaf, added support for more advanced tests in oslo.messaging (such as failure scenarios).

Pifpaf is a very small and handy tool. Give it a try and let me know how it works for you!

08 Apr 2016 12:00pm GMT

07 Apr 2016

Peter Hutterer: Why libinput doesn't have a lot of config options

Most days, at least one of the bugs I deal with requests something along the lines of "just add $FOO as a config option". In this post, I'll explain why this is usually a bad solution. First, read http://www.islinuxaboutchoice.com/ and keep those arguments in mind. Generally, there are two groups of configuration options - hardware options and user options. Hardware options are those that deal with specific quirks needed on some hardware, but not on other hardware. User options are those that deal with user preferences such as tapping or two-finger vs. edge scrolling.

In the old synaptics driver, we added options whenever something new came up and we tried to make those options generic. This was a big mistake. The driver now has over 70 configuration options resulting in a test matrix with a googolplex of combinations. In other words, it's completely untestable. To make a device work users often have to find the right combination of options from somewhere, write out a root-owned config file and then hope this works. Why do we still think this is acceptable? Even worse: some options are very specific to hardware but still spread in user forum examples like an STD during spring break.

In libinput, we're having none of that. When hardware doesn't work we expect a user to file a bug, we get it fixed upstream for the specific model and thus automatically fix it for all users of that device. We're leaning heavily on udev's hwdb which we have extended to correct devices when the firmware announces wrong information. This has the advantage that there is only one authoritative source of quirks a device needs to work. And we can update this as time goes by without having to worry about stale configuration options. One good example here is the custom acceleration profile that Lenovo X230 touchpads have in libinput. All in all, there is little pushback for the lack of hardware-specific configuration options and most users are fine with it once they accept the initial waiting period to get the patch into their distribution.
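To make this more concrete, here is a rough sketch of how such a hwdb quirk is typically tested locally (the file name and event node are made up; the actual quirk entries come from the upstream trees):

sudo cp 90-libinput-model-quirk-test.hwdb /etc/udev/hwdb.d/    # local override directory
sudo udevadm hwdb --update                                     # recompile the hwdb binary database
sudo udevadm trigger --sysname-match=event5                    # re-apply udev properties to the device
udevadm info /dev/input/event5                                 # verify the new properties are set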

User-specific options are more contentious. In our opinion, some features should be configurable and others should not. Where to draw that line is of course quite undefined. For example, tapping on or off was one of the first configuration options available and that was never a cause for arguments either way (except whether the default should be on or off). Other options are more contentious. Clickpad software buttons are always on the bottom edge and their size is hardcoded (synaptics allowed almost free positioning of those buttons). Other features such as changing a two-finger tap to some other button event is not supported at all in libinput. This effectively comes down to cost. You see, whenever you write "it's just 5 lines of code to make this an option", what I think is "once the patch is reviewed and applied, I'll spend two days to write test cases and documentation. I'll need to handle any bug reports related to this, and I'm expected to make sure this option works indefinitely. Any addition of another feature may conflict with this option, so I need to make sure the right combination is possible and test cases are written." So your work ends after writing a 5 line patch, my work as maintainer merely starts. And unless it pays off long-term, the effort is not worth it. Some features make that cut, others don't if they are too much of a niche feature.

All this is of course nothing new and every software project needs to make these decisions. Input isn't even a special case here, it pales in comparison with e.g. the decisions UI designers need to make. However, in FOSS we have a tendency to think that because something is possible, it should be done. Legally, you have freedom to do almost anything with the software, so you can maintain a local fork of libinput with that extra feature applied. If that isn't acceptable, why would it be acceptable to merge the patch and expect others to shoulder the costs?

07 Apr 2016 12:11am GMT

05 Apr 2016

Peter Hutterer: libinput now has a touchpad software middle button

I just pushed a patch to libinput master to enable a middle button on the clickpad software buttons. Until now, our stance was that clickpads only get a left and right software button, split at the 50% mark. The reasoning is simple: touchpads only have markings for left and right buttons (if any!) and the middle button's extents are not easily discoverable if there is no visual or haptic feedback. A middle button event could however be triggered through middle button emulation, i.e. by clicking the touchpad with a finger on the left and right software button area (see the instructions here).

This is nice in theory but, as usual, reality gets in the way. Most interactions with the middle button are quick and short-lived, i.e. clicking the button once to paste. This interaction is what many touchpads are spectacularly bad at. For middle button emulation to be handled correctly, both fingers must be registered before the physical button press. The scanout rate on a touchpad is often too low and on touchpads with extremely light resistance like the T440 it's common to register the physical click before we know that there's a second finger on the touchpad. But even on a T450 and an X220 with much higher clickpad resistance I barely managed to get above 7 out of 10 correctly registered middle button events. That is simply not good enough.

So the patch I just pushed out to master enables a middle software button between the left and the right button. The exact width of the button scales with the touchpad but it's usually around 20-25mm and it's centered on the touchpad so despite the lack of visual or haptic guides it should be reliable to hit. The new behaviour is hard-coded and for now middle button emulation continues to work on touchpads. In the future, I expect I will remove middle button emulation on touchpads or at least disable it by default.

The middle button will be available in libinput 1.3.

05 Apr 2016 11:04pm GMT

03 Apr 2016

Lennart Poettering: Announcing systemd.conf 2016

Announcing systemd.conf 2016

We are happy to announce the 2016 installment of systemd.conf, the conference of the systemd project!

After our successful first conference in 2015, we'd like to repeat the event in 2016. The conference will take place from September 28th until October 1st, 2016 at betahaus in Berlin, Germany. The event is a few days before LinuxCon Europe, which is also located in Berlin this year. This year, the conference will consist of two days of presentations, a one-day hackfest and one day of hands-on training sessions.

The website is online now, please visit https://conf.systemd.io/.

Tickets at early-bird prices are available already. Purchase them at https://ti.to/systemdconf/systemdconf-2016.

The Call for Presentations will open soon, we are looking forward to your submissions! A separate announcement will be published as soon as the CfP is open.

systemd.conf 2016 is organized jointly by the systemd community and kinvolk.io.

We are looking for sponsors! We've got early commitments from some of last year's sponsors: Collabora, Pengutronix & Red Hat. Please see the web site for details about how your company may become a sponsor, too.

If you have any questions, please contact us at info@systemd.io.

03 Apr 2016 10:00pm GMT