28 Jul 2015


Lennart Poettering: Announcing systemd.conf 2015

Announcing systemd.conf 2015

We are happy to announce the inaugural systemd.conf 2015 conference of the systemd project.

The conference takes place November 5th-7th, 2015 in Berlin, Germany.

Only a limited number of tickets are available, hence make sure to sign up quickly.

For further details consult the conference website.

28 Jul 2015 10:00pm GMT

22 Jul 2015


Peter Hutterer: A short overview of touchpad devices

Below is an outline of the various types of touchpads that can be found in the wild. Touchpads aren't simply categorised into a single type; instead each has a set of properties: a combination of the number of physical buttons, touch capabilities, and physical properties.

Number of buttons

Physically separate buttons

For years this was the default type of touchpads: a touchpad with a separate set of physical buttons below the touch surface. Such touchpads are still around, but most newer models are Clickpads now.

Touchpads with physical buttons usually provide two buttons, left and right. A few touchpads with three buttons exist, and Apple used to have touchpads with a single physical button back in the PPC days. Touchpads with only two buttons require the software stack to emulate a middle button. libinput does this when both buttons are pressed simultaneously.


A two-button touchpad, with a two-button pointing stick above.

Note that many Lenovo laptops provide a pointing stick above the touchpad. This pointing stick has a set of physical buttons just above the touchpad. While many users use those as substitute touchpad buttons, they logically belong to the pointing stick. The *40 and *50 series are an exception here: the former had no physical buttons on the touchpad and required the top section of the pad to emulate pointing stick buttons; the *50 series has physical buttons, but they are wired to the touchpad. The kernel re-routes those buttons through the trackstick device.

Clickpads

Clickpads are the most common type of touchpads these days. A Clickpad has no separate physical buttons, instead the touchpad itself is clickable as a whole, i.e. a user presses down on the touch area and triggers a physical click. Clickpads thus only provide a single button, everything else needs to be software-emulated.


A clickpad on a Lenovo x220t. Just above the touchpad are the three buttons associated with the pointing stick. Faint markings on the bottom of the touchpad hint at where the software buttons should be.

Right and middle clicks are generated either via software buttons or "clickfinger" behaviour. Software buttons define an area on the touchpad that is a virtual right button. If a finger is in that area when the click happens, the left button event is changed to a right button event. A middle click is either a separate area or emulated when both the left and right virtual buttons are pressed simultaneously.
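As a rough sketch of the software button logic (hypothetical code, not libinput's implementation; the 15% button-area height and the left/right split are assumptions for illustration):

#include <linux/input.h>

/* Hypothetical sketch: given the finger position at the time of the
 * physical click and the touchpad size in device units, decide which
 * button event to send. Assumes the bottom 15% of the pad is split
 * into a left and a right software button. */
static unsigned int software_button(int x, int y, int width, int height)
{
    int button_area_top = height - height * 15 / 100;

    if (y < button_area_top)
        return BTN_LEFT;     /* click outside the button area: left click */
    if (x > width / 2)
        return BTN_RIGHT;    /* right half of the button area */
    return BTN_LEFT;         /* left half of the button area */
}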

When the software stack uses the clickfinger method, the number of fingers decides the type of click: a one-finger click is a left button, a two-finger click is a right button, and a three-finger click is a middle button. The location of the fingers doesn't matter, though there are usually some limits on how the fingers can be distributed (e.g. some implementations try to detect a thumb at the bottom of the touchpad to avoid accidental two-finger clicks when the user intends a thumb click).
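In sketch form (again hypothetical code, ignoring the thumb and finger-distribution heuristics mentioned above), the mapping is just the finger count:

#include <linux/input.h>

/* Hypothetical sketch of the clickfinger mapping: the number of fingers
 * on the pad at click time picks the button, their position does not. */
static unsigned int clickfinger_button(int finger_count)
{
    switch (finger_count) {
    case 1:  return BTN_LEFT;
    case 2:  return BTN_RIGHT;
    case 3:  return BTN_MIDDLE;
    default: return 0;          /* no fingers or too many: ignore the click */
    }
}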

The libinput documentation has a section on Clickpad software button behaviour with more detailed illustrations.


The touchpad on a T440s has no physical buttons for the pointing stick. The marks on the top of the touchpad hint at the software button position for the pointing stick. Note that there are no markings at the bottom of the touchpad anymore.

Clickpads are labelled by the kernel with the INPUT_PROP_BUTTONPAD input property.
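You can query that property from userspace with libevdev; a minimal sketch (the event node path is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <libevdev/libevdev.h>

int main(void)
{
    struct libevdev *dev = NULL;
    int fd = open("/dev/input/event7", O_RDONLY | O_NONBLOCK); /* example node */

    if (fd < 0 || libevdev_new_from_fd(fd, &dev) < 0) {
        fprintf(stderr, "failed to open device\n");
        return 1;
    }

    if (libevdev_has_property(dev, INPUT_PROP_BUTTONPAD))
        printf("%s is a clickpad\n", libevdev_get_name(dev));

    libevdev_free(dev);
    return 0;
}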

Forcepads

One step further down the touchpad evolution, Forcepads are Clickpads without a physical button. They provide pressure readings and (at least in Apple's case) have a software-controlled vibration element. Instead of the satisfying click of a physical button, you instead get a buzz of happiness. Which apparently feels the same as a click, judging by the reviews I've read so far. A software-controlled click feel has some advantages: it can be disabled for some gestures, modified for others, etc. I suspect that over time Forcepads will become the main touchpad category, but that's a few years away.

Not much to say on the implementation here. The kernel has some ForcePad support but everything else is spotty.


Note how Apple's Clickpads have no markings whatsoever; Apple uses the clickfinger method by default.

Touch capabilities

Single-touch touchpads

In the beginning, there was the single-finger touchpad. This touchpad would simply provide x/y coordinates for a single finger and get mightily confused when more than one finger was present. These touchpads are now fighting with dodos for exhibition space in museums; few of them are still out in the wild.

Pure multi-touch touchpads

Pure multi-touch touchpads are those that can track, i.e. identify the location of, all fingers on the touchpad. Apple's touchpads support 16 touches (iirc); others, like the Synaptics touchpads when using SMBus, support 5 touches.

Pure multi-touch touchpads are the easiest to support: we can rely on the finger locations and use them for scrolling, gestures, etc. These touchpads usually also provide extra information. In the case of the Apple touchpads we get an ellipse and the orientation of that ellipse for each touch point. Other touchpads provide a pressure value for each touch point. Pressure is a bit of a misnomer though: it is usually directly related to contact area. Since our puny human fingers flatten out as the pressure on the pad increases, the contact area increases and the firmware then calculates that back into a (mostly rather arbitrary) pressure reading.

Because pressure is really contact area size, we can use it to detect accidental palm contact or thumbs though it's fairly unreliable. A light palm touch or a touch at the very edge of a touchpad will have a low pressure reading simply because the palm is mostly next to the touchpad and thus the contact area itself remains small.
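In its simplest form that heuristic is just a threshold compare (a hypothetical sketch; the threshold is a made-up, device-specific value and real palm detection also looks at position and movement):

#include <stdbool.h>

/* Hypothetical sketch of pressure-based palm detection. Because
 * pressure really reports contact area, a palm hanging mostly off the
 * edge of the pad can still slip under the threshold. */
#define PALM_PRESSURE_THRESHOLD 100   /* made-up, device-specific value */

static bool is_palm(int pressure)
{
    return pressure > PALM_PRESSURE_THRESHOLD;
}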

Partial multi-touch touchpads

The vast majority of touchpads fall into this category. It's the half-way point between single-touch and pure multi-touch. These devices can track N fingers, but detect more than N. The current Synaptics touchpads fall into that category when they're using the serial protocol. Most touchpads that fall into this category can track two fingers and detect up to four or five. So a typical three-finger interaction would give you the location of two fingers and a separate value telling you that a third finger is down.

The lack of finger location doesn't matter for some interactions (tapping, three-finger click) but it can cause issues in some cases. For example, a user may have a thumb resting on a touchpad while scrolling with two fingers. Which touch locations you get depends on the order in which the fingers were set down, i.e. this may look like thumb + finger + third touch somewhere (lucky!) or two fingers scrolling + third touch somewhere (unlucky, this looks like a three-finger swipe). So far we've mostly avoided having anything complex enough to require the exact location of more than two fingers; these pads are so prevalent that any complex feature would exclude the majority of users.

Semi-mt touchpads

A sub-class of partial multi-touch touchpads. These touchpads can technically detect two fingers but the location of both is limited to the bounding box, i.e. the first touch is always the top-left one and the second touch is the bottom-right one. Coordinates jump around as fingers move past each other. Most semi-mt touchpads also have a lower resolution for two touches than for one, so even things like two-finger scrolling can be very jumpy.
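What a semi-mt device reports for two fingers is roughly this (a hypothetical sketch, not the kernel's code):

struct touch { int x, y; };

/* Hypothetical sketch: for two fingers at f1 and f2, a semi-mt touchpad
 * reports the corners of their bounding box, not the fingers themselves.
 * Once the fingers cross each other, the real positions are lost. */
static void semi_mt_report(struct touch f1, struct touch f2,
                           struct touch *slot0, struct touch *slot1)
{
    slot0->x = f1.x < f2.x ? f1.x : f2.x;   /* top-left corner */
    slot0->y = f1.y < f2.y ? f1.y : f2.y;
    slot1->x = f1.x > f2.x ? f1.x : f2.x;   /* bottom-right corner */
    slot1->y = f1.y > f2.y ? f1.y : f2.y;
}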

Semi-mt touchpads are labelled by the kernel with the INPUT_PROP_SEMI_MT input property.

Physical properties

External touchpads

USB or Bluetooth touchpads not in a laptop chassis. Think the Apple Magic Trackpad, the Logitech T650, etc. These are usually clickpads, the biggest difference is that they can be removed or added at runtime. One interaction method that is only possible on external touchpads is a thumb resting on the very edge/immediately next to the touchpad. On the far edge, touchpads don't always detect the finger location so clicking with a thumb barely touching the edge makes it hard or impossible to figure out which software button area the finger is on.

These touchpads also don't need palm detection - since they're not located underneath the keyboard, accidental palm touches are a non-issue.


A Logitech T650 external touchpad. Note the thumb position, it is possible to click the touchpad without triggering a touch.

Circular touchpads

Yes, these used to be a thing: touchpads shaped like an ellipse or circle. Luckily for us they have gone full dodo. The X.Org synaptics driver had to be aware of these touchpads to calculate the right distance for edge scrolling - unsurprisingly an edge scroll motion on a circular touchpad isn't very straight.

Graphics tablets

Touch-capable graphics tablets are effectively external touchpads, with two differentiators: they are huge compared to normal touchpads and they have no touchpad buttons whatsoever. This means they can either work like a Forcepad, or rely on interaction methods that don't require buttons (like tap-to-click). Since the physical device is shared with the pen input, some touch arbitration is required to avoid touch input interfering when the pen is in use.

Dedicated edge scroll area

Mostly on older touchpads, before two-finger scrolling became the default method. These touchpads have a marking on the touch area that designates the edge to be used for scrolling. A finger movement in that edge zone should trigger vertical scroll motions. Some touchpads also have markers for a horizontal scroll area at the bottom of the touchpad.


A touchpad with a marked edge scroll area on the right.

22 Jul 2015 9:12pm GMT

21 Jul 2015


Corbin Simpson: Mont is a Category

I've had a lot of thought about Mont. (Sorry for the rhymes.) Mont, recall, is the set of all Monte objects. I have a couple interesting thoughts on Mont that I'd like to share, but the compelling result I hope to convince readers of is this: Mont is a simple and easy-to-think-about category once we define an appropriate sort of morphism. By "category" I mean the fundamental building block of category theory, and most of the maths I'm going to use in this post is centered around that field. In particular, "morphism" is used in the sense of categories.

I'd like to put out a little lemma from the other day, first. Let us say that the Monte == operator is defined as follows: For any two objects in Mont, x and y, x == y if and only if x is y, or for any message [verb, args] sendable to these objects, M.send(x, verb, args) == M.send(y, verb, args). In other words, x == y if it is not possible to distinguish x and y by any chain of sent messages. This turns out to relate to the category definition I give below. It also happens to correlate nicely with the idea of equivalence, in that == is an equivalence relation on Mont! The proof:
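In sketch form (only the outline matters here):

Reflexivity: x == x, since x is x.
Symmetry: the definition treats x and y symmetrically, so x == y implies y == x.
Transitivity: suppose x == y and y == z. If x is y or y is z, the claim is immediate. Otherwise, for any message [verb, args], M.send(x, verb, args) == M.send(y, verb, args) and M.send(y, verb, args) == M.send(z, verb, args); applying the same reasoning (co)recursively to those result objects gives M.send(x, verb, args) == M.send(z, verb, args), hence x == z.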

Now, obviously, since objects can do whatever computation they like, the actual implementation of == has to be conservative. We generally choose to be sound and incomplete; thus, x == y sometimes has false negatives when implemented in software. We can't really work around this without weakening the language considerably. Thus, when I talk about Mont/==, please be assured that I'm talking more about the ideal than the reality. I'll try to address spots where this matters.

Back to categories. What makes a category? Well, we need a set, some morphisms, and a couple proofs about the behavior of those morphisms. First, the set. I'll use Mont-DF for starters, but eventually we want to use Mont. Not up on my posts? Mont-DF is the subset of Mont where objects are transitively immutable; this is extremely helpful to us since we do not have to worry about mutable state nor any other side effect. (We do have to worry about identity, but most of my results are going to be stated as holding up to equivalence. I am not really concerned with whether there are two 42 objects in Mont right now.)

My first (and, spoiler alert, failed) attempt at defining a category was to use messages as morphisms; that is, to go from one object to another in Mont-DF, send a message to the first object and receive the second object. Clear, clean, direct, simple, and corresponds wonderfully to Monte's semantics. However, there's a problem. The first requirement of a category is that, for any object in the set, there exists an identity morphism, usually called 1, from that object to itself. This is a problem in Monte. We can come up with a message like that for some objects, like good old 42, which responds to ["add", [0]] with 42. (Up to equivalence, of course!) However, for some other objects, like object o as DeepFrozen {}, there are no obvious methods to use.

The answer is to add a new, non-overridable Miranda method called _magic/0. (Yes, if this approach had worked, I would have picked a better name.) Starting from Mont-DF, we could amend all objects to get a new set, Mont-DF+Magic, in which the identity morphism is always ["_magic", []]. This neatly wraps up the identity morphism problem.

Next, we have to figure out how to compose messages. At first blush, this is simple; if we start from x and send it some message to get y, and then send another message to y to get z, then we obviously can get from x to z. However, here's the rub: there might not be any message directly from x to z! We're stuck here. Unlike with other composition operators, there's no hand-wavey way to compose messages like there is with functions. So this is bunk.

However, we can cheat gently and use the free monoid a.k.a. the humble list. A list of messages will work just fine: To compose them, simply catenate the lists, and the identity morphism is the empty list. Putting it all together, a morphism from 6 to 42 might be [["multiply", [7]]], and we could compose that with [["asString", []]] to get [["multiply", [7]], ["asString", []]], a morphism from 6 to "42". Not shabby at all!
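Spelled out (a sketch; the ++ notation for list concatenation is mine), the category laws are exactly the monoid laws of lists:

identity:       [] ++ f  =  f  =  f ++ []
associativity:  (f ++ g) ++ h  =  f ++ (g ++ h)

Composition of morphisms is concatenation, the identity morphism on every object is the empty list, and associativity of concatenation is exactly associativity of composition.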

There we go. Now Mont-DF is a category up to equivalence. The (very informally defined) set of representatives of equivalence classes via ==, which I'll call Mont-DF/==, is definitely a category here as well, since it encapsulates the equivalence question. We could alternatively insist that objects in Mont-DF are unique (or that equivalent definitions of objects are those same objects), but I'm not willing to take up that sword this time around, mostly because I don't think that it's true.

"Hold up," you might say; "you didn't prove that Mont is a category, only Mont-DF." Curses! I didn't fool you at all, did I? Yes, you're right. We can't extend this result to Mont wholesale, since objects in Mont can mutate themselves. In fact, Mont doesn't really make sense to discuss in this way, since objects in sets aren't supposed to be mutable. I'm probably going to have to extend/alter my definition of Mont in order to get anywhere with that.

21 Jul 2015 2:33pm GMT

Ben Widawsky: OS backtrace with symbol names

For those of you reading this that didn't know, I've had two months of paid vacation - one of the real perks of working for Intel. Today is the last day. It is as hard as I thought it would be.

Most of the vacation was spent vacationing. As I have access to none of the pictures at the moment, and I don't want to make you jealous, I'll skip over that. Toward the end though, I ended up at a coffee shop waiting for someone with nothing to do. I spent a little bit of time working on HOBos, and I thought it could be interesting to write about it.

WARNING: There is nothing novel here.

A brief history of HOBos

The HOBby operating system is an operating system project I started a while ago. I am a bit embarrassed that I started an OS. In my opinion, it's one of the lamer tasks to take on because 1. everyone seems to do it; 2. there really isn't a need, as there are many operating systems with permissive licenses already; and 3. sites like OSDev have made much of the work trivial (I like to think that when I started there wasn't quite as much info readily available, but that's a lie).

Larrabee Av in Portland (not what the project was named after)

HOBos began while I was working on the Larrabee project. The team spent a lot of time optimizing the memory management and the scheduler for the embedded software. I really wanted to work on these things full time. Unfortunately for me, having had a background in device drivers, I was often required to do other things. As a means to scratch the itch, I started HOBos after not finding anything suitable for my needs. The stuff I found was all either too advanced or too rudimentary. When I was hired to work on i915, I decided that was a better use of my free time. Since then, I've been tweaking things here or there, and I do try to make sure things still run with the latest QEMU and compilers at least once a year. The last actual feature I added was more than 1300 days ago:

   commit 1c9b5c78b22b97246989b00e807c9bf1fbc9e517
        Author: Ben Widawsky <ben@bwidawsk.net>
        Date: Sat Mar 19 21:19:57 2011 -0700

        basic backtrace

So back to the coffee shop. I tried to do something or other, got a hang, and didn't want to fire up GDB.

Backtracing

HOBos had implemented backtraces since the original import from SVN (let's pretend that means, since always). Obtaining a backtrace is actually pretty straightforward on x86.

The stack frame

The stack frame can be thought of as memory contents that are locally scoped to a function. Declaring a local variable will end up in the stack frame. A global variable will not. As functions call other functions, you end up with multiple frames. A stack is used because the last frames added are the first ones removed (this is not always true for things like exceptions, but nevermind that for now). The fact that a stack decrements is arbitrarily chosen, as far as I can tell. The following shows the stack when the function foo() calls the function bar().

Example Stackframe

The memory contents shown above are created as a result of two things. The first is what the CPU implicitly does upon execution of the call instruction. The second is what the compiler generates; I'll detail that a bit more below. Since we're talking about x86 here, the call instruction always pushes at least the return address. Correlating this to the picture, the green (foo) and blue (bar) are creations of the compiler. The brown is automatically pushed onto the stack by the hardware, and is automatically popped from the stack on the ret instruction.

In the above there are two registers worth noting: RBP and RSP. RBP, the 64-bit extension of the original 8086 BP (Base Pointer) register, points to the beginning of the stack frame, i.e. it is the Frame Pointer. RSP, the extension of the 8086 SP (Stack Pointer), points to the end of the stack frame. By convention the Base Pointer doesn't change throughout a function's execution, and therefore it is often used as the reference for local variables stored on the stack - -100(%rbp) in the example above.

Digging further into that disassembly above, one notices a pattern. Every function begins with:

push   %rbp       // Push the old RBP, RSP now points to this
mov    %rsp,%rbp  // Store RSP in RBP

Assuming this is the convention, it implies that at any given point during the execution of a function we can obtain the previous RBP by reading the current RBP and doing some processing. Specifically, reading RBP gives us the old Stack Pointer, which is pointing to the last RBP. As mentioned above, the x86 CPU pushed the return address immediately before the push %rbp - which means as we work backwards through the Base Pointers, we can also obtain the caller for the current stack frame. People have done really nice pictures on this - use your favorite search engine.

Here is the HOBos code (ignore the part about symbols for now):

void bt_fp(void *fp)
{
        do {
                uint64_t prev_rbp = *((uint64_t *)fp);                     /* caller's saved RBP (pushed by the prologue) */
                uint64_t prev_ip = *((uint64_t *)(fp + sizeof(prev_rbp))); /* return address (pushed by the call instruction) */
                struct sym_offset sym_offset = get_symbol((void *)prev_ip);
                printf("\t%s (+0x%x)\n", sym_offset.name, sym_offset.offset);
                fp = (void *)prev_rbp;
                /* Stop if rbp is not in the kernel
                 * TODO: need an upper bound too */
                if (fp <= (void *)KVADDR(DMAP_PML,0,0,0))
                        break;
        } while(1);
}

As far as I know, all modern CPUs work in a similar fashion with differences sprinkled here and there. ARM for example has an LR register for the return address instead of using the stack.

ABI/Calling Conventions

The fact that we can work backwards this way is a byproduct of the calling convention. One example of an aspect of the calling convention is where to put the arguments to a function. Do they go on the stack, in registers, or somewhere else? In addition to the arguments, the way in which RBP and RSP are used is strictly a software construct that is part of the convention. As a result, it might not always be possible to get a backtrace if:

  1. This convention is not adhered to (or -fomit-frame-pointer)
  2. The contents of RBP are destroyed
  3. The contents of the stack are corrupted.

How arguments are passed to a function also matters so that linkers and loaders (both static and dynamic) can correctly form an executable or dynamically call a function. Since this isn't really important for obtaining a backtrace, I will leave it there. Some architectures do provide a way to obtain useful backtrace information without caring about the calling convention: Intel's Processor Trace for example.

Symbol information

The previous section will get us a reverse list of addresses for all function calls working backward from a given point during execution. But having names makes it much easier to quickly diagnose what is going wrong. There is a lot of information on the internet about this stuff. I'm simply providing all that's relevant to my specific problem.

ELF Symbols (linking)

The ELF format provides everything we need (assuming things aren't stripped). Glossing over the details (see this simple tool if you're curious), we end up with two "sections" that tell us everything we need. They are conventionally named, ".symtab", and ".strtab" and are conveniently of type, SHT_SYMTAB, and SHT_STRTAB. The symbol table defines the information about each symbol (functions, variables, whatever). Part of the information is a name, which is an index into the string table. In this simplest case, these are provisions for inter-object linking. If I had defined foo() in foo.c, and bar() in bar.c, the compiled object files can be linked together, but the linker needs the information about the symbol bar (in this case) in order to do its job.
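As a sketch of what a lookup over those two sections boils down to (hypothetical code, not the HOBos implementation; it assumes the .symtab entries and the .strtab contents have already been located in memory):

#include <elf.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: find the symbol covering a given address by
 * walking .symtab and resolve its name through .strtab. */
static const char *lookup_symbol(const Elf64_Sym *symtab, size_t nsyms,
                                 const char *strtab, uint64_t addr,
                                 uint64_t *offset)
{
    for (size_t i = 0; i < nsyms; i++) {
        const Elf64_Sym *sym = &symtab[i];

        if (sym->st_value <= addr && addr < sym->st_value + sym->st_size) {
            *offset = addr - sym->st_value;   /* offset into the function */
            return strtab + sym->st_name;     /* st_name indexes .strtab */
        }
    }

    return "<unknown>";
}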

readelf -S a.out
[Nr] Name Type Address Offset
[33] .symtab SYMTAB 0000000000000000 000015b8
[34] .strtab STRTAB 0000000000000000 00001c90
> readelf -S a.out | egrep "\.strtab|\.symtab" | wc -l
2
> strip a.out
> readelf -S a.out | egrep "\.strtab|\.symtab" | wc -l
0

Summing that up, if we have an entire ELF file, and the symbol and string tables haven't been stripped, we're good to go. However, ELF sections are not the unit in which an ELF loader decides what to load. The loader loads segments which are of type PT_LOAD. A segment is made up of 0 or more sections, plus padding. Since the Operating System is itself an ELF loaded by an ELF loader (the bootloader) we're not in a good position. :(

> readelf -l a.out | egrep "\.strtab|\.symtab" | wc -l
0

ELF Loader

Debug Info

Note that what we want is not the same thing as debug information. If one wants to do source level debug, there needs to be some way of correlating a machine instruction to a line of source code. This is also a software abstraction, and there is usually a spec for it unless you are using some proprietary thing. It would technically be possible to include DWARF capabilities within the kernel, but I do not know of a way to get that info to the OS (see multiboot stuff for details).

From boot to symbols

The HOBos project implements a small Multiboot compliant bootloader called smallboot. When the machine starts up, boot firmware is loaded from a fixed location (this is currently done by SeaBIOS). The boot firmware then loads the smallboot bootloader. The bootloader will load the kernel (smallboot, and most modern bootloaders will do this through a text configuration file on the resident filesystem). In the HOBos case, the kernel is simply an ELF file. smallboot implements a basic ELF loader to load the kernel into memory and give execution over.

The multiboot specification is a standardized communication mechanism (for various things) from the bootloader to the Operating System (or any file really). One of these things is symbol information. Quoting the multiboot spec

If bit 5 in the 'flags' word is set, then the following fields in the Multiboot information structure starting at byte 28 are valid:

             +-------------------+
     28      | num               |
     32      | size              |
     36      | addr              |
     40      | shndx             |
             +-------------------+

These indicate where the section header table from an ELF kernel is, the size of each entry, number of entries, and the string table used as the index of names. They correspond to the 'shdr_*' entries ('shdr_num', etc.) in the Executable and Linkable Format (elf) specification in the program header. All sections are loaded, and the physical address fields of the elf section header then refer to where the sections are in memory (refer to the i386 elf documentation for details as to how to read the section header(s)). Note that 'shdr_num' may be 0, indicating no symbols, even if bit 5 in the 'flags' word is set.
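In C terms those four fields are just a small struct (matching the layout quoted above; the struct and field names here follow the common multiboot.h naming and are shown for illustration):

#include <stdint.h>

/* ELF section header table info handed over by the bootloader; valid
 * when bit 5 of the multiboot 'flags' word is set (bytes 28..43 of the
 * Multiboot information structure). */
struct multiboot_elf_section_header_table {
    uint32_t num;    /* number of section header entries */
    uint32_t size;   /* size of each section header entry */
    uint32_t addr;   /* physical address of the section header table */
    uint32_t shndx;  /* section header index of the string table */
};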

Since the beginning I had implemented these fields in the bootloader:

multiboot_info.flags |= MULTIBOOT_INFO_ELF_SHDR;
multiboot_info.u.elf_sec = *table;

Because the symbols weren't in the ELF segments though, I was stumped as to how to get at the data once the OS is loaded. As it turned out, I hadn't actually read all 4 sentences and had missed one very important part.

All sections are loaded, and the physical address fields of the elf section header then refer to where the sections are in memory

What the spec is dictating is that even though the sections are not in loadable segments, they shall exist within memory during the handover to the OS, and the section header information will be updated so that the OS knows where to find it. With this, the OS can copy out, or just make sure not to overwrite the info, and then get access to it.

    for (i = 0; i < shnum; i++) {
        __ElfN(Shdr) *sh = &shdr[i];
        if (sh->sh_size == 0)
            continue;

        if (sh->sh_addr) /* Already loaded */
            continue;

        ASSERT(sizeof(void *) == 4);
        /* patch sh_addr so the OS can later find the section contents in memory */
        *((volatile __ElfN(Addr) *)&sh->sh_addr) = sh->sh_offset + (uint32_t)addr;
    }

et cetera

The code for pulling out the symbols is quite a bit longer, but it can be found in kern/core/syms.c. With the given RBP unwinder near the top, we can easily get the IP for the caller. With that IP, we do a symbol lookup from the symbols we got via the multiboot info.

Screenshot with backtrace

Inkscape Links

https://bwidawsk.net/blog/wp-content/uploads/2014/10/stackframe.svg
https://bwidawsk.net/blog/wp-content/uploads/2014/10/loader.svg


21 Jul 2015 6:17am GMT

16 Jul 2015


Peter Hutterer: libinput and handling resolution-less touchpads

In a perfect world, any device that advertises absolute x/y axes also advertises the resolution for those axes. Alas, not all of them do. For libinput, this problem is two-fold: parts of the touchscreen API provide data in mm - without knowing the resolution this is a guess at best. But it also matters for touchpads, where a lack of resolution is a lot more common (though the newest generations of major touchpad manufacturers tend to advertise resolutions now).

We have a number of features that rely on the touchpad resolution: from the size of the software button to deciding which type of palm detection we need, it is all calculated based on physical measurements. Until recently, we had code to differentiate between touchpads with and without resolution, and most of the special handling was a matter of magic numbers, usually divided by the diagonal of the touchpad in device units. This made code maintenance more difficult - without testing each device, behaviour could not be guaranteed.

With libinput 0.20, we now got rid of this special handling and instead require the touchpads to advertise resolutions. This requires manual intervention, so we're trying to fix this in multiple places, depending on the confidence of the data. We have hwdb entries for the bcm5974 (Apple) touchpads and the Chromebook Pixel. For Elantech touchpads, a kernel patch is currently waiting for merging. For ALPS touchpads, we ship size hints with libinput's hwdb. If all that fails, we fall back to a default touchpad size of 69x55mm. [1]
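The fallback itself is a simple calculation (a hypothetical sketch, not libinput's actual code): divide the axis range in device units by the assumed physical size to get a resolution in units/mm.

/* Hypothetical sketch: derive an axis resolution (units/mm) from the
 * device-unit range and an assumed physical size in mm, e.g. 69 for x
 * and 55 for y on the default-sized touchpad. */
static int guess_resolution(int minimum, int maximum, int size_in_mm)
{
    /* e.g. a 0..4000 x axis on an assumed 69 mm wide pad gives
     * roughly 58 units/mm */
    return (maximum - minimum) / size_in_mm;
}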

All this affects users in two ways: one is that you may notice a slightly different behaviour of your touchpad now. The software-buttons may be bigger or smaller than before, pointer acceleration may be slightly different, etc. Shouldn't be too bad, but you may just notice it. The second noticeable change is that libinput will now log when it falls back to the default size. If you notice a message like that in your log, please file a bug and attach the output of evemu-describe and the physical dimensions of your touchpad. Once we have that information, we can add it to the right place and make sure that everyone else with that touchpad gets the right settings out of the box.

[1] The default size was chosen because it's close enough to what old touchpads used to be, and those are most likely to lack resolution values. This size may change over time as we get better data.

16 Jul 2015 11:28am GMT

15 Jul 2015


Alan Coopersmith: Solaris 11.3 beta: Changes to bundled software packages

With the release of Solaris 11.3 beta, I've gone back and made a new list of changes to the bundled software packages available in the Solaris IPS package repository, as I've done for the Solaris 11.1, Solaris 11.2 beta, and the Solaris 11.2 GA releases.

Oracle packages

Several bundled packages improve integration with other Oracle software. The Oracle Instant Client packages are now in the IPS repo for building software that connects to Oracle databases. MySQL 5.6 has also been added alongside the existing version 5.5 packages.

The Java runtime & developer kits for Java 7 & 8 were updated to new versions, while the Java 6 versions were removed as its support life winds down. The End of Feature Notices for Oracle Solaris 11 warns that Java 7 will be going away as well in a later release.

Also updated was Oracle Hardware Management Pack (HMP), a set of tools that work with the ILOM, firmware, and other components in Sun/Oracle servers to configure low-level system options. HMP 2.2 was introduced in Solaris 11.2, and Solaris 11.3 now delivers HMP 2.3 packages.

Python packages

Solaris has long included and depended on Python 2. Solaris 11.3 adds Python 3 support for the first time, with the bundling of Python 3.4 and many module packages that work with it. Python 2.7 is still included, as is 2.6 for now, but Python 2 software in Solaris is almost completely switched over to 2.7 now, and Python 2.6 will be obsoleted soon.

A side effect of these changes was a revamping of the naming pattern for Python module packages in IPS - previously most modules delivered a set of packages following the pattern:

For example, there were three Mako packages, library/python-2/mako, library/python-2/mako-26, library/python-2/mako-27, where the latter two installed the modules built for the named versions of Python, and the first uses IPS conditional dependencies to install the modules for any Python versions that were installed on the system.

In extending this to provide Python 3 modules, it was decided to drop the python major version from the library/python-N prefix, leaving just the version at the end of the module name. Thus in Solaris 11.3, you'll see that the mako packages are now library/python/mako, library/python/mako-26, library/python/mako-27, and library/python/mako-34.

NVIDIA graphics driver packages

NVIDIA has been providing graphics driver packages for Solaris for almost a decade now. As new families and models of graphics cards are regularly introduced, they retire support for older generations from time to time in the new drivers. Support for these models is retained in a legacy driver, but that requires uninstalling the latest version and switching to a legacy branch. Previously that meant installing NVIDIA's SVR4 package release instead of IPS, losing the ability to get updates with a simple "pkg update" command.

Now the legacy drivers are also available in IPS packages, which will continue to be updated as necessary for bug fixes and support for new Xorg releases during NVIDIA's Support timeframes for Unix legacy GPU releases. To switch to the version 340 legacy driver on Solaris 11.3 or the later Solaris 11.2 SRUs, simply run:

  # pkg install --reject driver/graphics/nvidia driver/graphics/nvidiaR340 

and then reboot into the new BE created. For the previous version 304, change the above command to end in nvidiaR304 instead.

Other packages

There are far more changes than I've covered here - fortunately, the engineers who worked on many of these changes have written their own blog posts about them for you to check out:

One more thing... Solaris 11.2 packages

While all these are available now in the Solaris 11.3 beta, many are also available for testing and evaluation on existing Solaris 11.2 systems, when you're ready to upgrade a FOSS package, but not the rest of the OS. This is planned to be an ongoing program, so once Solaris 11.3 is officially released, the evaluation packages will keep moving forward to new versions of many packages. More details are available in a Solaris FOSS blog post and an article in the Solaris 11 OTN community space.

Not all packages are available in the evaluation program though, since some depend on OS changes not in Solaris 11.2. For instance, OpenSSH is not available for Solaris 11.2, since it depends on changes to the existing SunSSH packages that allow for the ssh package mediator to choose which ssh software to use on a given system.

Detailed list of changes

This table shows most of the changes to the bundled packages between the original Solaris 11.2.0 release, the latest Solaris 11.2 support repository update (SRU11, aka 11.2.11, released June 13, 2015), and the Solaris 11.3 beta released today. These show the versions they were released with, and not later versions that may now be available via the new FOSS Evaluation Packages for existing Solaris releases.

As with last time, some were excluded for clarity, or to reduce noise and duplication. All of the bundled packages which didn't change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris.

Package Upstream 11.2.0 11.2.11 11.3 Beta
cloud/openstack OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/cinder OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/glance OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/heat OpenStack not included 0.2014.2.2 0.2014.2.2
cloud/openstack/horizon OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/keystone OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/neutron OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/nova OpenStack 0.2013.2.3 0.2014.2.2 0.2014.2.2
cloud/openstack/swift OpenStack 1.10.0 2.2.2 2.2.2
communication/im/pidgin Pidgin 2.10.9 2.10.11 2.10.11
compress/pigz pigz not included 2.2.5 2.2.5
crypto/gnupg GnuPG 2.0.22 2.0.26 2.0.26
database/mysql-56 MySQL not included (MySQL 5.5 in database/mysql-55) 5.6.21
database/sqlite-3 SQLite 3.7.14.1 3.8.8.1 3.8.8.1
developer/build/ant Apache Ant 1.8.4 1.8.4 1.9.3
developer/documentation-tool/help2man GNU help2man not included not included 1.46.1
developer/documentation-tool/xmlto xmlto not included not included 0.0.25
developer/java/jdk-6 Java 1.6.0.75 (Java SE 6u75) 1.6.0.95 (Java SE 6u95) not included
developer/java/jdk-7 Java 1.7.0.65 (Java SE 7u65) 1.7.0.80 (Java SE 7u80) 1.7.0.80 (Java SE 7u80)
developer/java/jdk-8 Java 1.8.0.11 (Java SE 8u11) 1.8.0.45 (Java SE 8u45) 1.8.0.45 (Java SE 8u45)
developer/test/check check not included not included 0.9.14
developer/versioning/mercurial Mercurial SCM 2.8.2 3.2.3 3.4
developer/versioning/subversion Apache Subversion 1.7.5 1.7.5 1.7.20
diagnostic/nicstat nicstat not included not included 1.95
diagnostic/tcpdump tcpdump 4.5.1 4.5.1 4.7.4
diagnostic/wireshark Wireshark 1.10.7 1.10.14 1.12.5
driver/graphics/nvidia NVIDIA 0.331.38.0 0.346.35.0 0.346.35.0
driver/graphics/nvidiaR304 NVIDIA not included 0.304.125.0 0.304.125.0
driver/graphics/nvidiaR340 NVIDIA not included 0.340.65.0 0.340.65.0
file/mc GNU Midnight Commander 4.8.8 4.8.8 4.8.13
library/apr-15 Apache Portable Runtime not included not included 1.5.1
library/c++/net6 Gobby 1.3.12 1.3.14 1.3.14
library/jansson Jansson not included not included 2.7
library/json-c JSON-C 0.9 0.9 0.12
library/libee libee 0.3.2 0.3.2 0.4.1
library/libestr libestr 0.1.2 0.1.2 0.1.9
library/libgsl GNU GSL not included not included 1.16
library/liblogging LibLogging not included not included 1.0.4
library/libmicrohttpd GNU Libmicrohttpd not included not included 0.9.37
library/libmilter Sendmail 8.14.7 8.14.9 8.15.1
library/libxml2 XML C parser 2.9.1 2.9.2 2.9.2
library/neon neon 0.29.6 0.29.6 0.30.1
library/perl-5/openscap-512 OpenSCAP 1.0.0 1.0.0 1.2.3
library/perl-5/xml-libxml CPAN: XML::LibXML 2.14 2.14 2.121
library/python/alembic (was library/python-2/alembic) alembic 0.6.0 0.7.4 0.7.4
library/python/amqp (was library/python-2/amqp) amqp 1.0.12 1.4.6 1.4.6
library/python/barbicanclient OpenStack not included 3.0.1 3.0.1
library/python/boto (was library/python-2/boto) boto 2.9.9 2.34.0 2.34.0
library/python/ceilometerclient OpenStack 1.0.10 1.0.12 1.0.12
library/python/cinderclient OpenStack 1.0.9 1.1.1 1.1.1
library/python/cliff (was library/python-2/cliff) cliff 1.4.5 1.9.0 1.9.0
library/python/django Django 1.4.11 1.4.20 1.4.20
library/python/django-pyscss django-pyscss not included 1.0.6 1.0.6
library/python/django_compressor (was library/python-2/django_compressor) django_compressor 1.3 1.4 1.4
library/python/django_openstack_auth (was library/python-2/django_openstack_auth) OpenStack 1.1.3 1.1.9 1.1.9
library/python/eventlet (was library/python-2/eventlet) eventlet 0.13.0 0.15.2 0.15.2
library/python/futures pythonfutures not included 2.2.0 2.2.0
library/python/glance_store OpenStack not included 0.1.10 0.1.10
library/python/glanceclient OpenStack 0.12.0 0.15.0 0.15.0
library/python/greenlet (was library/python-2/greenlet) greenlet 0.4.1 0.4.5 0.4.5
library/python/heatclient OpenStack 0.2.9 0.2.12 0.2.12
library/python/iniparse iniparse not included 0.4 0.4
library/python/ipaddr ipaddr-py not included 2.1.11 2.1.11
library/python/jinja2 Jinja 2.7.2 2.7.3 2.7.3
library/python/keystoneclient OpenStack 0.8.0 1.0.0 1.0.0
library/python/keystonemiddleware OpenStack not included 1.3.1 1.3.1
library/python/kombu (was library/python-2/kombu) kombu 2.5.12 3.0.7 3.0.7
library/python/ldappool ldappool not included 1.0 1.0
library/python/netaddr (was library/python-2/netaddr) netaddr 0.7.10 0.7.13 0.7.13
library/python/netifaces (was library/python-2/netifaces) netifaces 0.8 0.10.4 0.10.4
library/python/networkx NetworkX not included 1.9.1 1.9.1
library/python/neutronclient OpenStack 2.3.4 2.3.10 2.3.10
library/python/novaclient OpenStack 2.17.0 2.20.0 2.20.0
library/python/oauthlib OAuthLib not included 0.7.2 0.7.2
library/python/openscap OpenSCAP 1.0.0 1.0.0 1.2.3
library/python/oslo.config OpenStack 1.3.0 1.6.0 1.6.0
library/python/oslo.context OpenStack not included 0.1.0 0.1.0
library/python/oslo.db OpenStack not included 1.0.3 1.0.3
library/python/oslo.i18n OpenStack not included 1.3.1 1.3.1
library/python/oslo.messaging OpenStack not included 1.4.1 1.4.1
library/python/oslo.middleware OpenStack not included 0.4.0 0.4.0
library/python/oslo.serialization OpenStack not included 1.2.0 1.2.0
library/python/oslo.utils OpenStack not included 1.2.1 1.2.1
library/python/oslo.vmware OpenStack not included 0.8.0 0.8.0
library/python/osprofiler OpenStack not included 0.3.0 0.3.0
library/python/pep8 (was library/python-2/pep8) PyPI: pep8 1.4.4 1.4.4 1.5.7
library/python/pip (was library/python-2/pip) pip 1.4.1 1.4.1 6.0.8
library/python/posix_ipc POSIX IPC for Python not included 0.9.9 0.9.9
library/python/py (was library/python-2/py) py 1.4.15 1.4.26 1.4.26
library/python/pycadf OpenStack not included 0.6.0 0.6.0
library/python/pyflakes (was library/python-2/pyflakes) pyflakes 0.7.2 0.8.1 0.8.1
library/python/pyscss pyScss not included 1.2.1 1.2.1
library/python/pysendfile pysendfile not included 2.0.1 2.0.1
library/python/pytest (was library/python-2/pytest) pytest 2.3.5 2.6.4 2.6.4
library/python/python-mysql (was library/python-2/python-mysql) python-mysql 1.2.2 1.2.5 1.2.5
library/python/pytz (was library/python-2/pytz) pytz 2013.4 2014.10 2014.10
library/python/requests (was library/python-2/requests) requests 1.2.3 2.6.0 2.6.0
library/python/retrying Retrying not included 1.3.3 1.3.3
library/python/rfc3986 rfc3986 not included 0.2.0 0.2.0
library/python/saharaclient OpenStack not included 0.7.6 0.7.6
library/python/setuptools (was library/python-2/setuptools) PyPI: setuptools 0.6.11 0.6.11 0.9.6
library/python/simplegeneric PyPI: simplegeneric not included 0.8.1 0.8.1
library/python/simplejson (was library/python-2/simplejson) simplejson 2.1.2 3.6.5 3.6.5
library/python/six PyPI: six 1.6.1 1.9.0 1.9.0
library/python/sqlalchemy (was library/python-2/sqlalchemy) sqlalchemy 0.7.9 0.9.8 0.9.8
library/python/sqlalchemy-migrate (was library/python-2/sqlalchemy-migrate) sqlalchemy-migrate 0.7.2 0.9.1 0.9.1
library/python/stevedore (was library/python-2/stevedore) stevedore 0.10 1.2.0 1.2.0
library/python/swiftclient OpenStack 2.1.0 2.3.1 2.3.1
library/python/taskflow OpenStack not included 0.6.1 0.6.1
library/python/tox (was library/python-2/tox) tox 1.4.3 1.8.1 1.8.1
library/python/troveclient OpenStack 0.1.4 1.0.8 1.0.8
library/python/virtualenv (was library/python-2/virtualenv) virtualenv 1.9.1 12.0.7 12.0.7
library/python/websockify Websockify 0.5.1 0.6.0 0.6.0
library/python/wsme wsme not included 0.6.4 0.6.4
library/ruby/hiera Puppet not included 1.3.4 1.3.4
library/security/libassuan GnuPG 2.0.1 2.2.0 2.2.0
library/security/libksba GnuPG 1.1.0 1.3.2 1.3.2
library/security/openssl OpenSSL 1.0.1.8 (1.0.1h) 1.0.1.13 (1.0.1m) 1.0.1.15 (1.0.1o)
library/unixodbc unixODBC 2.3.0 2.3.0 2.3.1
library/zlib zlib 1.2.3 1.2.3 1.2.8
mail/mailman GNU Mailman not included not included 2.1.18.1
network/dns/bind ISC BIND 9.6.3.11.0 (9.6-ESV-R11) 9.6.3.11.1 (9.6-ESV-R11) 9.6.3.11.1 (9.6-ESV-R11-P1)
network/firewall OpenBSD PF not included not included 5.5
network/mtr MTR not included not included 0.86
network/openssh OpenSSH not included not included 6.5.0.1
network/rsync rsync 3.1.0 3.1.0 3.1.1
print/filter/hplip HPLIP 3.12.4 3.14.6 3.14.6
runtime/erlang erlang 15.2.3 17.5 17.5
runtime/java/jre-6 Java 1.6.0.75 (Java SE 6u75) 1.6.0.95 (Java SE 6u95) not included
runtime/java/jre-7 Java 1.7.0.65 (Java SE 7u65) 1.7.0.80 (Java SE 7u80) 1.7.0.80 (Java SE 7u80)
runtime/java/jre-8 Java 1.8.0.11 (Java SE 8u11) 1.8.0.45 (Java SE 8u45) 1.8.0.45 (Java SE 8u45)
runtime/python-27 Python 2.7.3 2.7.9 2.7.9
runtime/python-34 Python not included not included 3.4.3
runtime/ruby-21 Ruby not included (Ruby 1.9.3 in runtime/ruby-19) 2.1.6
security/compliance/openscap OpenSCAP 1.0.0 1.0.0 1.2.3
security/sudo Sudo 1.8.6.7 1.8.9.5 1.8.9.5
service/network/dns/bind ISC BIND 9.6.3.11.0 (9.6-ESV-R11) 9.6.3.11.1 (9.6-ESV-R11) 9.6.3.11.1 (9.6-ESV-R11-P1)
service/network/ftp ProFTPD 1.3.4.0.3 (1.3.4c) 1.3.5 1.3.5
service/network/ntp NTP 4.2.7.381 (4.2.7p381) 4.2.8.2 (4.2.8p2) 4.2.8.2 (4.2.8p2)
service/network/samba Samba 3.6.23 3.6.25 3.6.25
service/network/smtp/postfix Postfix not included not included 2.11.3
service/network/smtp/sendmail Sendmail 8.14.7 8.14.9 8.15.1
shell/bash GNU bash 4.1.11 4.1.17 4.1.17
shell/watch procps-ng not included not included 3.3.10
shell/zsh Zsh 5.0.5 5.0.7 5.0.7
system/data/hardware-registry pci.ids / usb.ids 2012.06.25 / 2012.06.11 2015.03.02 / 2015.02.21 2015.03.02 / 2015.02.21
system/data/timezone IANA Time Zone Data 0.5.11 (2014c) 0.5.11 (2015d) 2015.4 (2015d)
system/font/truetype/google-droid Droid Fonts 0.2010.2.24 0.2010.2.24 0.2013.6.7
system/library/freetype-2 FreeType 2.4.11 2.5.5 2.5.5
system/library/hmp-libs Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/library/i18n/libthai libthai 0.1.9 0.1.9 0.1.14
system/library/libdatrie datrie 0.1.2 0.1.2 0.2.4
system/management/biosconfig Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/management/facter Puppet 1.6.18 2.1.0 2.1.0
system/management/fwupdate Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/management/fwupdate/qlogic Oracle HMP 1.7.3 1.7.4 1.7.4
system/management/hmp-snmp Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/management/hwmgmtcli Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/management/hwmgmtd Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/management/ocm Oracle Configuration Manager 12.0.0.0.0 12.1.0.0.0 12.1.0.0.0
system/management/puppet Puppet 3.4.1 3.6.2 3.6.2
system/management/raidconfig Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/management/ubiosconfig Oracle HMP 2.2.8 2.3.2.3 2.3.2.4
system/rsyslog rsyslog 6.2.0 6.2.0 8.4.2
system/test/sunvts Oracle VTS 7.18.1 7.19.2 7.19.2
terminal/tmux tmux 1.8 1.8 1.9.1
text/gnu-patch GNU Patch 2.5.9 2.7.1 2.7.1
text/groff GNU troff 1.19.2 1.19.2 1.22.2
text/less Less 436 436 458
text/text-utilities util-linux not included not included 2.24.2
web/browser/firefox Mozilla Firefox 17.0.11 31.6.0 31.6.0
web/browser/links Links 1.0.3 1.0.3 2.9
web/curl cURL 7.21.2 7.21.2 7.40.0
web/java-servlet/tomcat Apache Tomcat 6.0.41 6.0.43 6.0.43
web/java-servlet/tomcat-8 Apache Tomcat not included not included 8.0.21
web/novnc noVNC not included 0.5 0.5
web/php-53 PHP 5.3.28 5.3.29 5.3.29
web/php-56 PHP not included not included 5.6.8
web/php-56/extension/php-suhosin-extension Suhosin not included not included 0.9.37.1
web/php-56/extension/php-xdebug Xdebug not included not included 2.3.2
web/server/apache-22 Apache HTTPD 2.2.27 2.2.29 2.2.29
web/server/apache-22/module/apache-jk Apache Tomcat 1.2.28 1.2.28 1.2.40
web/server/apache-22/module/apache-security ModSecurity 2.7.5 2.7.5 2.8.0
web/server/apache-22/module/apache-wsgi mod_wsgi 3.3 3.3 4.3.0
web/server/apache-24 Apache HTTPD not included not included 2.4.12
web/server/apache-24/module/apache-dtrace Apache DTrace module not included not included 0.3.1
web/server/apache-24/module/apache-fcgid Apache mod_fcgid not included not included 2.3.9
web/server/apache-24/module/apache-jk Apache Tomcat not included not included 1.2.40
web/server/apache-24/module/apache-security ModSecurity not included not included 2.8.0
web/server/apache-24/module/apache-wsgi-26, web/server/apache-24/module/apache-wsgi-27, web/server/apache-24/module/apache-wsgi-34 mod_wsgi not included not included 4.3.0
web/wget GNU wget 1.14 1.16 1.16
x11/server/xorg/driver/xorg-input-keyboard X.Org 1.7.0 1.7.0 1.8.0
x11/server/xorg/driver/xorg-input-mouse X.Org 1.9.0 1.9.0 1.9.1
x11/server/xorg/driver/xorg-input-synaptics X.Org 1.7.1 1.7.1 1.7.8
x11/server/xorg/driver/xorg-video-ast X.Org 0.97.0 1.0.1 1.0.1
x11/server/xorg/driver/xorg-video-dummy X.Org 0.3.6 0.3.6 0.3.7
x11/server/xorg/driver/xorg-video-mga X.Org 1.6.2 1.6.2 1.6.3
x11/server/xorg/driver/xorg-video-vesa X.Org 2.3.2 2.3.2 2.3.3

15 Jul 2015 9:29pm GMT

Peter Hutterer: Using git-notes for marking test suite successes

The libinput test suite takes somewhere around 35 minutes now for a full run. That's annoying, especially as I'm running it for every commit before pushing. I've tried optimising things, but attempts at making it parallel have mostly failed so far (almost all tests need a uinput device created) and too many tests rely on specific timeouts to check for behaviours. Containers aren't an option when you have to create uinput devices so I started out farming out into VMs.

Ideally, the test suite should run against multiple commits (on multiple VMs) at the same time while I'm working on some other branch and then accumulate the results. And that's where git notes come in. They're a bit odd to use and quite the opposite of what I expected. But in short: a git note is an object that can be associated with a commit, without changing the commit itself. Sort-of like a post-it note attached to the commit. But there are plenty of limitations, for example you can only have one note (per namespace) and merge conflicts are quite easy to trigger. Look at any git notes tutorial to find out more, there's plenty out there.

Anyway, dealing with merge conflicts is a no-go for me here. So after a bit of playing around, I found something that seems to work out well. A script to run make check and add notes to the commit, combined with a repository setup to fetch those notes and display them automatically. The core of the script is this:


make check
rc=$?
if [ $rc -eq 0 ]; then
    status="SUCCESS"
else
    status="FAIL"
fi

# $sha is the commit being tested, set earlier in the full script
if [ -n "$sha" ]; then
    git notes --ref "test-$HOSTNAME" append \
        -m "$status: $HOSTNAME: make check `date`" HEAD
fi
exit $rc

Then in my main repository, I add each VM as a remote, adding a fetch path for the notes:


[remote "f22-libinput1"]
url = f22-libinput1.local:/home/whot/code/libinput
fetch = +refs/heads/*:refs/remotes/f22-libinput1/*
fetch = +refs/notes/*:refs/notes/f22-libinput1/*

Finally, in the main repository, I extended the glob that displays notes to 'everything':


$ git config notes.displayRef "*"

Now git log (and by extension tig) displays all notes attached to a commit automatically. All that's needed is a git fetch --all to fetch everything, and it's clear in the logs which commit failed and which one succeeded.


:: whot@jelly:~/code/libinput (master)> git log
commit 6896bfd3f5c3791e249a0573d089b7a897c0dd9f
Author: Peter Hutterer
Date: Tue Jul 14 14:19:25 2015 +1000

test: check for fcntl() return value

Mostly to silence coverity complaints.

Signed-off-by: Peter Hutterer

Notes (f22-jelly/test-f22-jelly):
SUCCESS: f22-jelly: make check Tue Jul 14 00:20:14 EDT 2015

Whenever I look at the log now, I immediately see which commits passed the test suite and which ones didn't (or haven't had it run yet). The only annoyance is that since a note is attached to a commit, amending the commit message or rebasing makes the note "go away". I've copied notes manually after this, but it'd be nice to find a solution to that.

Everything else has been working great so far, but it's quite new so there'll be a bit of polishing happening over the next few weeks. Any suggestions to improve this are welcome.

15 Jul 2015 2:31am GMT

09 Jul 2015


Peter Hutterer: Why libinput doesn't support edge scrolling

Update June 09, 2015: edge scrolling for clickpads has been merged and will be available in libinput 0.20. Consider the rest of this post obsolete.

libinput supports edge scrolling since version 0.7.0. Whoops, how does the post title go with this statement? Well, libinput supports edge scrolling, but only on some devices and chances are your touchpad won't be one of them. Bug 89381 is the reference bug here.

First, what is edge scrolling? As the libinput documentation illustrates, it is scrolling triggered by finger movement within specific regions of the touchpad - the left and bottom edges for vertical and horizontal scrolling, respectively. This is in contrast to two-finger scrolling, triggered by a two-finger movement, anywhere on the touchpad. synaptics had edge scrolling since at least 2002, the earliest commit in the repo. Back then we didn't have multitouch-capable touchpads, these days they're the default and you'd be struggling to find one that doesn't support at least two fingers. But back then edge-scrolling was the default, and touchpads even had the markings for those scroll edges painted on.

libinput adds a whole bunch of features to the touchpad driver, but those features make it hard to support edge scrolling. First, libinput has quite smart software button support. Those buttons are usually on the lowest ~10mm of the touchpad. Depending on finger movement and position libinput will send a right button click, movement will be ignored, etc. You can leave one finger in the button area while using another finger on the touchpad to move the pointer. You can press both left and right areas for a middle click. And so on. On many touchpads the vertical travel/physical resistance is enough to trigger a movement every time you click the button, just by your finger's logical center moving.

libinput also has multi-direction scroll support. Traditionally we only sent one scroll event for vertical/horizontal at a time, even going as far as locking the scroll direction. libinput changes this and only requires an initial threshold to start scrolling; after that the caller will get both horizontal and vertical scroll information. The reason is simple: it's context-dependent when horizontal scrolling should be used, so a global toggle to disable it doesn't make sense. And libinput's scroll coordinates are much more fine-grained too, which is particularly useful for natural scrolling where you'd expect the content to move with your fingers.

Finally, libinput has smart palm detection. The large majority of palm touches are along the left and right edges of the touchpad and they're usually indistinguishable from finger presses (same pressure values for example). Without palm detection some laptops are unusable (e.g. the T440 series).

These features interfere heavily with edge scrolling. Software button areas are in the same region as the horizontal scroll area, palm presses are in the same region as the vertical edge scroll area. The lower vertical edge scroll zone overlaps with software buttons - and that's where you would put your finger if you'd want to quickly scroll up in a document (or down, for natural scrolling). To support edge scrolling on those touchpads, we'd need heuristics and timeouts to guess when something is a palm, a software button click, a scroll movement, the start of a scroll movement, etc. The heuristics are unreliable, the timeouts reduce responsiveness in the UI. So our decision was to only provide edge scrolling on touchpads where it is required, i.e. those that cannot support two-finger scrolling, those with physical buttons. All other touchpads provide only two-finger scrolling. And we are focusing on making 2 finger scrolling good enough that you don't need/want to use edge scrolling (pls file bugs for anything broken)

Now, before you get too agitated: if edge scrolling is that important to you, invest the time you would otherwise spend sharpening pitchforks, lighting torches and painting picket signs into developing a model that allows us to do reliable edge scrolling in light of all the above, without breaking software buttons, maintaining palm detection. We'd be happy to consider it.

09 Jul 2015 2:51am GMT

08 Jul 2015


Iago Toral: Implementing ARB_shader_storage_buffer

In my previous post I introduced ARB_shader_storage_buffer, an OpenGL 4.3 feature that is coming soon to Mesa and the Intel i965 driver. While that post focused on explaining the features introduced by the extension, in this post I'll dive into some of the implementation aspects, for those who are curious about this kind of stuff. Be warned that some parts of this post will be specific to Intel hardware.

Following the trail of UBOs

As I explained in my previous post, SSBOs are similar to UBOs, but they are read-write. Because there is a lot of code already in place in Mesa's GLSL compiler to deal with UBOs, it made sense to try to reuse the data structures and code we had for UBOs and specialize the behavior for SSBOs where needed. That allowed us to build on code paths that were already working well and to reuse most of the code.

That path, however, had some issues that bit me a bit further down the road. When it comes to representing these operations in the IR, my first idea was to follow the trail of UBO loads as well, which are represented as ir_expression nodes. There is a fundamental difference between the two though: UBO loads are constant operations because uniform buffers are read-only. This means that a UBO load operation with the same parameters will always return the same value. This has implications related to certain optimization passes that work based on the assumption that other ir_expression operations share this feature. SSBO loads are not like this: since the shader storage buffer is read-write, two identical SSBO load operations in the same shader may not return the same result if the underlying buffer storage has been altered in between by SSBO write operations within the same or other threads. This forced me to alter a number of optimization passes in Mesa to deal with this situation (mostly disabling them for the cases of SSBO loads and stores).

The situation was worse with SSBO stores. These just did not fit into ir_expression nodes: they did not return a value and had side-effects (memory writes) so we had to come up with a different way to represent them. My initial implementation created a new IR node for these, ir_ssbo_store. That worked well enough, but it left us with an implementation of loads and stores that was a bit inconsistent since both operations used very different IR constructs.

These issues were made clear during the review process, where it was suggested that we use GLSL IR intrinsics to represent load and store operations instead. This has the benefit that we can make the implementation more consistent, having both loads and stores represented with the same IR construct and following a similar treatment in both the GLSL compiler and the i965 backend. It would also remove the need to disable or alter certain optimization passes to be SSBO friendly.

Read/Write coherence

One of the issues we detected early in development was that our reads and writes did not seem to work very well together: sometimes a read after a write would fail to see the last value written to a buffer variable. The problem here also stemmed from following the implementation trail of the UBO path. In Intel hardware, there are various interfaces to access memory, like the Sampling Engine and the Data Port. The former is a read-only interface and is used, for example, for texture and UBO reads. The Data Port allows for read-write access. Although both interfaces give access to the same memory region, there is something to consider here: if you mix reads through the Sampling Engine and writes through the Data Port you can run into cache coherence issues, because the caches in use by the Sampling Engine and the Data Port functions are different. Initially, we implemented SSBO load operations like UBO loads, so we used the Sampling Engine, and ended up running into this problem. The solution, of course, was to rewrite SSBO loads to go through the Data Port as well.

Parallel reads and writes

GPUs are highly parallel hardware and this has some implications for driver developers. Take a statement like this in a fragment shader program:

float cx = 1.0;

This is a simple assignment of the value 1.0 to the variable cx that is supposed to happen for each fragment produced. In Intel hardware running in SIMD16 mode, we process 16 fragments simultaneously in the same GPU thread, which means that this instruction is actually 16 elements wide. That is, we are doing 16 assignments of the value 1.0 simultaneously, each one stored at a different offset into the GPU register used to hold the value of cx.

If cx were a buffer variable in an SSBO, it would mean that the assignment above translates into 16 memory writes to the same offset into the buffer. That may seem a bit absurd: why would we want to write 16 times if we are always assigning the same value? Well, because things can get more complex, like this:

float cx = gl_FragCoord.x;

Now we are no longer assigning the same value to all fragments; each of the 16 values assigned with this instruction could be different. If cx were a buffer variable inside an SSBO, then we could potentially be writing 16 different values to it. It is still a bit silly, since only one of the values (the one we write last) would prevail.

Okay, but what if we do something like this?:

int index = int(mod(gl_FragCoord.x, 8));
cx[index] = 1;

Now, depending on the value of gl_FragCoord.x for each fragment, we are writing to a separate offset into the SSBO. We still have a single assignment in the GLSL program, but it translates to 16 different writes; in this case the order may not be relevant, but we want all of them to happen to achieve correct behavior.
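
To make that parallelism concrete, here is a tiny stand-alone C sketch (purely illustrative, not driver or hardware code) of what a 16-wide execution of the snippet above has to do: each lane computes its own offset and all 16 stores must be carried out.

#include <stdio.h>

int main(void)
{
    float ssbo[8] = {0};         /* stand-in for the buffer backing cx[] */

    for (int lane = 0; lane < 16; lane++) {
        int frag_x = lane;       /* stand-in for gl_FragCoord.x in this lane */
        int index = frag_x % 8;  /* mod(gl_FragCoord.x, 8) */
        ssbo[index] = 1.0f;      /* one store per lane, 16 stores in total */
    }

    for (int i = 0; i < 8; i++)
        printf("ssbo[%d] = %.1f\n", i, ssbo[i]);
    return 0;
}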

The bottom line is that when we implement SSBO load and store operations, we need to understand the parallel environment in which we are running and work with test scenarios that allow us to verify correct behavior in these situations. For example, if we only test scenarios with assignments that give the same value to all the fragments/vertices involved in the parallel instructions (i.e. assignments of values that do not depend on properties of the current fragment or vertex), we could easily overlook fundamental defects in the implementation.

Dealing with helper invocations

From Section 7.1 of the GLSL spec version 4.5:

"Fragment shader helper invocations execute the same shader code
as non-helper invocations, but will not have side effects that
modify the framebuffer or other shader-accessible memory."

To understand what this means I have to introduce the concept of helper invocations: certain operations in the fragment shader need to evaluate derivatives (explicitly or implicitly) and for that to work well we need to make sure that we compute values for adjacent fragments that may not be inside the primitive that we are rendering. The fragment shader executions for these added fragments are called helper invocations, meaning that they are only needed to help in computations for other fragments that are part of the primitive we are rendering.

How does this affect SSBOs? Because helper invocations are not part of the primitive, they cannot have side effects: after they have served their purpose it should be as if they had never been produced. So in the case of SSBOs we have to be careful not to do memory writes for helper fragments. Notice also that in a SIMD16 execution we can have both proper and helper fragments mixed in the group of 16 fragments we are handling in parallel.

Of course, the hardware knows whether a fragment is part of a helper invocation or not, and it tells us about this through a pixel mask register that is delivered with every execution of a fragment shader thread; this register holds a bitmask stating which pixels are proper and which are helpers. The Intel hardware also provides developers with various kinds of messages that we can use, via the Data Port interface, to write to memory. However, the tricky thing is that not all of them incorporate pixel mask information, so for use cases where you need to disable writes from helper fragments you need to be careful with the write message you use and select one that accepts this sort of information.
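
Conceptually (and only conceptually; in reality it is the pixel mask carried by the hardware write message that does this), masking helper invocations out of a store amounts to something like this C sketch:

#include <stdint.h>

/* Illustrative sketch only: bit N of pixel_mask set means lane N belongs to
 * a real fragment; helper lanes have their bit cleared and must not write. */
static void masked_store(float *buffer, const int offsets[16],
                         const float values[16], uint16_t pixel_mask)
{
    for (int lane = 0; lane < 16; lane++) {
        if (pixel_mask & (1u << lane))   /* skip helper invocations */
            buffer[offsets[lane]] = values[lane];
    }
}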

Vector alignments

Another interesting thing we had to deal with is address alignment. UBOs work with layout std140. In this setup, elements in the UBO definition are aligned to 16-byte boundaries (the size of a vec4). It turns out that GPUs can usually optimize reads and writes to multiples of 16 bytes, so this makes sense. However, as I explained in my previous post, SSBOs also introduce a packed layout mode known as std430.

Intel hardware provides a number of messages that we can use through the Data Port interface to write to memory. Each message has different characteristics that make it more suitable for certain scenarios, like the pixel mask I discussed before. For example, some of these messages can write data in chunks of 16 bytes (that is, they write vec4 elements, or OWords in the language of the technical docs). One might think that these messages are great when you work with vector data types; however, they also introduce the problem of dealing with partial writes: what happens when you only write to one element of a vector? Or to a buffer variable that is smaller than the size of a vector? What if you write columns in a row_major matrix? And so on.

In these scenarios, using these messages introduces the need to mask the writes, because you need to disable the channels in the vec4 element that you don't want to write. Of course, the hardware provides means to do this: we only need to set the writemask of the destination register of the message instruction to select the right channels. Consider this example:

struct TB {
    float a, b, c, d;
};

layout(std140, binding=0) buffer Fragments {
   TB s[3];
   int index;
};

void main()
{
   s[0].d = -1.0;
}

In this case, we could use a 16-byte write message that takes 0 as its offset (i.e. it writes at the beginning of the buffer, where s[0] is stored) and then set the writemask on that instruction to WRITEMASK_W so that only the fourth data element is actually written. This way we only write one 4-byte data element (-1) at byte offset 12 (s[0].d). Easy, right? However, how do we know, in general, the writemask that we need to use? In std140 layout mode this is easy: since each element in the SSBO is aligned to a 16-byte boundary, we simply take the byte offset at which we are writing modulo 16, which gives the byte offset into the 16-byte chunk we are writing into, and then divide that by 4 to get the component slot we need to write to (a number between 0 and 3).
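
Expressed as code, the std140 rule is just a bit of integer arithmetic. Here is a stand-alone C sketch of that calculation only (the WRITEMASK_* values follow the naming used above but are defined here purely for illustration):

#include <stdio.h>
#include <stdint.h>

enum {
    WRITEMASK_X = 1 << 0,
    WRITEMASK_Y = 1 << 1,
    WRITEMASK_Z = 1 << 2,
    WRITEMASK_W = 1 << 3,
};

/* For a scalar write at a known byte offset in an std140 buffer, return the
 * writemask selecting the vec4 component that the offset falls into. */
static unsigned std140_writemask(uint32_t byte_offset)
{
    unsigned byte_in_chunk = byte_offset % 16; /* offset within the 16-byte chunk */
    unsigned component = byte_in_chunk / 4;    /* 0..3 maps to x..w */
    return 1u << component;
}

int main(void)
{
    /* s[0].d from the example lives at byte offset 12, i.e. WRITEMASK_W (0x8) */
    printf("writemask for offset 12: 0x%x\n", std140_writemask(12));
    return 0;
}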

However, there is a restriction: we can only set the writemask of a register at compile/link time, so what happens when we have something like this?:

s[i].d = -1.0;

The problem with this is that we cannot evaluate the value of i at compile/link time, which makes our solution invalid here. In other words, if we cannot evaluate the actual value of the offset at which we are writing at compile/link time, we cannot use the writemask to select the channels we want when we don't want to write a full vec4 worth of data, and we have to use a different type of message.

That said, in std140 layout mode each data element in the SSBO is aligned to a 16-byte boundary, so you may realize that the actual value of i is irrelevant for the purpose of the modulo operation discussed above, and we can still make things work by completely ignoring it when computing the writemask. In std430, however, that trick won't work at all, and even in std140 we would still have row_major matrix writes to deal with.

Also, we may need to tweak the message depending on whether we are running in the vertex shader or the fragment shader, because not all message types have appropriate SIMD modes (SIMD4x2, SIMD8, SIMD16, etc.) for both, or because different hardware generations may not have all the message types or support all the SIMD modes we need, etc.

The point of this is that selecting the right message to use can be tricky: there are multiple things and corner cases to consider, and you do not want to end up with an implementation that requires using many different messages depending on various circumstances, because of the complexity that would add to the implementation and maintenance of the code.

Closing notes

This post did not cover all the intricacies of the implementation of ARB_shader_storage_buffer_object; I did not discuss things like the optional unsized array or the compiler details of std430, for example. Hopefully, though, I managed to give an idea of the kind of problems one has to deal with when coding driver support for this or other similar features.

08 Jul 2015 7:02am GMT

04 Jul 2015

feedplanet.freedesktop.org

Rob Clark: happy (gpu) independence day

So, I realized it has been a while since I posted about freedreno progress, so in honor of US independence day I figured it was as good an excuse as any for an update about independence from the gpu blob driver for snapdragon/adreno.

Back at ELC at the end of March 2015, I gave a freedreno update presentation listing the following major tasks left for gles3 support:
  • Uniform Buffer Objects (UBO)
  • Transform Feedback (TF)
  • Multi-Render-Target (MRT)
  • advanced flow control in shader compiler
and additionally for gl3:
  • Multisample anti-aliasing (MSAA)
  • NV_conditional_render
  • 32b depth (z32 and z32_s8) (which I forgot to mention in the presentation)
EDIT: Ilia pointed out that 32b depth is needed for gles3 too, and gl3 additionally needs clipdist/etc (which we'll have to emulate, but hopefully can do in a generic nir pass) and rgtc (which will need sw decompression hopefully in mesa core so other drivers for gles class hw can reuse). Original list was based on what mesa's compute_version() code was checking quite some time back.

Since then, we've gained support for UBO's (a3xx by Ilia Mirkin, and a4xx), MRT (for a3xx and core, again thanks to Ilia.. still needs to be wired up for a4xx), 32b depth (a3xx and core, again thanks to Ilia), and I've finished up shader compiler for loops/flow-control for ir3 (a3xx/a4xx). The shader compiler work was a somewhat larger task than I expected (and I did expect it to be a lot of work), but it also involved moving over to NIR, in addition to re-writing the scheduler and register allocation passes, as well as a lot of re-org to ir3 in order to support multiple basic blocks. The move to NIR was not strictly required, but it brings a lot of benefits in the form of shared support for conversion to SSA, scalarizing, CSE, DCE, constant folding, and algebraic optimizations. And I figured it was less work in the long run to move to NIR first and drop the TGSI frontend, before doing all the refactoring needed to support loops and non-lowerable flow-control. Incidentally, the compiler work should make the shader-compiler part of TF easier (since we need to generate a conditional write to TF buffer iff not overwriting past the end of the TF buffer).

In the meantime, freedreno and drm/msm have also gained support for the a306 gpu found in the new dragonboard 410c. This board is a nice new low-cost ($75) snapdragon community board based on the 64bit snapdragon 410. And thanks to a lot of work by linaro and qualcomm, the upstream kernel situation for this board is looking pretty good. It is shipping initially with a 4.0 based kernel (with patches on top for stuff that hadn't yet been merged for 4.0, including a lot of stuff backported from 4.1 and 4.2), including gpu/display/audio/video-codec/etc. I believe that the 4.1 kernel was the first version where a vanilla kernel could boot on db410c with basic stuff (like serial console) working. The kernel support for the gpu and display (other than the adv7533 hdmi bridge chip) landed in 4.2. There is still more work to get *everything* (including audio, vidc, etc) merged upstream, but work continues in that direction, making this quite an exciting board.
Also, we have a GSoC student, Varad, working on freedreno support for android. It is still in early stages, with some debugging still to do, but he has made a lot of progress and things are starting to work.
And since no blog post is complete without some nice screenshots... the other day someone pointed me at a post in the dolphin forums about how dolphin was running on a420 (same device as in the ifc6540). We all had a good laugh about the rendering issues with the blob driver. But, since dolphin was the first gl3 game that worked with freedreno, I was curious how freedreno would do.. so I fired up the ifc6540 and replayed some dolphin fifo logs that would let me render approximately the same scenes:

[screenshots: Dolphin scenes (Yoshi and Digimon) rendered with freedreno]

Yoshi looks to be rendering pretty well. Digimon has a bit of corruption, but nowhere near as bad as the blob driver. I suspect the issue with digimon is an instruction scheduling issue in the shader compiler (well, no rest for the gpu driver writers), but it is nice to see that it is already in pretty good shape.

Now we just need steam store or some unigine demos for arm linux :-P



04 Jul 2015 6:54pm GMT

Corbin Simpson: Monte: Types

In type-theoretic terms, Monte has a very boring type system. All objects expressible in Monte form a set, Mont, which has some properties, but not anything interesting from a theoretical point of view. I plan to talk about Mont later, but for now we'll just consider it to be a way for me to make existential or universal claims about Monte's object model.

Let's start with guards. Guards are one of the most important parts of writing idiomatic Monte, and they're also definitely an integral part of Monte's safety and security guarantees. They look like types, but are they actually useful as part of a type system?

Let's consider the following switch expression:

switch (x):
    match c :Char:
        "It's a character"
    match i :Int:
        "It's an integer"
    match _:
        "I don't know what it is!"

The two guards, Char and Int, perform what amounts to a type discrimination. We might have an intuition that if x were to pass Char, then it would not pass Int, and vice versa; we might also have an intuition that the order of checking Char and Int does not matter. I'm going to formalize these and show how strong they can be in Monte.

When a coercion happens, the object being coerced is called the specimen. The result of the coercion is called the prize. You've already been introduced to the guard, the object which is performing the coercion.

It happens that a specimen might override a Miranda method, _conformTo/1, in order to pass guards that it cannot normally pass. We call all such specimens conforming. All specimens that pass a guard also conform to it, but some non-passing specimens might still be able to conform by yielding a prize to the guard.

Here's an axiom of guards: For all objects in Mont, if some object specimen conforms to a guard G, and def prize := G.coerce(specimen, _), then prize passes G. This cannot be proven by any sort of runtime assertion (yet?), but any guard that does not obey this axiom is faulty. One expects that a prize returned from a coercion passes the guard that was performing the coercion; without this assumption, it would be quite foolhardy to trust any guard at all!

With that in mind, let's talk about properties of guards. One useful property is idempotence. An idempotent guard G is one that, for all objects in Mont which pass G, any such object specimen has the equality G.coerce(specimen, _) == specimen. (Monte's equality, if you're unfamiliar with it, considers two objects to be equal if they cannot be distinguished by any response to any message sent at them. I could probably craft equivalency classes out of that rule at some point in the future.)

Why is idempotency good? Well, it formalizes the intuition that objects aren't altered when coerced if they're already "of the right type of object." If I pass 42 to a function that has the pattern x :Int, I might reasonably expect that x will get 42 bound to it, and not 420 or some other wrong number.

Monte's handling of state is impure. This complicates things. Since an object's internal state can vary, its willingness to respond to messages can vary. Let's be more precise in our definition of passing coercion. An object specimen passes coercion by a guard G if, for some combination of specimen and G internal states, G.coerce(specimen, _) == specimen. If specimen passes for all possible combinations of specimen and G internal states, then we say that specimen always passes coercion by G. (And if specimen cannot pass coercion with any possible combination of states, then it never passes.)

Now we can get to retractability. An idempotent guard G is unretractable if, for all objects in Mont which pass coercion by G, those objects always pass coercion by G. The converse property, that it's possible for some object to pass but not always pass coercion, would make G retractable.

An unretractable guard provides a very comfortable improvement over an idempotent one, similar to dipping your objects in DeepFrozen. I think that most of the interesting possibilities for guards come from unretractable guards. Most of the builtin guards are unretractable, too; data guards like Double and Str are good examples.

Theorem: An unretractable guard G partitions Mont into two disjoint subsets whose members always pass or never pass coercion by G, respectively. The proof is pretty trivial. This theorem lets us formalize the notion of a guard as protecting a section of code from unacceptable values; if Char is unretractable (and it is!), then a value guarded by Char is always going to be a character and never anything else. This theorem also gives us our first stab at a type declaration, where we might say something like "An object is of type Char if it passes Char."

Now let's go back to the beginning. We want to know how Char and Int interact. So, let's define some operations analogous to set union and intersection. The union of two unretractable guards G and H is written Any[G, H] and is defined as an unretractable guard that partitions Mont into the union of the two sets of objects that always pass G or H respectively, and all other objects. A similar definition can be created for the intersection of G and H, written All[G, H] and creating a similar partition with the intersection of the always-passing sets.

Both union and intersection are semigroups on the set of unretractable guards. (I haven't picked a name for this set yet. Maybe Mont-UG?) We can add in identity elements to get monoids. For union, we can use the hypothetical guard None, which refuses to pass any object in Mont, and for intersection, the completely real guard Any can be used.

object None:
    to coerce(_, ej):
        throw(ej, "None shall pass")

It gets better. The operations are also closed over Mont-UG, and it's possible to construct an inverse of any unretractable guard which is also an unretractable guard:

def invertUG(ug):
    return object invertedUG:
        to coerce(specimen, ej):
            escape innerEj:
                ug.coerce(specimen, innerEj)
                throw(ej, "Inverted")
            catch _:
                return specimen

This means that we have groups! Two lovely groups. They're both Abelian, too. Exciting stuff. And, in the big payoff of the day, we get two rings on Mont-UG, depending on whether you want to have union or intersection as your addition or multiplication.

This empowers a programmer, informally, to intuit that if Char and Int are disjoint (and, in this case, they are), then it might not matter in which order they are placed into the switch expression.

That's all for now!

04 Jul 2015 6:20pm GMT

30 Jun 2015

feedplanet.freedesktop.org

Christian Schaller: Fedora Workstation next steps : Introducing Pinos

So this will be the first in a series of blogs talking about some major initiatives we are doing for Fedora Workstation. Today I want to present and talk about a thing we call Pinos.

So what is Pinos? One of the original goals of Pinos was to provide the same level of advanced hardware handling for Video that PulseAudio provides for Audio. For those of you who have been around for a while, you might remember how, once upon a time, you could only have one application using the sound card at a time, until PulseAudio properly fixed that. Well, Pinos will allow you to share your video camera between multiple applications and also provide an easy-to-use API to do so.

Video providers and consumers are implemented as separate processes communicating over D-Bus and exchanging video frames using fd passing.

Some features of Pinos

What do we want to do with this in Fedora Workstation?

Who is working on this?
Pinos is being designed and written by Wim Taymans, who is the co-creator of the GStreamer multimedia framework and also a regular contributor to the PulseAudio project. Wim also works for Red Hat as a Principal Engineer, being in charge of a lot of our multimedia support in both Red Hat Enterprise Linux and Fedora. It is also worth noting that Pinos draws many of its ideas from an early prototype by William Manley called PulseVideo and builds upon some of the code that was merged into GStreamer due to that effort.

Where can I get the code?
The code is currently hosted in Wim's private repository on freedesktop. You can get it at cgit.freedesktop.org/~wtay/pinos.

How can I get involved or talk to the author
You can find Wim on Freenode IRC, he uses the name wtay and hangs out in both the #gstreamer and #pulseaudio IRC channels.
Once the project is a bit further along we will get some basic web presence set up and a mailing list created.

FAQ

If Pinos contains Audio support will it eventually replace PulseAudio too?
Probably not; the use cases and goals for the two systems are somewhat different, and it is not clear that trying to make Pinos accommodate all the PulseAudio use cases would be worth the effort or possible without feature loss. So while there is always a temptation to think 'hey, wouldn't it be nice to have one system that can handle everything', we are at this point unconvinced that the gain outweighs the pain.

Will Pinos offer re-directing kernel APIs for video devices like PulseAudio does for Audio? In order to handle legacy applications?
No, that was possible due to the way ALSA worked, but V4L2 doesn't have such capabilities and thus we can not take advantage of them.

Why the name Pinos?
The code name for the project was PulseVideo, but to avoid confusion with the PulseAudio project and to avoid people making too many assumptions based on the name, we decided to follow in the tradition of Wayland and Weston and take inspiration from local place names related to the creator. So since Wim lives in Pinos de Alhaurin, close to Malaga in Spain, we decided to call the project Pinos. Pinos is the word for pines in Spanish :)

30 Jun 2015 4:30pm GMT

26 Jun 2015

feedplanet.freedesktop.org

Daniel Vetter: Neat drm/i915 stuff for 4.2

The 4.1 kernel release is still a few weeks off, so it's a bit early to talk about 4.2. But the drm subsystem feature cut-off has already passed and I'm going on vacation for 2 weeks, so here we go.

First things first: No, i915 does not yet support atomic modesets. But a lot of progress has been made again towards enabling it. As I explained last time around the trouble is that the intel driver has grown its own almost-atomic modeset infrastructure over the past few years. And now we need to convert that to the slightly different proper atomic support infrastructure merged into the drm core, which means lots and lots of small changes all over the driver. A big part merged in this release is the removal of the ->new_config pointer by Ander, Matt & Maarten. This was the old i915-specific pointer to the staged new configuration. Removing it required switching all the CRTC code over to handling the staged configuration stored in the struct drm_atomic_state to be compatible with the atomic core. Unfortunately we still need to do the same for encoder/connector states and for plane states, so there's still lots of shuffling pending for 4.2.

There has also been other feature work going on on the modeset side: Ville cleaned&fixed up the CDCLK support in anticipation of implementing dynamic display clock frequency scaling. Unfortunately that part of his patches hasn't landed yet. Ville has also merged patches to fix up some details in the CPT modeset sequence, maybe this will finally fix the remaining "DP port stuck" issues we still seem to have.

Looking at newer platforms the interesting bit is rotation support for SKL from Sonika and Tvrtko. Compared to older platforms skl now also supports 90° and 270° rotation in the scanout engines, but only when the framebuffer uses a special tiling layout (which have been enabled in 4.0). A related feature is support for plane/CRTC scalers on SKL, provided by Chandra. Skylake has also gained support for the new low-power display states DC5/6. For Broxton basic enabling has landed, but there's nothing too interesting yet besides piles of small adjustments all over. This is because Broxton and Skylake have a common display block (similar to how the render block for atom chips was already shared since Baytrail) and hence share a lot of the infrastructure code. Unfortunately neither of these platforms has yet left the preliminary hardware support label for the i915 driver.

There are also a few minor features in the display code worth mentioning: DP compliance testing infrastructure from Todd Previte - DP compliance test devices have a special DP AUX sidechannel protocol for requesting certain test procedures and hence need a bit of driver support. Most of this will be in userspace though, with the kernel just forwarding requests and handing back results. Mika Kahola has optimized the DP link training: the kernel will now first try to use the current values (either from a previous modeset or set up by the firmware). PSR has also seen some more work; unfortunately it's still not enabled by default. And finally there have been lots of cleanups and improvements under the hood all over, as usual.

A big feature is the dynamic pagetable allocation for gen8+ from Michel Thierry and Ben Widawsky. This will greatly reduce the overhead of PPGTT and is a requirement for 48bit address space support - with that big a VM preallocating all the pagetables is just not possible any more. The gen7 cmd parser is now finally fixed up and enabled by default (thanks to Rebecca Palmer for one crucial fix), which means finally some newer GL extensions can be used without adding kernel hacks. And Chris Wilson has fine-tuned the cmd parser with a big pile of patches to reduce the overhead. And Chris has tuned the RPS boost code more, it should now no longer erratically boost the GPU's clock when it's inappropriate. He has also written a lot of patches to reduce the overhead of execlist command submission, and some of those patches have been merged into this release.

Finally two pieces of prep work: A few patches from John Harrison to prepare for removing the outstanding lazy request. We added this years ago as a cheap way out of a memory and ringbuffer space preallocation issue and ever since then have paid the price for it with added complexity leaking all over the GEM code. Unfortunately the actual removal is still pending. And then Joonas Lahtinen has implemented partial GTT mmap support. This is needed for virtual environments like XenGT where the GTT is cut up between the different guests and hence badly fragmented. The merged code only supports linear views and still needs support for fenced buffer objects to be actually useful.

26 Jun 2015 8:58am GMT

25 Jun 2015

feedplanet.freedesktop.org

Peter Hutterer: libinput touchpad gestures

One of the bits we are currently finalising in libinput is touchpad gestures. Gestures on normal touchscreens are left to the compositor and, in extension, to the client applications. Touchpad gestures are notably different though: they are bound to the location of the pointer or the keyboard focus (depending on the context) and they are less context-sensitive. Two fingers moving together on a touchscreen may be two windows being moved at the same time. On a touchpad, however, this is always a pinch.

Touchpad gestures are a lot more hardware-sensitive than touchscreen gestures, where we can just forward the touch points directly. On a touchpad we may have to consider software buttons or just hardware limitations of the touchpad. This prevents the implementation of touchpad gestures at a higher level - only libinput is aware of the location, size, etc. of software buttons.

Hence - touchpad gestures in libinput. The tree is currently sitting here and is being rebased as we go along, but we're expecting to merge this into master soon.

The interface itself is fairly simple: any device that may send gestures will have the LIBINPUT_DEVICE_CAP_GESTURE capability set. This is currently only implemented for touchpads but there is the potential to support this on other devices too. Two gestures are supported: swipe and pinch (+rotate). Both come with a finger count and both follow a Start/Update/End cycle. The finger count remains the same for the duration of a gesture, so if you switch from a two-finger pinch to a three-finger pinch you will see one gesture end and the next one start. Note that how to deal with this is up to the caller - it may very well consider this the same gesture semantically.

Swipe gestures have delta coordinates (horizontal and vertical) of the logical center of the gesture, compared to the previous event. A pinch gesture has the delta coordinates too, plus a delta angle (clockwise, in degrees). A pinch gesture also has the notion of an absolute scale: the Begin event always has a scale of 1.0, and that changes as the fingers move towards each other or further apart. A scale of 2.0 means they're now twice as far apart as originally.
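
As a rough sketch of what handling these events could look like on the caller's side (assuming a libinput build with the gesture support merged and an event already pulled from the event queue; this is illustrative, not code from the libinput tree):

#include <stdio.h>
#include <libinput.h>

/* Illustrative only: print the data carried by swipe and pinch updates. */
static void handle_gesture(struct libinput_event *event)
{
    struct libinput_event_gesture *gev =
        libinput_event_get_gesture_event(event);

    switch (libinput_event_get_type(event)) {
    case LIBINPUT_EVENT_GESTURE_SWIPE_UPDATE:
        printf("swipe (%d fingers): dx %.2f dy %.2f\n",
               libinput_event_gesture_get_finger_count(gev),
               libinput_event_gesture_get_dx(gev),
               libinput_event_gesture_get_dy(gev));
        break;
    case LIBINPUT_EVENT_GESTURE_PINCH_UPDATE:
        printf("pinch (%d fingers): scale %.2f angle delta %.2f\n",
               libinput_event_gesture_get_finger_count(gev),
               libinput_event_gesture_get_scale(gev),
               libinput_event_gesture_get_angle_delta(gev));
        break;
    default:
        break; /* Begin/End events carry the finger count as well */
    }
}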

Nothing overly exciting really; it's a simple API that provides a couple of basic elements of data. Once integrated into the desktop properly, it should provide for some improved navigation. OS X has had this for a long time now and it's about time we caught up.

25 Jun 2015 12:50am GMT

20 Jun 2015

feedplanet.freedesktop.org

David Herrmann: From AF_UNIX to kdbus

You're a developer and you know AF_UNIX? You used it occasionally in your code, you know how high-level IPC puts marshaling on top and generally have a confident feeling when talking about it? But you actually have no clue what this fancy new kdbus is really about? During discussions you just nod along and hope nobody notices?

Good.

This is how it should be! As long as you don't work on IPC libraries, there's absolutely no requirement for you to have any idea what kdbus is. But as you're reading this, I assume you're curious and want to know more. So let's pick you up at AF_UNIX and look at a simple example.

AF_UNIX

Imagine a handful of processes that need to talk to each other. You have two options: either you create a separate socket pair between each pair of processes, or you create just one socket per process and make sure you can address all others via this socket. The first option causes quadratic growth in the number of sockets and blows up as you raise the number of processes. Hence, we choose the latter, so our socket allocation looks like this:

int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC | SOCK_NONBLOCK, 0);

Simple. Now we have to make sure the socket has a name and others can find it. We choose to not pollute the file-system but rather use the managed abstract namespace. As we don't care for the exact names right now, we just let the kernel choose one. Furthermore, we enable credential-transmission so we can recognize peers that we get messages from:

struct sockaddr_un address = { .sun_family = AF_UNIX };
int enable = 1;

setsockopt(fd, SOL_SOCKET, SO_PASSCRED, &enable, sizeof(enable));
bind(fd, (struct sockaddr*)&address, sizeof(address.sun_family));

By omitting the sun_path part of the address, we tell the kernel to pick one itself. This was easy. Now we're ready to go, so let's see how we can send a message to a peer. For simplicity, we assume we know the address of the peer and it's stored in destination.

struct sockaddr_un destination = { .sun_family = AF_UNIX, .sun_path = "..." };

sendto(fd, "foobar", 7, MSG_NOSIGNAL, (struct sockaddr*)&destination, sizeof(destination));

…and that's all that is needed to send our message to the selected destination. On the receiver's side, we call into recvmsg to receive the first message from our queue. We cannot use recvfrom as we want to fetch the credentials, too. Furthermore, we also cannot know how big the message is, so we query the kernel first and allocate a suitable buffer. This could be avoided if we knew the maximum package size, but let's be thorough and support unlimited package sizes. Also note that recvmsg will return whatever message is queued next. We cannot know the sender beforehand, so we also pass a buffer to store the address of the sender of this message:

char control[CMSG_SPACE(sizeof(struct ucred))];
struct sockaddr_un sender = {};
struct ucred creds = {};
struct msghdr msg = {};
struct iovec iov = {};
struct cmsghdr *cmsg;
char *message;
ssize_t l;
int size;

ioctl(fd, SIOCINQ, &size);
message = malloc(size + 1);
iov.iov_base = message;
iov.iov_len = size;

msg.msg_name = (struct sockaddr*)&sender;
msg.msg_namelen = sizeof(sender);
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_control = control;
msg.msg_controllen = sizeof(control);

l = recvmsg(fd, &msg, MSG_CMSG_CLOEXEC);

for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_CREDENTIALS)
                memcpy(&creds, CMSG_DATA(cmsg), sizeof(creds));
}

printf("Message: %s (length: %zd uid: %u sender: %s)\n",
       message, l, creds.uid, sender.sun_path + 1);
free(message);

That's it. With this in place, we can easily send arbitrary messages between our peers. We have no length restriction, we can identify the peers reliably and we're not limited by any marshaling. Sure, we now dropped error-handling, event-loop integration and ignored some nasty corner cases, but that can all be solved. The code stays mostly the same.

Congratulations! You now understand kdbus. The concepts map almost directly onto what we just did with AF_UNIX.

Granted, the code will look slightly different. However, the concept stays the same. You still transmit raw messages; no marshaling is mandated. You can transmit credentials and file descriptors, you can specify the peer to send messages to, and you get to do all of that through a single file descriptor. Doesn't sound complex, does it?

So if kdbus is actually just like AF_UNIX+SOCK_DGRAM, why use it?

kdbus

The AF_UNIX setup described above has some significant flaws:

kdbus solves all these issues. Some of these issues could be solved with AF_UNIX (which, btw., would look a lot like AF_NETLINK), some cannot. But more importantly, AF_UNIX was never designed as a shared bus and hence should not be used as such. kdbus, on the other hand, was designed with a shared bus model in mind and as such avoids most of these issues.

With that basic understanding of kdbus, next time a news source reports about the "crazy idea to shove DBus into the kernel", I hope you'll be able to judge for yourself. And if you want to know more, I recommend building the kernel documentation, or diving into the code.


20 Jun 2015 2:48pm GMT

18 Jun 2015

feedplanet.freedesktop.org

Lennart Poettering: The new sd-bus API of systemd

With the new v221 release of systemd we are declaring the sd-bus API shipped with systemd stable. sd-bus is our minimal D-Bus IPC C library, supporting as back-ends both classic socket-based D-Bus and kdbus. The library has been part of systemd for a while, but has only been used internally, since we wanted to have the liberty to still make API changes without affecting external consumers of the library. However, now we are confident to commit to a stable API for it, starting with v221.

In this blog story I hope to provide you with a quick overview on sd-bus, a short reiteration on D-Bus and its concepts, as well as a few simple examples how to write D-Bus clients and services with it.

What is D-Bus again?

Let's start with a quick reminder what D-Bus actually is: it's a powerful, generic IPC system for Linux and other operating systems. It knows concepts like buses, objects, interfaces, methods, signals, properties. It provides you with fine-grained access control, a rich type system, discoverability, introspection, monitoring, reliable multicasting, service activation, file descriptor passing, and more. There are bindings for numerous programming languages that are used on Linux.

D-Bus has been a core component of Linux systems for more than 10 years. It is certainly the most widely established high-level local IPC system on Linux. Since systemd's inception it has been the IPC system it exposes its interfaces on. And even before systemd, it was the IPC system Upstart used to expose its interfaces. It is used by GNOME, by KDE and by a variety of system components.

D-Bus refers to both a specification, and a reference implementation. The reference implementation provides both a bus server component, as well as a client library. While there are multiple other, popular reimplementations of the client library - for both C and other programming languages -, the only commonly used server side is the one from the reference implementation. (However, the kdbus project is working on providing an alternative to this server implementation as a kernel component.)

D-Bus is mostly used as local IPC, on top of AF_UNIX sockets. However, the protocol may be used on top of TCP/IP as well. It does not natively support encryption, hence using D-Bus directly on TCP is usually not a good idea. It is possible to combine D-Bus with a transport like ssh in order to secure it. systemd uses this to make many of its APIs accessible remotely.

A frequently asked question about D-Bus is why it exists at all, given that AF_UNIX sockets and FIFOs already exist on UNIX and have been used for a long time successfully. To answer this question let's make a comparison with popular web technology of today: what AF_UNIX/FIFOs are to D-Bus, TCP is to HTTP/REST. While AF_UNIX sockets/FIFOs only shovel raw bytes between processes, D-Bus defines actual message encoding and adds concepts like method call transactions, an object system, security mechanisms, multicasting and more.

From our 10+ years of experience with D-Bus we know today that while there are some areas where we can improve things (and we are working on that, both with kdbus and sd-bus), it generally appears to be a very well designed system that stood the test of time, aged well and is widely established. Today, if we sat down and designed a completely new IPC system incorporating all the experience and knowledge we gained with D-Bus, I am sure the result would be very close to what D-Bus already is.

Or in short: D-Bus is great. If you hack on a Linux project and need a local IPC, it should be your first choice. Not only because D-Bus is well designed, but also because there aren't many alternatives that can cover similar functionality.

Where does sd-bus fit in?

Let's discuss why sd-bus exists, how it compares with the other existing C D-Bus libraries and why it might be a library to consider for your project.

For C, there are two established, popular D-Bus libraries: libdbus, as it is shipped in the reference implementation of D-Bus, as well as GDBus, a component of GLib, the low-level tool library of GNOME.

Of the two libdbus is the much older one, as it was written at the time the specification was put together. The library was written with a focus on being portable and to be useful as back-end for higher-level language bindings. Both of these goals required the API to be very generic, resulting in a relatively baroque, hard-to-use API that lacks the bits that make it easy and fun to use from C. It provides the building blocks, but few tools to actually make it straightforward to build a house from them. On the other hand, the library is suitable for most use-cases (for example, it is OOM-safe making it suitable for writing lowest level system software), and is portable to operating systems like Windows or more exotic UNIXes.

GDBus is a much newer implementation. It has been written after considerable experience with using a GLib/GObject wrapper around libdbus. GDBus is implemented from scratch, shares no code with libdbus. Its design differs substantially from libdbus, it contains code generators to make it specifically easy to expose GObject objects on the bus, or talking to D-Bus objects as GObject objects. It translates D-Bus data types to GVariant, which is GLib's powerful data serialization format. If you are used to GLib-style programming then you'll feel right at home, hacking D-Bus services and clients with it is a lot simpler than using libdbus.

With sd-bus we now provide a third implementation, sharing no code with either libdbus or GDBus. For us, the focus was on providing kind of a middle ground between libdbus and GDBus: a low-level C library that actually is fun to work with, that has enough syntactic sugar to make it easy to write clients and services with, but on the other hand is more low-level than GDBus/GLib/GObject/GVariant. To be able to use it in systemd's various system-level components it needed to be OOM-safe and minimal. Another major point we wanted to focus on was supporting a kdbus back-end right from the beginning, in addition to the socket transport of the original D-Bus specification ("dbus1"). In fact, we wanted to design the library closer to kdbus' semantics than to dbus1's, wherever they are different, but still cover both transports nicely. In contrast to libdbus or GDBus portability is not a priority for sd-bus, instead we try to make the best of the Linux platform and expose specific Linux concepts wherever that is beneficial. Finally, performance was also an issue (though a secondary one): neither libdbus nor GDBus will win any speed records. We wanted to improve on performance (throughput and latency) -- but simplicity and correctness are more important to us. We believe the result of our work delivers our goals quite nicely: the library is fun to use, supports kdbus and sockets as back-end, is relatively minimal, and the performance is substantially better than both libdbus and GDBus.

To decide which of the three APIs to use for your C project, here are short guidelines:

(I am not covering C++ specifically here, this is all about plain C only. But do note: if you use Qt, then QtDBus is the D-Bus API of choice, being a wrapper around libdbus.)

Introduction to D-Bus Concepts

To the uninitiated, D-Bus usually appears to be a relatively opaque technology. It uses lots of concepts that appear unnecessarily complex and redundant at first sight. But actually, they make a lot of sense. Let's have a look:

So much for the various concepts D-Bus knows. Of course, all these new concepts might be overwhelming. Let's look at them from a different perspective. I assume many of the readers have an understanding of today's web technology, specifically HTTP and REST. Let's try to compare the concept of an HTTP request with the concept of a D-Bus method call:

Of course, comparing an HTTP request to a D-Bus method call is a bit like comparing apples and oranges. However, I think it's still useful to get a bit of a feeling of what maps to what.

From the shell

So much about the concepts and the gray theory behind them. Let's make this exciting, let's actually see how this feels on a real system.

For a while now, systemd has included a tool called busctl that is useful for exploring and interacting with the D-Bus object system. When invoked without parameters, it will show you a list of all peers connected to the system bus. (Use --user to see the peers of your user bus instead):

$ busctl
NAME                                       PID PROCESS         USER             CONNECTION    UNIT                      SESSION    DESCRIPTION
:1.1                                         1 systemd         root             :1.1          -                         -          -
:1.11                                      705 NetworkManager  root             :1.11         NetworkManager.service    -          -
:1.14                                      744 gdm             root             :1.14         gdm.service               -          -
:1.4                                       708 systemd-logind  root             :1.4          systemd-logind.service    -          -
:1.7200                                  17563 busctl          lennart          :1.7200       session-1.scope           1          -
[…]
org.freedesktop.NetworkManager             705 NetworkManager  root             :1.11         NetworkManager.service    -          -
org.freedesktop.login1                     708 systemd-logind  root             :1.4          systemd-logind.service    -          -
org.freedesktop.systemd1                     1 systemd         root             :1.1          -                         -          -
org.gnome.DisplayManager                   744 gdm             root             :1.14         gdm.service               -          -
[…]

(I have shortened the output a bit to keep things brief).

The list begins with a list of all peers currently connected to the bus. They are identified by peer names like ":1.11". These are called unique names in D-Bus nomenclature. Basically, every peer has a unique name, and they are assigned automatically when a peer connects to the bus. They are much like an IP address if you so will. You'll notice that a couple of peers are already connected, including our little busctl tool itself as well as a number of system services. The list then shows all actual services on the bus, identified by their service names (as discussed above; to discern them from the unique names these are also called well-known names). In many ways well-known names are similar to DNS host names, i.e. they are a friendlier way to reference a peer, but on the lower level they just map to an IP address, or in this comparison the unique name. Much like you can connect to a host on the Internet by either its host name or its IP address, you can also connect to a bus peer either by its unique or its well-known name. (Note that each peer can have as many well-known names as it likes, much like an IP address can have multiple host names referring to it).

OK, that's already kinda cool. Try it for yourself, on your local machine (all you need is a recent, systemd-based distribution).

Let's now go the next step. Let's see which objects the org.freedesktop.login1 service actually offers:

$ busctl tree org.freedesktop.login1
└─/org/freedesktop/login1
  ├─/org/freedesktop/login1/seat
  │ ├─/org/freedesktop/login1/seat/seat0
  │ └─/org/freedesktop/login1/seat/self
  ├─/org/freedesktop/login1/session
  │ ├─/org/freedesktop/login1/session/_31
  │ └─/org/freedesktop/login1/session/self
  └─/org/freedesktop/login1/user
    ├─/org/freedesktop/login1/user/_1000
    └─/org/freedesktop/login1/user/self

Pretty, isn't it? What's actually even nicer, and which the output does not show is that there's full command line completion available: as you press TAB the shell will auto-complete the service names for you. It's a real pleasure to explore your D-Bus objects that way!

The output shows some objects that you might recognize from the explanations above. Now, let's go further. Let's see what interfaces, methods, signals and properties one of these objects actually exposes:

$ busctl introspect org.freedesktop.login1 /org/freedesktop/login1/session/_31
NAME                                TYPE      SIGNATURE RESULT/VALUE                             FLAGS
org.freedesktop.DBus.Introspectable interface -         -                                        -
.Introspect                         method    -         s                                        -
org.freedesktop.DBus.Peer           interface -         -                                        -
.GetMachineId                       method    -         s                                        -
.Ping                               method    -         -                                        -
org.freedesktop.DBus.Properties     interface -         -                                        -
.Get                                method    ss        v                                        -
.GetAll                             method    s         a{sv}                                    -
.Set                                method    ssv       -                                        -
.PropertiesChanged                  signal    sa{sv}as  -                                        -
org.freedesktop.login1.Session      interface -         -                                        -
.Activate                           method    -         -                                        -
.Kill                               method    si        -                                        -
.Lock                               method    -         -                                        -
.PauseDeviceComplete                method    uu        -                                        -
.ReleaseControl                     method    -         -                                        -
.ReleaseDevice                      method    uu        -                                        -
.SetIdleHint                        method    b         -                                        -
.TakeControl                        method    b         -                                        -
.TakeDevice                         method    uu        hb                                       -
.Terminate                          method    -         -                                        -
.Unlock                             method    -         -                                        -
.Active                             property  b         true                                     emits-change
.Audit                              property  u         1                                        const
.Class                              property  s         "user"                                   const
.Desktop                            property  s         ""                                       const
.Display                            property  s         ""                                       const
.Id                                 property  s         "1"                                      const
.IdleHint                           property  b         true                                     emits-change
.IdleSinceHint                      property  t         1434494624206001                         emits-change
.IdleSinceHintMonotonic             property  t         0                                        emits-change
.Leader                             property  u         762                                      const
.Name                               property  s         "lennart"                                const
.Remote                             property  b         false                                    const
.RemoteHost                         property  s         ""                                       const
.RemoteUser                         property  s         ""                                       const
.Scope                              property  s         "session-1.scope"                        const
.Seat                               property  (so)      "seat0" "/org/freedesktop/login1/seat... const
.Service                            property  s         "gdm-autologin"                          const
.State                              property  s         "active"                                 -
.TTY                                property  s         "/dev/tty1"                              const
.Timestamp                          property  t         1434494630344367                         const
.TimestampMonotonic                 property  t         34814579                                 const
.Type                               property  s         "x11"                                    const
.User                               property  (uo)      1000 "/org/freedesktop/login1/user/_1... const
.VTNr                               property  u         1                                        const
.Lock                               signal    -         -                                        -
.PauseDevice                        signal    uus       -                                        -
.ResumeDevice                       signal    uuh       -                                        -
.Unlock                             signal    -         -                                        -

As before, the busctl command supports command line completion, hence both the service name and the object path used are easily put together on the shell simply by pressing TAB. The output shows the methods, properties, and signals of one of the session objects that are currently made available by systemd-logind. There's a section for each interface the object implements. The second column tells you what kind of member is shown in the line. The third column shows the signature of the member. For method calls that's the input parameters; the fourth column shows what is returned. For properties, the fourth column shows their current value.

So far, we just explored. Let's take the next step now: let's become active - let's call a method:

# busctl call org.freedesktop.login1 /org/freedesktop/login1/session/_31 org.freedesktop.login1.Session Lock

I don't think I need to mention this anymore, but anyway: again there's full command line completion available. The third argument is the interface name, the fourth the method name; both can be easily completed by pressing TAB. In this case we picked the Lock method, which activates the screen lock for the specific session. And yup, the instant I pressed enter on this line my screen lock turned on (this only works on desktop environments that correctly hook into systemd-logind. GNOME works fine, and KDE should work too).

The Lock method call we picked is very simple, as it takes no parameters and returns none. Of course, it can get more complicated for some calls. Here's another example, this time using one of systemd's own bus calls, to start an arbitrary system unit:

# busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartUnit ss "cups.service" "replace"
o "/org/freedesktop/systemd1/job/42684"

This call takes two strings as input parameters, as we denote in the signature string that follows the method name (as usual, command line completion helps you get this right). Following the signature, the next two arguments are simply the two strings to pass. The specified signature string hence indicates what comes next. systemd's StartUnit method call takes the unit name to start as its first parameter, and the mode in which to start it as its second. The call returned a single object path value. It is encoded the same way as the input parameters: a signature (just o for the object path) followed by the actual value.

Of course, some method call parameters can get a ton more complex, but with busctl it's relatively easy to encode them all. See the man page for details.

busctl knows a number of other operations. For example, you can use it to monitor D-Bus traffic as it happens (including generating a .cap file for use with Wireshark!) or you can set or get specific properties. However, this blog story was supposed to be about sd-bus, not busctl, hence let's cut this short here, and let me direct you to the man page in case you want to know more about the tool.

busctl (like the rest of systemd) is implemented using the sd-bus API. Thus it exposes many of the features of sd-bus itself. For example, you can use it to connect to remote or container buses. It understands both kdbus and classic D-Bus, and more!

sd-bus

But enough! Let's get back on topic, let's talk about sd-bus itself.

The sd-bus set of APIs is mostly contained in the header file sd-bus.h.

Here's a random selection of features of the library that make it compare well with the other implementations available.

The API is currently not fully documented, but we are working on completing the set of manual pages. For details see all pages starting with sd_bus_.
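
One capability worth illustrating is the one mentioned above: sd-bus can connect not only to the local system and user buses, but also directly to the system bus of a local container, via sd_bus_open_system_machine(). Here's a minimal sketch of that (the container name "mycontainer" is made up; substitute the name of a machine registered with systemd-machined):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus *bus = NULL;
        int r;

        /* Connect to the system bus inside the container "mycontainer"
         * (a hypothetical name; it must refer to a running, registered machine) */
        r = sd_bus_open_system_machine(&bus, "mycontainer");
        if (r < 0) {
                fprintf(stderr, "Failed to connect to container bus: %s\n", strerror(-r));
                return EXIT_FAILURE;
        }

        printf("Connected to the container's system bus.\n");

        sd_bus_unref(bus);
        return EXIT_SUCCESS;
}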

Invoking a Method, from C, with sd-bus

So much about the library in general. Here's an example of connecting to the bus and issuing a method call:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *m = NULL;
        sd_bus *bus = NULL;
        const char *path;
        int r;

        /* Connect to the system bus */
        r = sd_bus_open_system(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Issue the method call and store the response message in m */
        r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",           /* service to contact */
                               "/org/freedesktop/systemd1",          /* object path */
                               "org.freedesktop.systemd1.Manager",   /* interface name */
                               "StartUnit",                          /* method name */
                               &error,                               /* object to return error in */
                               &m,                                   /* return message on success */
                               "ss",                                 /* input signature */
                               "cups.service",                       /* first argument */
                               "replace");                           /* second argument */
        if (r < 0) {
                fprintf(stderr, "Failed to issue method call: %s\n", error.message);
                goto finish;
        }

        /* Parse the response message */
        r = sd_bus_message_read(m, "o", &path);
        if (r < 0) {
                fprintf(stderr, "Failed to parse response message: %s\n", strerror(-r));
                goto finish;
        }

        printf("Queued service job as %s.\n", path);

finish:
        sd_bus_error_free(&error);
        sd_bus_message_unref(m);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

Save this example as bus-client.c, then build it with:

$ gcc bus-client.c -o bus-client `pkg-config --cflags --libs libsystemd`

This will generate a binary bus-client you can now run. Make sure to run it as root though, since access to the StartUnit method is privileged:

# ./bus-client
Queued service job as /org/freedesktop/systemd1/job/3586.

And that's it already, our first example. It showed how we invoked a method call on the bus. The actual function call of the method is very close to the busctl command line we used before. I hope the code excerpt needs little further explanation. It's supposed to give you a taste of how to write D-Bus clients with sd-bus. For more information please have a look at the header file, the man page or even the sd-bus sources.
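
As a small variation on the same pattern, reading a D-Bus property does not require assembling a method call by hand; sd-bus provides convenience helpers such as sd_bus_get_property_string(). The following is just a sketch (check sd-bus.h for the exact set of helpers available in your version), reading the Version property of systemd's manager object, which is an unprivileged operation:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus *bus = NULL;
        char *version = NULL;
        int r;

        /* Connect to the system bus */
        r = sd_bus_open_system(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Read the Version property (type "s") of the systemd manager object;
         * the returned string is allocated for us and must be freed */
        r = sd_bus_get_property_string(bus,
                                       "org.freedesktop.systemd1",         /* service to contact */
                                       "/org/freedesktop/systemd1",        /* object path */
                                       "org.freedesktop.systemd1.Manager", /* interface name */
                                       "Version",                          /* property name */
                                       &error,                             /* object to return error in */
                                       &version);                          /* return value */
        if (r < 0) {
                fprintf(stderr, "Failed to read property: %s\n", error.message);
                goto finish;
        }

        printf("systemd version: %s\n", version);

finish:
        free(version);
        sd_bus_error_free(&error);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}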

Implementing a Service, in C, with sd-bus

Of course, just calling a single method is a rather simplistic example. Let's have a look at how to write a bus service. We'll write a small calculator service that exposes a single object, implementing an interface with two methods: one to multiply two 64bit signed integers, and one to divide one 64bit signed integer by another.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <systemd/sd-bus.h>

static int method_multiply(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Reply with the response */
        return sd_bus_reply_method_return(m, "x", x * y);
}

static int method_divide(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Return an error on division by zero */
        if (y == 0) {
                sd_bus_error_set_const(ret_error, "net.poettering.DivisionByZero", "Sorry, can't allow division by zero.");
                return -EINVAL;
        }

        return sd_bus_reply_method_return(m, "x", x / y);
}

/* The vtable of our little object, implements the net.poettering.Calculator interface */
static const sd_bus_vtable calculator_vtable[] = {
        SD_BUS_VTABLE_START(0),
        SD_BUS_METHOD("Multiply", "xx", "x", method_multiply, SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_METHOD("Divide",   "xx", "x", method_divide,   SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_VTABLE_END
};

int main(int argc, char *argv[]) {
        sd_bus_slot *slot = NULL;
        sd_bus *bus = NULL;
        int r;

        /* Connect to the user bus this time */
        r = sd_bus_open_user(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Install the object */
        r = sd_bus_add_object_vtable(bus,
                                     &slot,
                                     "/net/poettering/Calculator",  /* object path */
                                     "net.poettering.Calculator",   /* interface name */
                                     calculator_vtable,
                                     NULL);
        if (r < 0) {
                fprintf(stderr, "Failed to issue method call: %s\n", strerror(-r));
                goto finish;
        }

        /* Take a well-known service name so that clients can find us */
        r = sd_bus_request_name(bus, "net.poettering.Calculator", 0);
        if (r < 0) {
                fprintf(stderr, "Failed to acquire service name: %s\n", strerror(-r));
                goto finish;
        }

        for (;;) {
                /* Process requests */
                r = sd_bus_process(bus, NULL);
                if (r < 0) {
                        fprintf(stderr, "Failed to process bus: %s\n", strerror(-r));
                        goto finish;
                }
                if (r > 0) /* we processed a request, try to process another one right away */
                        continue;

                /* Wait for the next request to process */
                r = sd_bus_wait(bus, (uint64_t) -1);
                if (r < 0) {
                        fprintf(stderr, "Failed to wait on bus: %s\n", strerror(-r));
                        goto finish;
                }
        }

finish:
        sd_bus_slot_unref(slot);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

Save this example as bus-service.c, then build it with:

$ gcc bus-service.c -o bus-service `pkg-config --cflags --libs libsystemd`

Now, let's run it:

$ ./bus-service

In another terminal, let's try to talk to it. Note that this service is now on the user bus, not on the system bus as before. We do this for simplicity's sake: on the system bus, access to services is tightly controlled so unprivileged clients cannot request privileged operations. On the user bus, however, things are simpler: since only processes of the user owning the bus can connect, no further policy enforcement will complicate this example. Because the service is on the user bus, we have to pass the --user switch on the busctl command line. Let's start by looking at the service's object tree.

$ busctl --user tree net.poettering.Calculator
└─/net/poettering/Calculator

As we can see, there's only a single object on the service, which is not surprising, given that our code above only registered one. Let's see the interfaces and the members this object exposes:

$ busctl --user introspect net.poettering.Calculator /net/poettering/Calculator
NAME                                TYPE      SIGNATURE RESULT/VALUE FLAGS
net.poettering.Calculator           interface -         -            -
.Divide                             method    xx        x            -
.Multiply                           method    xx        x            -
org.freedesktop.DBus.Introspectable interface -         -            -
.Introspect                         method    -         s            -
org.freedesktop.DBus.Peer           interface -         -            -
.GetMachineId                       method    -         s            -
.Ping                               method    -         -            -
org.freedesktop.DBus.Properties     interface -         -            -
.Get                                method    ss        v            -
.GetAll                             method    s         a{sv}        -
.Set                                method    ssv       -            -
.PropertiesChanged                  signal    sa{sv}as  -            -

The sd-bus library automatically added a couple of generic interfaces, as mentioned above. But the first interface we see is actually the one we added! It shows our two methods; both take "xx" (two 64bit signed integers) as input parameters and return a single "x". Great! But does it work?

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Multiply xx 5 7
x 35

Woohoo! We passed the two integers 5 and 7, and the service actually multiplied them for us and returned a single integer 35! Let's try the other method:

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 99 17
x 5

Oh, wow! It can even do integer division! Fantastic! But let's trick it into dividing by zero:

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 43 0
Sorry, can't allow division by zero.

Nice! It detected the problem and returned a clean error about it. If you look at the source code example above, you'll see precisely how we generated the error.
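
For completeness, here's how the same Multiply call might look from C, using the very same sd_bus_call_method() pattern as in the client example earlier. This is just a minimal sketch and assumes the calculator service above is already running on the user bus:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *reply = NULL;
        sd_bus *bus = NULL;
        int64_t result;
        int r;

        /* The calculator service registered itself on the user bus, hence connect there */
        r = sd_bus_open_user(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to user bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Call Multiply(5, 7); note the explicit casts, as "x" expects 64bit integers */
        r = sd_bus_call_method(bus,
                               "net.poettering.Calculator",   /* service to contact */
                               "/net/poettering/Calculator",  /* object path */
                               "net.poettering.Calculator",   /* interface name */
                               "Multiply",                    /* method name */
                               &error,                        /* object to return error in */
                               &reply,                        /* return message on success */
                               "xx",                          /* input signature */
                               (int64_t) 5,
                               (int64_t) 7);
        if (r < 0) {
                fprintf(stderr, "Multiply failed: %s\n", error.message);
                goto finish;
        }

        /* Parse the single "x" return value */
        r = sd_bus_message_read(reply, "x", &result);
        if (r < 0) {
                fprintf(stderr, "Failed to parse reply: %s\n", strerror(-r));
                goto finish;
        }

        printf("Multiply returned %lld.\n", (long long) result);

finish:
        sd_bus_error_free(&error);
        sd_bus_message_unref(reply);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

If a call fails with a D-Bus error, such as the division-by-zero case we just triggered, the error ends up in the sd_bus_error structure, and its name and message fields carry exactly the error name and text the service set with sd_bus_error_set_const().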

And that's really all I have for today. Of course, the examples I showed are short, and I don't go into detail here on what precisely each line does. However, this is supposed to be a short introduction to D-Bus and sd-bus, and it's already way too long for that …

I hope this blog story was useful to you. If you are interested in using sd-bus for your own programs, I hope this gets you started. If you have further questions, check the (incomplete) man pages, and ask us on IRC or the systemd mailing list. If you need more examples, have a look at the systemd source tree; all of systemd's many bus services use sd-bus extensively.

18 Jun 2015 10:00pm GMT