20 Nov 2019

Alberto Ruiz: Hanging the Red Hat

This is an extract of an email I just sent internally at Red Hat that I wanted to share with the wider GNOME and Freedesktop community.

After 6+ wonderful years at Red Hat, I've decided to hang up the fedora and go try new things. For a while I've been craving a new challenge and feeling the urge to try things outside the scope of Red Hat, so with great hesitation I've finally made the jump.

I am extremely proud of the work done by the teams I have had the honour to run as engineering manager. I met wonderful people, worked with extremely talented engineers and learned lots. I am particularly proud of the achievements of my latest team: from growing the bootloader team and improving our relationship with GRUB upstream, to our wins in teaching Lenovo how to do upstream hardware support, to improvements in Thunderbolt, Miracast, Fedora/RHEL VirtualBox guest compatibility… the list goes on, and the credit goes mostly to my amazing team.

Thanks to this job I have been able to reach out to other upstreams beyond GNOME, like Fedora, LibreOffice, the Linux Kernel, Rust, GRUB… it has been an amazing ride and I've met wonderful people in each one of them.

I would also like to give a special mention to my manager, Christian Schaller, who has supported me all the way, both professionally and personally. There is this saying: "people do not leave companies, they leave managers". Well, this is certainly not the case here; in Christian I have found not only a great manager but a true friend.

As for my experience at Red Hat: I had never lasted more than 2 years in the same spot before, but I truly found my place there. Deep in my heart I know I will always be a Red Hatter, but there are some things I want to try and learn elsewhere. This has been the hardest departure I have ever had, and in many ways it breaks my heart to leave. If you are considering joining Red Hat, do not hesitate; there is no better place to write and advocate for Free Software.

I will announce what I will be doing next once I start in December.

20 Nov 2019 6:22pm GMT

19 Nov 2019

Nick Richards: Linux Application Summit 2019

I was lucky enough to be sponsored by the GNOME Foundation to attend the 2019 Linux Application Summit, hosted in Barcelona between November 12th and 15th 2019.

Sponsored by the GNOME Foundation

It was a great conference with a diverse crew of people who all care about making apps on Linux better. I particularly enjoyed Frank's keynote on Linux apps from the perspective of Nextcloud, an Actual ISV. Also worth your time are Rob's talk on how Flathub would like to help more developers earn money from their work; Adrien on GTK and scalable UIs for phones; Robin on tone of voice and copywriting; Emel on Product Management in the context of GNOME Recipes; and Paul Brown on direct language and better communication. There were also great lightning talks, including a starring turn by one of my former colleagues, Martin Abente Lahaye, who showed off the work he's been doing to make the Sugar educational applications more widely available with Flatpak. After a bit of review and some polish in the cafe they're now starting to appear on Flathub. All of these videos are available to watch in the YouTube livestream playback, and I'm sure they'll appear individually soon once appropriately processed.

I gave a talk entitled Product Management In Open Source. Astute readers will recognise the title from the similar talk I gave last year at GUADEC; however, the content is actually fairly different. Emel's talk that I mentioned above covered quite a lot of the basic material, so I concentrated more on how individual app developers could use Product Management techniques to make their own practice a bit more deliberate and help them guide and prioritize their work.

If you'd like to watch the video then you're more than welcome, although I must warn you that I managed to lose my voice a couple of days before so my delivery isn't exactly what I would wish for. Still, I think you'll find it worthwhile.

19 Nov 2019 10:25pm GMT

Jussi Pakkanen: Some intricacies of ABI stability

There is a big discussion ongoing in the C++ world about ABI stability. People want to make a release of the standard that does a big ABI break, so a lot of old cruft can be removed and made better. This is a big and arduous task, which has a lot of "fun" and interesting edge, corner and hypercorner cases. It might be interesting to look at some of the lesser known ones (this post is not exhaustive, not by a long shot). All information here is specific to Linux, but other OSs should be roughly similar.

The first surprising thing to note is that nobody really cares about ABI stability. Even the people who defend stable ABIs in the committee do not care about ABI stability as such. What they do care about is that existing programs keep on working. A stable ABI is just a tool in making that happen. For many problems it is seemingly the only tool. Nevertheless, ABI stability is not the end goal. If the same outcome can be achieved via some other mechanism, then it can be used instead. Thinking about this for a while leads us to the following idea:

Since "C++ness" is just linking against libstdc++.so, could we not create a new one, say libstdc++2.so, that has a completely different ABI (and even API), build new apps against that and keep the old one around for running old apps?

The answer to this question turns out to be yes. Even better, you can already do this today on any recent Debian based distribution (and probably most other distros too, but I have not tested). By default the Clang C++ compiler shipped by the distros uses the GNU C++ standard library. However you can install the libc++ stdlib via system packages and use it with the -stdlib=libc++ command line argument. If you go even deeper, you find that the GNU standard library's name is libstdc++.so.6, meaning that it has already had five ABI breaking updates.

So … problem solved then? No, not really.

Problem #1: the ABI boundary

Suppose you have a shared library built against the old ABI that exports a function that looks like this:

void do_something(const std::unordered_map<int, int> &m);

If you build code with the new ABI and call this function, the bit representation of the unordered map causes problems. The caller has a pointer to a bunch of bits in the new representation whereas the callee expects bits in the old representation. This code compiles and links but will invoke UB at runtime when called and, at best, crash your app.
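
To make the mismatch concrete, here is a sketch; the file split and the libstdc++.so.7 soname are made up for illustration and are not from the original discussion:

// old_lib.cpp, built long ago against the old ABI (libstdc++.so.6)
#include <unordered_map>

void do_something(const std::unordered_map<int, int> &m) {
    // walks the hash table assuming the old bit layout
}

// new_app.cpp, built against a hypothetical new ABI (libstdc++.so.7)
#include <unordered_map>

void do_something(const std::unordered_map<int, int> &m); // same declaration

int main() {
    std::unordered_map<int, int> m{{1, 2}};
    do_something(m); // compiles and links, but the callee reinterprets the new
                     // layout as the old one: undefined behaviour
}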

Problem #2: the hidden symbols

This one is a bit complicated and needs some background information. Suppose we have a shared library foo that is implemented in C++ but exposes a plain C API. Internally it makes calls to the C++ standard library. We also have a main program that uses said library. So far, so good, everything works.

Let's add a second shared library called bar that is also implemented in C++ and exposes a C API. We can link the main app against both these libraries, call them, and everything works.

Now comes the twist. Let's compile the bar library against a new C++ ABI, while foo stays on the old one.

A project mimicking this setup can be obtained from this Github repo. In it the abi1 and abi2 libraries both export a function with the same name that returns an int that is either 1 or 2. Libraries foo and bar check the return value and print a message saying whether they got the value they were expecting. It should be reiterated that the use of the abi libraries is fully internal: nothing about them leaks to the exposed interface. But when we compile and run the program that calls both libraries, we get the following output snippet:

Foo invoked the correct ABI function.
Bar invoked the wrong ABI function.

What has happened is that both libraries invoked the function from abi1, which means that in the real world bar would have crashed in the same way as in problem #1. Alternatively both libraries could have called abi2, which would have broken foo. Determining when this happens is left as an exercise to the reader.

The reason this happens is that the functions in abi1 and abi2 have the same mangled name, and that symbol lookup is global to a process. Once any given name has been resolved, all uses of it anywhere in the same process will point to the same entity. This happens even for non-weak symbols.
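
A minimal sketch of the collision; this is a guess at the shape of the linked example project, and the function name and values are illustrative:

// abi1.cpp, compiled into libabi1.so (the "old ABI" helper)
int abi_version() { return 1; }   // mangles to _Z11abi_versionv

// abi2.cpp, compiled into libabi2.so (the "new ABI" helper)
int abi_version() { return 2; }   // mangles to the exact same _Z11abi_versionv

// libfoo.so links against libabi1.so and expects 1.
// libbar.so links against libabi2.so and expects 2.
// At runtime the dynamic linker resolves _Z11abi_versionv once per process,
// so both foo and bar end up calling whichever copy is found first in the
// search order, and one of them gets the wrong answer.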

Can this be solved?

As far as I know, there is no known real-world solution to this problem that would scale to a full operating system (i.e. all of Debian, FreeBSD or the like). If there are any university professors reading this who need problems for their grad students, this could be one of them. The problem itself is fairly simple to formulate: make it possible to run two different, ABI incompatible C++ standard libraries within one process. The solution will probably require changes in the compiler, linker and runtime loader. For example, you might extend symbol resolution rules so that they are not global, but instead symbols from, say, library bar would first be looked up in its direct descendants (in this case only abi2) and only after that in other parts of the tree.

To get you started, here is one potential solution I came up with while writing this post. I have no idea if it actually works, but I could not come up with an obvious thing that would break. I sadly don't have the time or know-how to implement this, but hopefully someone else has.

Let's start by defining that the new ABI is tied to C++23 for simplicity. That is, code compiled with -std=c++23 uses the new ABI and links against libstdc++.so.7, whereas older standard versions use the old ABI. Then we take the Itanium ABI specification and change it so that all mangled names start with, say, _^ rather than _Z as currently. Now we are done. The different ABIs mangle to different names and thus can coexist inside the same process without problems. One would probably need to do some magic inside the standard library implementations so they don't trample on each other.

The only problem this does not solve is calling a shared library with a different ABI. This can be worked around by writing small wrapper functions that expose an internal "C-like" interface and can call external functions directly. These can be linked inside the same library without problems because the two standard libraries can be linked in the same shared library just fine. There is a bit of a performance and maintenance penalty during the transition, but it will go away once all code is rebuilt with the new ABI.
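
As a sketch of such a wrapper (hypothetical names, reusing the do_something from problem #1): it is compiled with the old ABI, lives next to the new-ABI code in the same shared library, and only plain types cross the call boundary.

// wrapper.cpp, compiled against the old ABI (the same ABI as the external library)
#include <cstddef>
#include <unordered_map>

void do_something(const std::unordered_map<int, int> &m); // old-ABI external function

extern "C" void do_something_flat(const int *keys, const int *values, std::size_t n) {
    std::unordered_map<int, int> m;   // built with the old ABI's layout
    for (std::size_t i = 0; i < n; ++i)
        m.emplace(keys[i], values[i]);
    do_something(m);                  // safe: both sides agree on the layout
}

New-ABI code elsewhere in the same library can then call do_something_flat() without any std:: type crossing the ABI boundary.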

Even with this, the transition is not a lightweight operation. But if you plan properly ahead and do the switch, say, once every two standard releases (six years), it should be doable.

19 Nov 2019 9:05pm GMT

Federico Mena-Quintero: Refactoring the Length type

CSS length values have a number and a unit, e.g. 5cm or 6px. Sometimes the unit is a percentage, like 50%, and SVG says that lengths with percentage units should be resolved with respect to a certain rectangle. For example, consider this circle element:

<circle cx="50%" cy="75%" r="4px" fill="black"/>

This means, draw a solid black circle whose center is at 50% of the width and 75% of the height of the current viewport. The circle should have a 4-pixel radius.

The process of converting that kind of units into absolute pixels for the final drawing is called normalization. In SVG, percentage units sometimes need to be normalized with respect to the current viewport (a local coordinate system), or with respect to the size of another object (e.g. when a clipping path is used to cut the current shape in half).

One detail about normalization is that it can be with respect to the horizontal dimension of the current viewport, the vertical dimension, or both. Keep this in mind: at normalization time, we need to be able to distinguish between those three modes.

The original C version

I have talked about the original C code for lengths before; the following is a small summary.

The original C code had this struct to represent lengths:

typedef struct {
    double length;
    char factor;
} RsvgLength;

The parsing code would set the factor field to a character depending on the length's unit: 'p' for percentages, 'i' for inches, etc., and '\0' for the default unit, which is pixels.

Along with that, the normalization code needed to know the direction (horizontal, vertical, both) to which the length in question refers. It did this by taking another character as an argument to the normalization function:

double
_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, char dir)
{
    if (in->factor == '\0')            /* pixels, no need to normalize */
        return in->length;
    else if (in->factor == 'p') {      /* percentages; need to consider direction */
        if (dir == 'h')                                     /* horizontal */
            return in->length * ctx->vb.rect.width;
        if (dir == 'v')                                     /* vertical */
            return in->length * ctx->vb.rect.height;
        if (dir == 'o')                                     /* both */
            return in->length * rsvg_viewport_percentage (ctx->vb.rect.width,
                                                          ctx->vb.rect.height);
    } else { ... }
}

The original post talks about how I found a couple of bugs with how the directions are identified at normalization time. The function above expects one of 'h'/'v'/'o' for horizontal/vertical/both, and one or two places in the code passed the wrong character.

Making the C version cleaner

Before converting that code to Rust, I removed the pesky characters and made the code use proper enums to identify a length's units.

+typedef enum {
+    LENGTH_UNIT_DEFAULT,
+    LENGTH_UNIT_PERCENT,
+    LENGTH_UNIT_FONT_EM,
+    LENGTH_UNIT_FONT_EX,
+    LENGTH_UNIT_INCH,
+    LENGTH_UNIT_RELATIVE_LARGER,
+    LENGTH_UNIT_RELATIVE_SMALLER
+} LengthUnit;
+
 typedef struct {
     double length;
-    char factor;
+    LengthUnit unit;
 } RsvgLength;

Then, do the same for the normalization function, so it will get the direction in which to normalize as an enum instead of a char.

+typedef enum {
+    LENGTH_DIR_HORIZONTAL,
+    LENGTH_DIR_VERTICAL,
+    LENGTH_DIR_BOTH
+} LengthDir;

 double
-_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, char dir)
+_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, LengthDir dir)

Making the C version easier to get right

While doing the last change above, I found a place in the code that used the wrong direction by mistake, probably due to a cut&paste error. Part of the problem here is that the code was specifying the direction at normalization time.

I decided to change it so that each length value carried its own direction from initialization, so that subsequent code wouldn't have to worry about it. Hopefully, whoever initialized a width field would find it obvious that it needed LENGTH_DIR_HORIZONTAL.

 typedef struct {
     double length;
     LengthUnit unit;
+    LengthDir dir;
 } RsvgLength;

That is, instead of

  /* at initialization time */
  foo.width = _rsvg_css_parse_length (str);

  ...

  /* at rendering time */
  double final_width = _rsvg_css_normalize_length (&foo.width, ctx, LENGTH_DIR_HORIZONTAL);

we would now do this:

  /* at initialization time */
  foo.width = _rsvg_css_parse_length (str, LENGTH_DIR_HORIZONTAL);

  ...

  /* at rendering time */
  double final_width = _rsvg_css_normalize_length (&foo.width, ctx);

This made the drawing code, which deals with a lot of coordinates at the same time, a lot less noisy.

Initial port to Rust

To recap, this was the state of the structs after the initial refactoring in C:

typedef enum {
    LENGTH_UNIT_DEFAULT,
    LENGTH_UNIT_PERCENT,
    LENGTH_UNIT_FONT_EM,
    LENGTH_UNIT_FONT_EX,
    LENGTH_UNIT_INCH,
    LENGTH_UNIT_RELATIVE_LARGER,
    LENGTH_UNIT_RELATIVE_SMALLER
} LengthUnit;

typedef enum {
    LENGTH_DIR_HORIZONTAL,
    LENGTH_DIR_VERTICAL,
    LENGTH_DIR_BOTH
} LengthDir;

typedef struct {
    double length;
    LengthUnit unit;
    LengthDir dir;
} RsvgLength;

This ported to Rust in a straightforward fashion:

pub enum LengthUnit {
    Default,
    Percent,
    FontEm,
    FontEx,
    Inch,
    RelativeLarger,
    RelativeSmaller
}

pub enum LengthDir {
    Horizontal,
    Vertical,
    Both
}

pub struct RsvgLength {
    length: f64,
    unit: LengthUnit,
    dir: LengthDir
}

It got a similar constructor that took the direction and produced an RsvgLength:

impl RsvgLength {
    pub fn parse (string: &str, dir: LengthDir) -> RsvgLength { ... }
}

(This was before using Result; remember that the original C code did very little error checking!)

The initial Parse trait

It was at that point that it seemed convenient to introduce a Parse trait, which all CSS value types would implement to parse themselves from string.

However, parsing an RsvgLength also needed an extra piece of data, the LengthDir. My initial version of the Parse trait had an associated type called Data, through which one could pass an extra piece of data during parsing/initialization:

pub trait Parse: Sized {
    type Data;
    type Err;

    fn parse (s: &str, data: Self::Data) -> Result<Self, Self::Err>;
}

This was explicitly to be able to pass a LengthDir to the parser for RsvgLength:

impl Parse for RsvgLength {
    type Data = LengthDir;
    type Err = AttributeError;

    fn parse (string: &str, dir: LengthDir) -> Result <RsvgLength, AttributeError> { ... }
}

This was okay for lengths, but very noisy for everything else that didn't require an extra bit of data. In the rest of the code, the helper type was Data = () and there was a pair of extra parentheses () in every place that parse() was called.

Removing the helper Data type

Introducing one type per direction

Over a year later, that () bit of data everywhere was driving me nuts. I started refactoring the Length module to remove it.

First, I introduced three newtypes to wrap Length, and indicate their direction at the same time:

pub struct LengthHorizontal(Length);
pub struct LengthVertical(Length);
pub struct LengthBoth(Length);

This was done with a macro because now each wrapper type needed to know the relevant LengthDir.
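
A sketch of what such a macro could look like (this is not the actual librsvg macro, which also generated parsing and conversion helpers; the exact signatures here are assumptions):

macro_rules! define_length_type {
    ($name:ident, $dir:expr) => {
        pub struct $name(Length);

        impl $name {
            pub fn parse(s: &str) -> Result<$name, AttributeError> {
                // Each newtype hard-codes its LengthDir at parse time.
                Length::parse(s, $dir).map($name)
            }
        }
    };
}

define_length_type!(LengthHorizontal, LengthDir::Horizontal);
define_length_type!(LengthVertical, LengthDir::Vertical);
define_length_type!(LengthBoth, LengthDir::Both);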

Now, for example, the declaration for the <circle> element looked like this:

pub struct NodeCircle {
    cx: Cell<LengthHorizontal>,
    cy: Cell<LengthVertical>,
    r: Cell<LengthBoth>,
}

(Ignore the Cell everywhere; we got rid of that later.)

Removing the dir field

Since now the information about the length's direction is embodied in the LengthHorizontal/LengthVertical/LengthBoth types, this made it possible to remove the dir field from the inner Length struct.

 pub struct RsvgLength {
     length: f64,
     unit: LengthUnit,
-    dir: LengthDir
 }

+pub struct LengthHorizontal(Length);
+pub struct LengthVertical(Length);
+pub struct LengthBoth(Length);
+
+define_length_type!(LengthHorizontal, LengthDir::Horizontal);
+define_length_type!(LengthVertical, LengthDir::Vertical);
+define_length_type!(LengthBoth, LengthDir::Both);

Note the use of a define_length_type! macro to generate code for those three newtypes.

Removing the Data associated type

And finally, this made it possible to remove the Data associated type from the Parse trait.

 pub trait Parse: Sized {
-    type Data;
     type Err;

-    fn parse(parser: &mut Parser<'_, '_>, data: Self::Data) -> Result<Self, Self::Err>;
+    fn parse(parser: &mut Parser<'_, '_>) -> Result<Self, Self::Err>;
 }

The resulting mega-commit removed a bunch of stray parentheses () from all calls to parse(), and the code ended up a lot easier to read.

Removing the newtypes

This was fine for a while. Recently, however, I figured out that it would be possible to embody the information for a length's direction in a different way.

But to get there, I first needed a temporary refactor.

Replacing the macro with a trait with a default implementation

Deep in the guts of length.rs, the key function that does something different based on LengthDir is its scaling_factor method:

enum LengthDir {
    Horizontal,
    Vertical,
    Both,
}

impl LengthDir {
    fn scaling_factor(self, x: f64, y: f64) -> f64 {
        match self {
            LengthDir::Horizontal => x,
            LengthDir::Vertical => y,
            LengthDir::Both => viewport_percentage(x, y),
        }
    }
}

That method gets passed, for example, the width/height of the current viewport for the x/y arguments. The method decides whether to use the width, height, or a combination of both.

And of course, the interesting part of the define_length_type! macro was to generate code that called scaling_factor() on the appropriate LengthDir variant (Horizontal, Vertical or Both), depending on the newtype in question.

First I made a trait called Orientation with a scaling_factor method, and three zero-sized types that implement that trait. Note how each of these three implementations corresponds to one of the match arms above:

pub trait Orientation {
    fn scaling_factor(x: f64, y: f64) -> f64;
}

pub struct Horizontal;
pub struct Vertical;
pub struct Both;

impl Orientation for Horizontal {
    fn scaling_factor(x: f64, _y: f64) -> f64 {
        x
    }
}

impl Orientation for Vertical {
    fn scaling_factor(_x: f64, y: f64) -> f64 {
        y
    }
}

impl Orientation for Both {
    fn scaling_factor(x: f64, y: f64) -> f64 {
        viewport_percentage(x, y)
    }
}

Now most of the contents of the define_length_type! macro can go in the default implementation of a new trait LengthTrait. Crucially, this trait has an Orientation associated type, which it uses to call into the Orientation trait:

pub trait LengthTrait: Sized {
    type Orientation: Orientation;

    ...

    fn normalize(&self, values: &ComputedValues, params: &ViewParams) -> f64 {
        match self.unit() {
            LengthUnit::Px => self.length(),

            LengthUnit::Percent => {
                self.length() *
                <Self::Orientation>::scaling_factor(params.view_box_width, params.view_box_height)
            }

            ...
    }
}

Note that the incantation is <Self::Orientation>::scaling_factor(...) to call that method on the associated type.

Now the define_length_type! macro is shrunk a lot, with the interesting part being just this:

macro_rules! define_length_type {
    {$name:ident, $orient:ty} => {
        pub struct $name(Length);

        impl LengthTrait for $name {
            type Orientation = $orient;
        }
    }
}

define_length_type! { LengthHorizontal, Horizontal }
define_length_type! { LengthVertical, Vertical }
define_length_type! { LengthBoth, Both }

We moved from having three newtypes of length-with-LengthDir to three newtypes with dir-as-associated-type.

Removing the newtypes and the macro

After that temporary refactoring, we had the Orientation trait and the three zero-sized types Horizontal, Vertical, Both.

I figured out that one can use PhantomData as a way to carry around the type that Length needs to normalize itself, instead of using an associated type in an extra LengthTrait. Behold!

pub struct Length<O: Orientation> {
    pub length: f64,
    pub unit: LengthUnit,
    orientation: PhantomData<O>,
}

impl<O: Orientation> Length<O> {
    pub fn normalize(&self, values: &ComputedValues, params: &ViewParams) -> f64 {
        match self.unit {
            LengthUnit::Px => self.length,

            LengthUnit::Percent => {
                self.length 
                    * <O as Orientation>::scaling_factor(params.view_box_width, params.view_box_height)
            }

            ...
        }
    }
}

Now the incantation is <O as Orientation>::scaling_factor() to call the method on the generic type; it is no longer an associated type in a trait.

With that, users of lengths look like this; here, our <circle> element from before:

pub struct Circle {
    cx: Length<Horizontal>,
    cy: Length<Vertical>,
    r: Length<Both>,
}

I'm very happy with the readability of all the code now. I used to think of PhantomData as a way to deal with wrapping pointers from C, but it turns out that it is also useful to keep a generic type around should one need it.

The final Length struct is this:

pub struct Length<O: Orientation> {
    pub length: f64,
    pub unit: LengthUnit,
    orientation: PhantomData<O>,
}

And it only takes up as much space as its length and unit fields; PhantomData is zero-sized after all.

(Later, we renamed Orientation to Normalize, but the code's structure remained the same.)

Summary

Over a couple of years, librsvg's type that represents CSS lengths went from a C representation along the lines of "all data in the world is an int", to a Rust representation that uses some interesting type trickery: an Orientation trait (later renamed to Normalize) with a scaling_factor method; three zero-sized types Horizontal, Vertical and Both that implement it; and a generic Length<O: Orientation> struct that carries its normalization direction in a PhantomData<O> field, at no runtime cost.

Refactoring never ends!

19 Nov 2019 4:01pm GMT

Richard Hughes: Growing the fwupd ecosystem

Yesterday I wrote a blog about what hardware vendors need to provide so I can write them a fwupd plugin. A few people contacted me telling me that I should make it more generic, as I shouldn't be the central point of failure in this whole ecosystem. The sensible thing, of course, is growing the "community" instead, and building up a set of (paid) consultants that can help the OEMs and ODMs, only getting me involved to review pull requests or for general advice. This would certainly reduce my current feeling of working at 100% and trying to avoid burnout.

As a first step, I've created an official page that will list any consulting companies that I feel are suitable to recommend for help with fwupd and the LVFS. The hardware vendors would love to throw money at this stuff, so they don't have to care about upstream project release schedules and dealing with a grumpy maintainer like me. I've pinged the usual awesome people like Igalia, and hopefully more companies will be added to this list during the next few days.

If you do want your open-source consultancy to be added, please email me a two paragraph corporate-friendly blurb I can include on that new page, also with a link I can use for the "more details" button. If you're someone I've not worked with before, you should be in a position to explain the difference between a capsule update and a DFU update, and be able to tell me what a version format is. I don't want to be listing companies that don't understand what fwupd actually is :)

19 Nov 2019 12:26pm GMT

18 Nov 2019

Richard Hughes: Google and fwupd sitting in a tree

I've been told by several sources (but not by Google directly, heh) that from Christmas onwards the "Designed for ChromeBook" sticker requires hardware vendors to use fwupd rather than random non-free binaries. This does make a lot of sense for Google, as all the firmware flash tools I've seen the source for are often decades old, contain layers upon layers of abstractions, have dubious input sanitisation and are quite horrible to use. Many are setuid, which doesn't make me sleep well at night, and I suspect it doesn't help the security team at Google sleep either. Most vendor binaries are built for the specific ODM hardware device, and all but one of them use no kind of source control or formal review process.

The requirement from Google has caused mild panic among silicon suppliers and ODMs, as they're having to actually interact with an open source upstream project and a slightly grumpy maintainer that wants to know lots of details about hardware that doesn't implement one of the dozens of existing protocols that fwupd supports. These are companies that have never had to deal with working with "outside" people to develop software, and it probably comes as quite a shock to the system. To avoid repeating myself these are my basic rules when adding support for a device with a custom protocol in fwupd:

Now, if all this makes me sound like a grumpy upstream maintainer then I apologize. I'm currently working with about half a dozen silicon suppliers who all failed some or all of the above bullets. I'm multiplexing myself with about a dozen companies right now, and supporting fwupd isn't actually my entire job at Red Hat. I'm certainly not going to agree to "signing off a timetable" for each vendor as none of the vendors actually pay me to do anything…

Given interest in fwupd has exploded in the last year or so, I wanted to post something like this rather than have a 10-email back and forth about my expectations with each vendor. Some OEMs and even ODMs are now hiring developers with Linux experience, and I'm happy to work with them as fwupd becomes more important. I've already helped quite a few developers at random vendors get up to speed with fwupd and would be happy to help more. As the importance of fwupd and the LVFS grows more and more, vendors will need to hire developers who can build, extend and support their hardware. As fwupd grows, I'll be asking vendors to do more of the work, as "get upstream to do it" doesn't scale.

18 Nov 2019 3:41pm GMT

Matthew Garrett: Extending proprietary PC embedded controller firmware

I'm still playing with my X210, a device that just keeps coming up with new ways to teach me things. I'm now running Coreboot full time, so the majority of the runtime platform firmware is free software. Unfortunately, the firmware that's running on the embedded controller (a separate chip that's awake even when the rest of the system is asleep and which handles stuff like fan control, battery charging, transitioning into different power states and so on) is proprietary and the manufacturer of the chip won't release data sheets for it. This was disappointing, because the stock EC firmware is kind of annoying (there's no hysteresis on the fan control, so it hits a threshold, speeds up, drops below the threshold, turns off, and repeats every few seconds - also, a bunch of the Thinkpad hotkeys don't do anything) and it would be nice to be able to improve it.

A few months ago someone posted a bunch of fixes, a Ghidra project and a kernel patch that lets you overwrite the EC's code at runtime for purposes of experimentation. This seemed promising. Some amount of playing later and I'd produced a patch that generated keyboard scancodes for all the missing hotkeys, and I could then use udev to map those scancodes to the keycodes that the thinkpad_acpi driver would generate. I finally had a hotkey to tell me how much battery I had left.

But something else included in that post was a list of the GPIO mappings on the EC. A whole bunch of hardware on the board is connected to the EC in ways that allow it to control them, including things like disabling the backlight or switching the wifi card to airplane mode. Unfortunately the ACPI spec doesn't cover how to control GPIO lines attached to the embedded controller - the only real way we have to communicate is via a set of registers that the EC firmware interprets and does stuff with.

One of those registers in the vendor firmware for the X210 looked promising, with individual bits that looked like radio control. Unfortunately writing to them does nothing - the EC firmware simply stashes that write in an address and returns it on read without parsing the bits in any way. Doing anything more with them was going to involve modifying the embedded controller code.

Thankfully the EC has 64K of firmware and is only using about 40K of that, so there's plenty of room to add new code. The problem was generating the code in the first place and then getting it called. The EC is based on the CR16C architecture, which binutils supported until 10 days ago. To be fair it didn't appear to actually work, and binutils still has support for the more generic version of the CR16 family, so I built a cross assembler, wrote some assembly and came up with something that Ghidra was willing to parse except for one thing.

As mentioned previously, the existing firmware code responded to writes to this register by saving it to its RAM. My plan was to stick my new code in unused space at the end of the firmware, including code that duplicated the firmware's existing functionality. I could then replace the existing code that stored the register value with code that branched to my code, did whatever I wanted and then branched back to the original code. I hacked together some assembly that did the right thing in the most brute force way possible, but while Ghidra was happy with most of the code it wasn't happy with the instruction that branched from the original code to the new code, or the instruction at the end that returned to the original code. The branch instruction differs from a jump instruction in that it gives a relative offset rather than an absolute address, which means that branching to nearby code can be encoded in fewer bytes than going further. I was specifying the longest jump encoding possible in my assembly (that's what the :l means), but the linker was rewriting that to a shorter one. Ghidra was interpreting the shorter branch as a negative offset, and it wasn't clear to me whether this was a binutils bug or a Ghidra bug. I ended up just hacking that code out of binutils so it generated code that Ghidra was happy with and got on with life.

Writing values directly to that EC register showed that it worked, which meant I could add an ACPI device that exposed the functionality to the OS. My goal here is to produce a standard Coreboot radio control device that other Coreboot platforms can implement, and then just write a single driver that exposes it. I wrote one for Linux that seems to work.

In summary: closed-source code is more annoying to improve, but that doesn't mean it's impossible. Also, strange Russians on forums make everything easier.

18 Nov 2019 8:19am GMT

Julita Inca: LAS 2019: A GNOME + KDE conference

Thanks to the sponsorship of GNOME, I was able to attend the Linux App Summit 2019 held in Barcelona. This conference was hosted by two free desktop communities, GNOME and KDE. Usually the technologies used to create applications are GTK and Qt, and the objective of this conference was to present ongoing application projects that run on many Linux platforms and beyond, on both desktops and mobiles. The ecosystem involved, the commercial side and the project manager perspective were also presented over three core days. I had the chance to hear some talks as pictured: Adrien Plazas, Jordan and Tobias, and Florian are pictured first. The keynote was given by Mirko Boehm with the title "The Economics of FOSS"; also pictured are Valentin and Adam Jones from Freedesktop SDK, and Nick Richards, who pointed out the "write" strategy. You can find more details on Twitter.

Women's presence was very noticeable at this conference. As shown in the following picture, the UX designer Robin presented a communication approach to understand what users want, the developer Heather Ellsworth explained her work at Ubuntu making GNOME Snap apps, and the enthusiastic Aniss from the OpenStreetMap community gave a lightning talk about her experiences making a FOSS community stronger. At the bottom of the picture we see the point of view of the database admin, Shola; the KDE developer, Hannah; and the closing ceremony presented by Muriel (local team organizer).

On Friday, some BoFs were held. The engagement BoF led by Nuritzi is pictured first, followed by the KDE team. The Snap Packaging Workshop happened in the meeting room.

Lightning talks were also part of this event at the end of each day. Nuritzi was given a prize for her effort in running the event. Thanks Julian & Tobias for joining me to see Park Güell. Social events were also arranged: we started a tour from the Casa Batlló and walked towards the Gothic quarter. The tours happened at night after the talks, and lasted 1.5 hours. Food expenses were covered by GNOME in my case as well as for other members. Thanks!

My participation consisted of a talk in the unconference part; I also organized the GNOME games with Jorge (a local organizer) and wrote some GTK code in C with Matthias. The games started with the "Glotones", where we used flans to eat quickly; the "wise man", where lots of Linux questions were asked; and the "Sing or die" game, where the participants changed the lyrics of catchy songs using the words GNOME, KDE and LinuxAppSummit. Some of the moments were pictured as follows: the imagination of the teams was fantastic, and they sang and created "geek" choreographies as we requested. One of the games that lasted until the very end was "Guessing the word"; the words depicted in the photo are LAS, root, and GPL, played by Nuritzi, Neil, and Jordan, respectively. It was lovely to see again long-time GNOME members such as Florian, who is always supporting my ideas for the GNOME games 🙂 the generous, intelligent and funny Javier Jardon, and the famous GNOME developer Zeeshan, who also loves Rust and airplanes.

It was also delightful to meet new people. I met GNOME people such as Ismael, and Daniel, who helped me debug my mini GTK code. I also met KDE people such as Albert and Muriel. In the last photo, we are pictured with the "wise man" and the "flan man".

Special thanks to the local organizers Jorge and Aleix, to Ismael for supporting me through my flu for almost the whole conference, and to Nuritzi for the sweet chocolates she gave me. The group photo was a success, and in general I liked the LAS 2019 event in Barcelona.

Barcelona is a place with remarkable architecture and I enjoyed the time walking around there…

Thanks again GNOME! I will finish the image-reconstruction GTK code I started at this event, and in the near future make it run in parallel using HPC machines.

18 Nov 2019 3:21am GMT

17 Nov 2019

Jussi Pakkanen: What is -pipe and should you use it?

Every now and then you see people using the -pipe compiler argument. This is particularly common in vintage handwritten makefiles. Meson even uses the argument by default, but what does it actually do? The GCC manpages say the following:

-pipe
Use pipes rather than temporary files for communication
between the various stages of compilation. This fails
to work on some systems where the assembler is unable to
read from a pipe; but the GNU assembler has no trouble.

So presumably this is a compile speed optimization. What sort of an improvement does it actually provide? I tested this by compiling LLVM on my desktop machine both with and without the -pipe command line argument. Without it I got the following time:

ninja 14770,75s user 799,50s system 575% cpu 45:04,98 total

Adding the argument produced the following timing:

ninja 14874,41s user 791,95s system 584% cpu 44:41,22 total

This is an improvement of less than one percent. Given that I was using my machine for other things at the same time, the difference is most likely statistically insignificant.

This argument may have been needed in the ye olden times of supporting tens of broken commercial unixes. Nowadays the only platform where this might make a difference is Windows, given that its file system is a lot slower than Linux's. But is its pipe implementation any faster? I don't know, and I'll let other people measure that.

The "hindsight is perfect" design lesson to be learned

Looking at this now, it is fairly easy to see that this command line option should not exist. Punting the responsibility of knowing whether files or pipes are faster (or even work) on any given platform to the user is poor usability. Most people don't know that and performance characteristics of operating systems change over time. Instead this should be handled inside the compiler with logic roughly like the following:

if(assembler_supports_pipes(...) &&
   pipes_are_faster_on_this_platform(...)) {
    communicate_with_pipes();
} else {
    communicate_with_files();
}

17 Nov 2019 11:28pm GMT

16 Nov 2019

Molly de Blanc: GNOME Patent Troll Defense Fund reaches nearly 4,000 donors!

A lot has happened since our announcement that Rothschild Patent Imaging LLC was alleging that GNOME is violating one of their patents. We wanted to provide you with a brief update of what has been happening over the past few weeks.

Legal cases can be expensive, and the cost of a patent case can easily reach over a million dollars. As a small non-profit, we decided to reach out to our community and ask for financial support towards our efforts to keep patent trolls out of open source. More than 3,800 of you have stepped up and contributed to the GNOME Patent Troll Legal Defense Fund. We'd like to sincerely thank everyone who has donated. If you need any additional documentation for an employer match, please contact us.

Individuals aren't the only supporters of this initial fundraiser. The Debian Project generously reached out with a donation and Igalia also donated to support our legal efforts.

There's been a wonderful outpouring of support from the free and open source software communities. The Software Freedom Conservancy issued a statement. Meanwhile the Open Invention Network is lending a hand in the search for examples of prior art.

We set ourselves an ambitious fundraising goal of $1.1 million to support our defense. We expect the majority of this to be raised from corporate sponsorship, but we're going to keep working for more individual and community donations. Please share our GiveLively donation page with your social media networks. If you're a non-profit that has issued (or is interested in issuing) a statement of support, we'd love to hear from you.

If you want to receive updates on the case, please sign up for the GNOME Legal Updates List.

16 Nov 2019 3:56pm GMT

15 Nov 2019

Hans de Goede: Creating an USB3 OTG cable for the Thinkpad 8

The Lenovo Thinkpad 8 and the Asus T100CHI both have a USB3 micro-B connector, but using a standard USB3 OTG (USB3 micro-B to USB3-A receptacle) cable results in only USB2 devices working; USB3 devices are not recognized.

Searching the internet shows that many people have this problem and that the solution is to find a USB3 micro-A to USB3-A receptacle cable. This sounds like nonsense to me, as micro-B really is micro-AB and is supposed to use the ID pin to automatically switch between modes depending on the cable used; and this does work for the USB2 parts of the micro-B connector on the Thinkpad. Yet people do claim success with such cables (with a more square micro-A connector, instead of the trapezoid micro-B connector). The only problem is that such cables are not for sale anywhere.

So I guessed that this means they have swapped the Rx and Tx superspeed pairs on the USB3-only part of the micro-B connector, and I decided to cut open one of my USB3 micro-B to USB3-A receptacle cables and swap the superspeed pairs. Here is what the cable looks like when it is cut open:



If you are going to make such a cable yourself: to get to this point I first removed the outer plastic insulation (part of it is still there on the right side of this picture). Then I folded away the shield wires until they were all on one side (the wires at the top of the picture). After this I removed the metal foil underneath the shield wires.

Having removed the first 2 layers of shielding, this reveals 4 wires in the standard USB2 colors (red, black, green and white) and 2 separately shielded cable pairs. In the picture above the separately shielded pairs have been cut, giving us 4 pairs, 2 on each end of the cable; the shielding has been removed from 3 of the 4 pairs, and you can still see the shielding on the 4th pair.

A standard USB3 cable uses the following color codes: red and black for power and ground, green and white for the USB2 data pair, and two separately shielded superspeed pairs, one using purple and orange wires and the other using blue and yellow.

So to swap RX and TX we need to connect purple to blue / blue to purple and orange to yellow / yellow to orange, resulting in:



Note the wires are just braided together here, not soldered yet. This is a good moment to carefully test the cable. Also note that the superspeed wire pairs must be length matched, so you need to cut and strip all 8 wires to the same length! If everything works you can put some solder on those braided-together wires, re-test after soldering, and then cover them with some heat-shrink tube:



And then cover the entire junction with a bigger heat-shrink-tube:



And you have a superspeed capable cable even though no one will sell you one.

Note that the Thinkpad 8 supports ACA mode, so if you get an ACA-capable "Y" cable or an ACA charging hub then you can charge and use the Thinkpad 8 USB port at the same time. Typically ACA "Y" cables or hubs are USB2 only, so the superspeed mod from this blogpost will not help with those. The Asus T100CHI has a separate USB2 micro-B just for charging, so you do not need anything special there to charge and connect a USB device at the same time.

15 Nov 2019 8:10pm GMT

Travis Reitter: Catching Java exceptions in Swift via j2objc

Java can be converted to Objective-C and called from Swift code in an iOS app with the tool j2objc. I wrote this piece on how to handle Java exceptions from this Swift code.

15 Nov 2019 5:04am GMT

Travis Reitter: Long-term betting on dependencies

Years ago, when I started planning my cocktail app, I looked at options for code re-use between Android and iOS. Critically, I knew it would take me a while to get the first platform release out so I was worried any tool I expected to use might be unmaintained by then.

I found the tool j2objc and it looked really promising:

So, I could always adapt the tool for my own needs if it did get abandoned. But the maintainers had significant motivation and resources to maintain the project so that seemed like a low risk anyhow.

I didn't imagine Android and iOS would change so much in the time it took to get my Android app completed. Both platforms even changed their primary development language in that span along with a lot of best practices and recommended components. Even though both platforms do a great job of keeping their documentation up-to-date and the most relevant pieces easy to find, learning to develop on one of these platforms is challenging. There still is a lot of outdated information out there (be it Stack Overflow posts or an incredible number of tutorials) and there are stacks and stacks of components to learn. Expectations for modern mobile apps are really high so the number of details to get right can be daunting.

Then to build your app for the other platform you get to start all over! :)

Thankfully, my bet on j2objc proved to be a good one. It's actively maintained by very helpful developers and works as expected. I've completed most of the risky work in porting the core of my app to iOS and any work I do on that core benefits the apps on both platforms.

There are very few compromises I have to make because language features in Java map surprisingly well to both Objective-C and Swift.

But one important exception remained. I'll cover that in a subsequent post.


15 Nov 2019 5:03am GMT

14 Nov 2019

Christian Kellner: fwupd and bolt power struggles

As readers of this blog might remember, there is a mode where the firmware (BIOS) is responsible for powering the Thunderbolt controller. This means that if no device is connected to the USB type C port the controller will be physically powered down. The obvious upside is battery savings. The downside is that, for a system in that state, we cannot tell if it has a Thunderbolt controller, nor determine any of its properties, like firmware version. Luckily, there is an interface to tell the firmware (BIOS) to "force-power" the controller. The interface is a write-only sysfs attribute. The writes are not reference counted, i.e. two separate commands to enable the force-power state followed by a single disable will indeed disable the controller. For some time boltd and the firmware update daemon both directly poked that interface. This led to some interference, leading in turn to strange timing bugs. The canonical example goes like this: fwupd force-powers the controller, uevents are triggered and Thunderbolt entries appear in sysfs. The boltd daemon is started via udev+systemd activation. The daemon initializes itself and starts enumerating and probing the Thunderbolt controller. Meanwhile fwupd is done with its thing and cuts the power to the controller. That makes boltd and the controller sad because they were still in the middle of getting to know each other.

boltctl force-power

boltctl power -q can be used to inspect the current force power settings

To fix this issue, boltd gained a force-power D-Bus API and fwupd in turn gained support for using that new API. No more fighting over the force-power sysfs interface. So far so good. But an unintended side-effect of that change was that now bolt was always being started, indirectly by fwupd via D-Bus activation, even if there was no Thunderbolt controller in the system to begin with. Since the daemon currently does not exit even if there is no Thunderbolt hardware [1], you have a system daemon running, but not doing anything useful. This understandably made some people unhappy (rhbz#1650881, lp#1801796). I recently made a small change to fwupd which should do away with this issue: before making a call to boltd, fwupd now itself checks if the force-power facility is present. If not, it doesn't bother asking boltd and starting it in the process. The change is included in fwupd 1.3.3. Now both machines and people should be happy, I hope.

Footnotes:

  1. That is a complicated story that needs new systemd features. See #92 for the interesting technical details.

14 Nov 2019 12:52pm GMT

13 Nov 2019

Daniel García Moreno: LAS 2019, Barcelona

The event

The Linux App Summit (LAS) is a great event that brings together a lot of Linux application developers from the bigger communities. It's organized by GNOME and KDE in collaboration, and it's a good place to talk about the Linux desktop, application distribution and development.

This year the event was organized in Barcelona, which is not too far from my home town, Málaga, so I wanted to be there.

I sent a talk proposal and it was accepted, so I talked about distributing services with Flatpak and problems related to service deployment in a flatpaked world.

By clicking on this image you can find my talk in the event streaming. The sound is not too good and my accent doesn't help, but there it is :D

It was a really great event, with really good talks about different topics: we had some technical talks, some talks about design, talks about language, about distribution, about the market and economics, and at least two about "removing" the system tray 😋

The talk about the "future" inclusion of payments in Flathub was really interesting, because I think this will give people a new incentive to write and publish apps on Flathub, and it could be a great step towards getting donations for developers.

Another talk that I liked was the one about the maintenance of Flatpak repositories; it's always interesting to know how things work, and this talk gave an easy introduction to ostree, Flatpak, repositories and application distribution.

Besides the talks, this event is really interesting for the people it brings together. I talked with a lot of people, though not too much because I'm a shy person, but I had the opportunity to talk a bit with some Fractal developers, and during a coffee chat with Jordan Petridis we had time to share some ideas about a cool new functionality that maybe we can implement in the near future, thanks to the Outreachy program and maybe some help from the GStreamer people.

I'm also very happy that I was able to spend some time talking with Martín Abente about Sugar Labs, the Hack computer and the different ways to teach kids with free software. Martín is a really nice person and I really enjoyed meeting him and sharing some thoughts.

The city

This is not my first time in Barcelona, I was here at the beginning of this year, but it's a great city and I had no time to visit all the places the first time.

So I spent Thursday afternoon doing some tourism, visiting the "Sagrada Familia" and the "Montjuïc" fountain.

If you have not been to Barcelona and you have the opportunity to come here, don't hesitate: it's a really good city, with great architecture to admire, really nice culture and people, and good food to enjoy.

Thank you all

I was sponsored by the GNOME Foundation, and I'm really thankful for this opportunity to come here, give a talk and share some time with the great people that make the awesome Linux and open source community possible.

I want to thank my employer Endless, because it's really a privilege to have a job that allows this kind of interaction with the community, and my team Hack, because I've missed some meetings and was not very responsive during the week.

And I want to thank the LAS organization, because this was a really good event. Good job, you can be very proud.

13 Nov 2019 11:00pm GMT

Sebastian Dröge: The GTK Rust bindings are not ready yet? Yes they are!

When talking to various people at conferences in the last year, a recurring topic was that they believed that the GTK Rust bindings are not ready for use yet.

I don't know where that perception comes from but if it was true, there wouldn't have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn't be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that was true, it is not true anymore nowadays. But even more important: unless you want to create your own special widgets, you don't need that. Many examples and tutorials in other languages make use of inheritance/subclassing for the applications' architecture, but that's because it is the idiomatic pattern in those languages. However, in Rust other patterns are more idiomatic, and even for those examples and tutorials in other languages it wouldn't be the one and only option to design applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.
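
To give a feel for it, here is a minimal sketch of a GTK application in Rust, written roughly against the gtk-rs API of this era; constructor signatures have shifted between releases, so treat the exact calls as illustrative:

use gtk::prelude::*;

fn main() {
    // In this era of gtk-rs, Application::new returned a Result.
    let application = gtk::Application::new(
        Some("org.example.HelloWorld"),
        Default::default(),
    )
    .expect("failed to initialize GTK application");

    application.connect_activate(|app| {
        let window = gtk::ApplicationWindow::new(app);
        window.set_title("Hello from Rust");
        window.set_default_size(350, 70);

        let label = gtk::Label::new(Some("The GTK Rust bindings are ready!"));
        window.add(&label);

        window.show_all();
    });

    application.run(&std::env::args().collect::<Vec<_>>());
}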

And if something is actually missing for your use-case or something is not working as expected, please let us know. We'd be happy to make your life easier!

P.S.

Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want something different than the usual signal/callback spaghetti code, also take a look at those.

13 Nov 2019 3:02pm GMT