18 Jan 2020


Julian Sparber: Digitizing an analog water meter

For a university project I spent some time working on a way to digitally track the water consumption in my shared flat. Since nowadays everything is about data collection, I wanted to give this idea a shot. In my flat we have a simple analog water meter.

Sadly, my meter is really dirty under the glass and I couldn't manage to clean it. This will cause problems down the road.

The initial idea was simple: add a webcam on top of the meter and read the number on the upper half of it. But I soon realized that the project wouldn't be that simple. The number only shows usage in steps of 1 m^3 (1000 liters), which means it would change only every couple of days. That's useless and boring. So I had to read the analog gauges, which show the fractions in 0.0001, 0.001, 0.01 and 0.1 m^3. This discovery blocked me for a while; my first reaction was "this is way too complicated".

I have no idea how I found or what reminded me of OpenCV, but that was the solution. OpenCV is an awesome tool for computer vision; it has many features like facial recognition, gesture recognition … and also shape recognition. And what's an analog gauge? It's just a circle with a triangular arrow indicating the value.

Let's jump into the project

I'm using a Raspberry Pi 1, a Logitech webcam, a juice bottle and some LEDs out of a bicycle light.

You need to find a juice bottle which fits nicely over the water meter. Cut off the top and bottom of the bottle and replace one side with cardboard or wood with a hole in the middle. Attach the webcam centered over the hole and place an LED on each side of the webcam to illuminate the water meter (you may need to cover them with paper to reduce reflections on the plastic of the meter).

The first step is to set up the Raspberry Pi 1 (it doesn't have to be an RPi; any computer running Linux should work fine). You have to install a Linux distro on the device; I used Arch Linux. You can find a guide to install it on a Raspberry Pi 1 here.

After the initial setup you need to install git, python3 and opencv:
sudo pacman -S python3 git opencv
Clone the needed code to a known location:
git clone https://github.com/jsparber/water-meter-code.git
You need to create a new git repository to store the data and clone it to /home/alarm/water-meter-data/. If you want to use a different name or location, you need to change it in measure.sh.

On the RPi I have a cron job which runs a script every minute. The script turns on the LEDs, takes a picture, and then turns the LEDs off again to save some energy.
With crontab -e you can modify the cron jobs; add * * * * * /home/alarm/code/take_photo.sh to run take_photo.sh every minute. You may need to adjust the path depending on where you cloned the git repo.

After the picture is taken, the script calls a second script which uses OpenCV to read the gauges and appends the found values to a file, which is then pushed to the git repo. I had an issue with the webcam: after some time my script couldn't access it anymore. I solved it by rebooting the RPi whenever it wasn't possible to take a picture. (From a quick search on the internet, it seems most people solved this issue by changing the cam.)
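
The actual capture logic lives in the shell scripts from the repo above, but the idea is simple enough to sketch in Python. This hypothetical version assumes the LEDs are powered through GPIO pin 18 (the switch described below) and that the webcam is the first video device; the output path is just an example:

#!/usr/bin/env python3
# Hypothetical sketch of the capture step: LEDs on, grab a frame, LEDs off.
import time
import cv2
import RPi.GPIO as GPIO

LED_PIN = 18  # BCM numbering; the GPIO pin driving the transistor switch

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
GPIO.output(LED_PIN, GPIO.HIGH)  # LEDs on
time.sleep(1)                    # give the camera time to adjust exposure

cam = cv2.VideoCapture(0)        # first video device
ok, frame = cam.read()
cam.release()
GPIO.output(LED_PIN, GPIO.LOW)   # LEDs off
GPIO.cleanup()

if ok:
    cv2.imwrite("/home/alarm/water-meter-data/latest.jpg", frame)
else:
    raise SystemExit("camera failed")  # a wrapper can reboot on repeated failures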

A nice optional feature is the homemade switch connected to the RPi shown in the picture above. The schematic is really simple: just a 1 kOhm resistor, a transistor and a USB extension cable. The transistor is switched on via GPIO pin 18 of the Raspberry Pi and supplies power to the connected USB device. In this case I used it to connect the LEDs.

Inside the USB extension cable there should be 4 differently colored wires. We need to cut only the red one and connect it the way the schematic above shows, where red_in goes to the male connector and red_out to the female side of the cable. GND needs to be connected to a ground pin of the Raspberry Pi. If you need to power something which requires more than 500 mA, you should connect the ground directly to the power source, the same way as you did with the +5V red wire. You need to use the same power source for the switch and the RPi or it may not work.

And now the OpenCV part

First my code finds the circles of the right size in the image, and uses the two leftmost ones as the gauges for 0.1 m^3 and 0.01 m^3 (sadly, since my meter is so dirty, I can't reliably read the other two values).

The input image. The found circles of the right size

As the second step I create a mask which filters out everything that's not red (remember, the arrows are red). I take the contour of the mask which encloses the center of the circle I want to read. Then the code finds the farthest point of that contour from the center of the circle, which is the tip of the arrow. The software then creates a virtual line between the center and the tip, which is used to calculate the angle, and the angle is basically the value shown on the gauge. The same thing is repeated for the other gauges.

The mask with only red areas showing. The arrows found in the source image. These lines are used to calculate the angle.

This system sounds extremely simple, but making everything work well together isn't that easy. OpenCV requires a lot of tuning, e.g. selecting the right red color range so that detection works well but keeps working even when the lighting changes.
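
To make the pipeline concrete, here is a condensed sketch in Python. This is not the code from my repo; the HSV thresholds and radius limits are made-up values that would need tuning for a real meter:

import cv2
import numpy as np

img = cv2.imread("meter.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. Find circles of a plausible gauge size (radius limits are guesses).
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=50, minRadius=20, maxRadius=60)

# 2. Mask everything that is not red. Red wraps around hue 0 in HSV,
#    so two ranges are combined (threshold values are guesses too).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))

# OpenCV 4 return convention.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def read_gauge(cx, cy):
    # The contour enclosing the circle center is this gauge's arrow.
    arrow = next(c for c in contours
                 if cv2.pointPolygonTest(c, (float(cx), float(cy)), False) >= 0)
    # The tip of the arrow is the contour point farthest from the center.
    tip = max(arrow[:, 0, :], key=lambda p: np.hypot(p[0] - cx, p[1] - cy))
    # Angle of the center->tip line, clockwise from 12 o'clock.
    angle = np.degrees(np.arctan2(tip[0] - cx, cy - tip[1])) % 360
    return angle / 36.0  # 360 degrees == 10 units on the dial

# The two leftmost detected circles are the 0.1 and 0.01 gauges
# (assuming detection succeeded).
found = np.round(circles[0]).astype(int).tolist()
for cx, cy, r in sorted(found, key=lambda c: c[0])[:2]:
    print(read_gauge(cx, cy))

(Note that on many meters adjacent dials rotate in opposite directions, so the angle-to-value mapping may need flipping per gauge.)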

Conclusions

I learned a lot during this project, especially about OpenCV, which I had never used before. Sadly my water meter was really dirty, so I couldn't read all the values and I also got some wrong readings. So far I haven't decided what I want to use the collected data for, therefore I didn't spend much time on handling read errors and the problems that appear when the gauges make a full turn. An easy solution would be to just keep an internal count of the water and fall back to the memorized value whenever we are unsure about a reading.

The final plot can be found here. All values are saved directly, without filtering; this gives the plot quite some noise, but it allows changing the filter function later and adapting it to future needs.

My code is published on GitHub:

Some sources which helped me a lot, many thanks to them:

18 Jan 2020 10:22am GMT

17 Jan 2020


Tobias Bernard: Doing Things That Scale

There was a point in my life when I ran Arch, had an elaborate personalized terminal prompt, and my own custom icon theme. I stopped doing all these things at various points for different reasons, but underlying them all is a general feeling that it's taken me some time to figure out how to articulate: I no longer want to invest time in things that don't scale.

What I mean by that in particular is things that

  1. Only fix a problem for myself (and maybe a small group of others)
  2. Have to be maintained in perpetuity (by me)

Not only is it highly wasteful for me to come up with a custom solution to every problem, but in most cases those solutions would be worse than ones developed in collaboration with others. It also means nobody will help maintain these solutions in the long run, so I'll be stuck with extra work, forever.

Conversely, things that scale

  1. Fix the problem in a way that will just work™ for most people, most of the time
  2. Are developed, used, and maintained by a wider community

A few examples:

I used to have an Arch GNU/Linux setup with tons of tweaks and customizations. These days I just run vanilla Fedora. It's not perfect, but for actually getting things done it's way better than what I had before. I'm also much happier knowing that if something goes seriously wrong I can reinstall and get to a usable system in half an hour, as opposed to several hours of tedious work for setting up Arch. Plus, this is a setup I can actually install for friends and relatives, because it does a decent job at getting people to update when I'm not around.

Until relatively recently I always set a custom monospace font in my editor and terminal when setting up a new machine. At some point I realized that I wouldn't have to do that if the default was nicer, so I just opened an issue. A discussion ensued, a better default was agreed upon, and voilà - my problem was solved. One less thing to do after every install. And of course, everyone else now gets a nicer default font too!

I also used to use ZSH with a configuration framework and various plugins to get autocompletion, git status, a fancy prompt etc. A few years ago I switched to fish. It gives me most of what I used to get from my custom ZSH thing, but it does so out of the box, no configuration needed. Of course ideally we'd have all of these things in the default shell so everyone gets these features for free, but that's hard to do unfortunately (if you're interested in making it happen I'd love to talk!).

Years ago I used to maintain my own extension set to the Faenza icon theme, because Faenza didn't cover every app I was using. Eventually I realized that trying to draw a consistent icon for every single third party app was impossible. The more icons I added, the more those few apps that didn't have custom icons stuck out. Nowadays when I see an app with a poor icon I file an issue asking if the developer would like help with a nicer one. This has worked out great in most cases, and now I probably have more consistent app icons on my system than back when I used a custom theme. And of course, everyone gets to enjoy the nicer icons, not only me.

Some other things that don't scale (in no particular order):

The free software community tends to celebrate custom, hacky solutions to problems as something positive ("It's so flexible!"), even when these hacks are only necessary because things are broken by default. It's nice that people with a lot of time and technical skills can fix their own problems, but the benefits from that don't automatically trickle down to everybody else.

If we want ethical technology to become accessible to more people, we need to invest our (very limited) time and energy in solutions that scale. This means good defaults instead of endless customization, apps instead of scripts, "it just works" instead of "read the fucking manual". The extra effort to make proper solutions that work for everyone, rather than hacks just for ourselves can seem daunting, but is always worth it in the long run. Just as with accessibility and commenting your code, the person most likely to benefit from it is you, in the future.

17 Jan 2020 9:45pm GMT

Hans de Goede: Plug and play support for (Gaming) keyboards with a builtin LCD panel

A while ago, as a spin-off of my project to improve support for Logitech wireless keyboards and mice, I also did some work on improving support for (Gaming) keyboards with a builtin LCD panel.

Specifically, if you have a Logitech MX5000, G15, G15 v2 or G510 and you want the LCD panel to show something somewhat useful, then on Fedora 31 you can now install the lcdproc package and it will automatically recognize the keyboard and show "top"-like information on it. No need to manually write an LCDd.conf or anything; this works fully plug and play:

sudo dnf install lcdproc
sudo udevadm trigger


If you have an MX5000 and you do not want the LCD panel to show "top"-like info, you may still want to install the mx5000tools package; it will automatically send the system time to the keyboard, after which it will display the time.

Once the 5.5 kernel becomes available as an update for Fedora, you will also be able to use the keys surrounding the LCD panel to control the lcdproc menus on it. The 5.5 kernel will also export key-backlight brightness control through the standardized /sys/class/leds API, so that you can control it from e.g. GNOME control-center's power settings, and you get a nice OSD when toggling the brightness level using the key on the keyboard.
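
Once a device shows up under /sys/class/leds, a script can read and set the backlight by touching plain sysfs files. A minimal sketch in Python; the LED device name below is hypothetical (look under /sys/class/leds/ for the real one), and writing usually requires root:

from pathlib import Path

led = Path("/sys/class/leds/g15::kbd_backlight")  # hypothetical device name

# Every LED device exposes brightness and max_brightness files.
max_brightness = int((led / "max_brightness").read_text())
(led / "brightness").write_text(str(max_brightness // 2))  # set 50%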

The 5.5 kernel will also make the "G" keys send standard input events (evdev events). Once userspace support for the new key-codes they send has landed, this will allow e.g. binding them to actions in GNOME control-center's keyboard settings. But only under Wayland, as the new keycodes are > 255 and X11 does not support this.

17 Jan 2020 1:39pm GMT

16 Jan 2020


Alberto Ruiz: GTK: OSX a11y support

Everybody knows that I have always been a firm believer in Gtk+'s potential to be a great cross-platform toolkit beyond Linux. GIMP and Inkscape, for example, are loyal users that ship builds for those platforms. The main challenge is the small number of maintainers running, testing and improving those platforms.

Gtk+ has a few shortcomings, and one of the biggest is the lack of a11y support outside of Linux. Since I have regular access to a modern OSX machine I decided to give this a go (and maybe learn some Obj-C in the process).

So I started by having a look at how ATK works and how it relates to the GTK DOM; my main goal was to have a GTK3 module that would walk through the toplevels and build an OSX accessibility tree.

So my initial/naive attempt is in this git repo, which you can build by installing gtk from brew.

Some of the shortcomings that I have found to actually test this and move forward:

So this is my progress thus far, I think once I get to a point where I can iterate over the concept, it would be easier to start sketching the mapping between ATK and NSAccessibility. I would love feedback or help, so if you are interested please reach out by filing an issue on the gitlab project!

16 Jan 2020 5:49pm GMT

15 Jan 2020


Federico Mena-Quintero: Exposing C and Rust APIs: some thoughts from librsvg

Librsvg exports two public APIs: the C API that is in turn available to other languages through GObject Introspection, and the Rust API.

You could call this a use of the facade pattern on top of the rsvg_internals crate. That crate is the actual implementation of librsvg, and exports an interface with many knobs that are not exposed from the public APIs. The knobs are to allow for the variations in each of those APIs.

This post is about some interesting things that have come up during the creation/separation of those public APIs, and the implications of having an internals library that implements both.

Initial code organization

When librsvg was being ported to Rust, it just had an rsvg_internals crate that compiled as a staticlib to a .a library, which was later linked into the final librsvg.so.

Eventually the code got to the point where it was feasible to port the toplevel C API to Rust. This was relatively easy to do, since everything else underneath was already in Rust. At that point I became interested in also having a Rust API for librsvg - first to port the test suite to Rust and be able to run tests in parallel, and then to actually have a public API in Rust with more modern idioms than the historical, GObject-based API in C.

Version 2.45.5, from February 2019, is the last release that only had a C API.

Most of the C API of librsvg is in the RsvgHandle class. An RsvgHandle gets loaded with SVG data from a file or a stream, and then gets rendered to a Cairo context. The naming of Rust source files more or less matched the C source files, so where there was rsvg-handle.c initially, later we had handle.rs with the Rustified part of that code.

So, handle.rs had the Rust internals of the RsvgHandle class, and a bunch of extern "C" functions callable from C. For example, for this function in the public C API:

void rsvg_handle_set_base_gfile (RsvgHandle *handle,
                                 GFile      *base_file);

The corresponding Rust implementation was this:

#[no_mangle]
pub unsafe extern "C" fn rsvg_handle_rust_set_base_gfile(
    raw_handle: *mut RsvgHandle,
    raw_gfile: *mut gio_sys::GFile,
) {
    let rhandle = get_rust_handle(raw_handle);        // 1

    assert!(!raw_gfile.is_null());                    // 2
    let file: gio::File = from_glib_none(raw_gfile);  // 3

    rhandle.set_base_gfile(&file);                    // 4
}
  1. Get the Rust struct corresponding to the C GObject.
  2. Check the arguments.
  3. Convert from C GObject reference to Rust reference.
  4. Call the actual implementation of set_base_gfile in the Rust struct.

You can see that this function takes in arguments with C types, and converts them to Rust types. It's basically just glue between the C code and the actual implementation.

Then, the actual implementation of set_base_gfile looked like this:

impl Handle {
    fn set_base_gfile(&self, file: &gio::File) {
        if let Some(uri) = file.get_uri() {
            self.set_base_url(&uri);
        } else {
            rsvg_g_warning("file has no URI; will not set the base URI");
        }
    }
}

This is an actual method for a Rust Handle struct, and takes Rust types as arguments - no conversions are necessary here. However, there is a pesky call to rsvg_g_warning, about which I'll talk later.

I found it cleanest, although not the shortest code, to structure things like this:

In the very first versions of the code where the public API was implemented in Rust, the extern "C" functions actually contained their implementation. However, after some refactoring, it turned out to be cleaner to leave those functions just with the task of converting C to Rust types and vice-versa, and put the actual implementation in very Rust-y code. This made it easier to keep the unsafe conversion code (unsafe because it deals with raw pointers coming from C) only in the toplevel functions.

Growing out a Rust API

This commit is where the new, public Rust API started. That commit just created a Cargo workspace with two crates: the rsvg_internals crate that we already had, and a librsvg_crate with the public Rust API.

The commits over the subsequent couple of months are of intense refactoring:

Needing to call a C macro

However, there was a little problem. The Rust code cannot call g_warning, a C macro in glib that prints a message to stderr or uses structured logging. Librsvg used that to signal conditions where something went (recoverably) wrong, but there was no way to return a proper error code to the caller - it's mainly used as a debugging aid.

This is what rsvg_internals did to be able to call that C macro:

First, the C code exports a function that just calls the macro:

/* This function exists just so that we can effectively call g_warning() from Rust,
 * since glib-rs doesn't bind the g_log functions yet.
 */
void
rsvg_g_warning_from_c(const char *msg)
{
    g_warning ("%s", msg);
}

Second, the Rust code binds that function to be callable from Rust:

pub fn rsvg_g_warning(msg: &str) {
    extern "C" {
        fn rsvg_g_warning_from_c(msg: *const libc::c_char);
    }

    unsafe {
        rsvg_g_warning_from_c(msg.to_glib_none().0);
    }
}

However! Since the standalone librsvg_crate does not link to the C code from the public librsvg.so, the helper rsvg_g_warning_from_c is not available!

A configuration feature for the internals library

And yet! Those warnings are only meaningful for the C API, which is not able to return error codes in all situations. However, the Rust API is able to do that, and so doesn't need the warnings printed to stderr. My first solution was to add a build-time option for whether the rsvg_internals library is being built for the C library, or for the Rust one.

In case we are building for the C library, the code calls rsvg_g_warning_from_c as usual.

But in case we are building for the Rust library, that code is a no-op.

This is the bit in rsvg_internals/Cargo.toml to declare the feature:

[features]
# Enables calling g_warning() when built as part of librsvg.so
c-library = []

And this is the corresponding code:

#[cfg(feature = "c-library")]
pub fn rsvg_g_warning(msg: &str) {
    unsafe {
        extern "C" {
            fn rsvg_g_warning_from_c(msg: *const libc::c_char);
        }

        rsvg_g_warning_from_c(msg.to_glib_none().0);
    }
}

#[cfg(not(feature = "c-library"))]
pub fn rsvg_g_warning(_msg: &str) {
    // The only callers of this are in handle.rs. When those functions
    // are called from the Rust API, they are able to return a
    // meaningful error code, but the C API isn't - so they issue a
    // g_warning() instead.
}

The first function is the one that is compiled when the c-library feature is enabled; this happens when building rsvg_internals to link into librsvg.so.

The second function does nothing; it is what is compiled when rsvg_internals is being used just from the librsvg_crate crate with the Rust API.

While this worked well, it meant that the internals library was built twice on each compilation run of the whole librsvg module: once for librsvg.so, and once for librsvg_crate.

Making programming errors a g_critical

While g_warning() means "something went wrong, but the program will continue", g_critical() means "there is a programming error". For historical reasons Glib does not abort when g_critical() is called, except by setting G_DEBUG=fatal-criticals, or by running a development version of Glib.

This commit turned warnings into critical errors when the C API was called out of order, by using a similar rsvg_g_critical_from_c() wrapper for a C macro.

Separating the C-callable code into yet another crate

To recapitulate, at that point we had this:

librsvg/
|  Cargo.toml - declares the Cargo workspace
|
+- rsvg_internals/
|  |  Cargo.toml
|  +- src/
|       c_api.rs - convert types and return values, call into implementation
|       handle.rs - actual implementation
|       *.rs - all the other internals
|
+- librsvg/
|    *.c - stub functions that call into Rust
|    rsvg-base.c - contains rsvg_g_warning_from_c() among others
|
+- librsvg_crate/
   |  Cargo.toml
   +- src/
   |    lib.rs - public Rust API
   +- tests/ - tests for the public Rust API
        *.rs

At this point c_api.rs, with all the unsafe functions, looked out of place. That code is only relevant to librsvg.so (the public C API), not to the Rust API in librsvg_crate.

I started moving the C API glue to a separate librsvg_c_api crate that lives along with the C stubs:

+- librsvg/
|    *.c - stub functions that call into Rust
|    rsvg-base.c - contains rsvg_g_warning_from_c() among others
|    Cargo.toml
|    c_api.rs - what we had before

This made the dependencies look like the following:

      rsvg_internals
       ^           ^
       |             \
       |               \
librsvg_crate     librsvg_c_api
  (Rust API)             ^
                         |
                    librsvg.so
                      (C API)

And also, this made it possible to remove the configuration feature for rsvg_internals, since the code that calls rsvg_g_warning_from_c now lives in librsvg_c_api.

With that, rsvg_internals is compiled only once, as it should be.

This also helped clean up some code in the internals library. Deprecated functions that render SVGs directly to GdkPixbuf are now in librsvg_c_api and don't clutter the rsvg_internals library. All the GObject boilerplate is there as well now; rsvg_internals is mostly safe code except for the glue to libxml2.

Summary

It was useful to move all the code that dealt with incoming C types and outgoing C return values and errors into the same place, and separate it from the "pure Rust" code.

This took gradual refactoring and was not done in a single step, but it left the resulting Rust code rather nice and clean.

When we added a new public Rust API, we had to shuffle some code around that could only be linked in the context of a C library.

Compile-time configuration features are useful (like #ifdef in the C world), but they do cause double compilation if you need a C-internals and a Rust-internals library from the same code.

Having proper error reporting throughout the Rust code is a lot of work, but pretty much invaluable. The glue code to C can then convert and expose those errors as needed.

If you need both C and Rust APIs into the same code base, you may end up naturally using a facade pattern for each. It helps to gradually refactor the internals to be as "pure idiomatic Rust" as possible, while letting API idiosyncrasies bubble up to each individual facade.

15 Jan 2020 5:15pm GMT

Sébastien Wilmet: New essay: A DAG of components – for an internal architecture too

I've written a new essay: A DAG of components – for an internal architecture too

I've also set up a public git repository with the sources (and backup of the PDFs).

List of all my essays (two so far).

15 Jan 2020 5:51am GMT

14 Jan 2020


Alexander Larsson: Introducing GVariant schemas

GLib supports a binary data format called GVariant, which is commonly used to store various forms of application data. For example, it is used to store the dconf database and as the on-disk data in OSTree repositories.

The GVariant serialization format is very interesting. It has a recursive type system (based on the DBus types) and is very compact. At the same time it includes padding to correctly align types for direct CPU reads and has constant-time element lookup for arrays and tuples. This makes GVariant a very good format for efficient in-memory read-only access.

Unfortunately the APIs that GLib has for accessing variants are not always great. They are based on using type strings and accessing children via integer indexes. While this is very dynamic and flexible (especially when creating variants) it isn't a great fit for the case where you have serialized data in a format that is known ahead of time.

Some negative aspects are:

If you look at some other binary formats, like Google protobuf or Cap'n Proto, they work by describing the types your program uses in a schema, which is compiled into code that you use to work with the data.

For many use-cases this kind of setup makes a lot of sense, so why not do the same with the GVariant format?

With the new GVariant Schema Compiler you can!

It uses an interface definition language where you define the types, including extra information like field names and other attributes, from which it generates C code.

For example, given the following schema:

type Gadget {
  name: string;
  size: {
    width: int32;
    height: int32;
  };
  array: []int32;
  dict: [string]int32;
};

It generates (among other things) these accessors:

const char *    gadget_ref_get_name   (GadgetRef v);
GadgetSizeRef   gadget_ref_get_size   (GadgetRef v);
Arrayofint32Ref gadget_ref_get_array  (GadgetRef v);
const gint32 *  gadget_ref_peek_array (GadgetRef v,
                                       gsize    *len);
GadgetDictRef   gadget_ref_get_dict   (GadgetRef v);

gint32 gadget_size_ref_get_width  (GadgetSizeRef v);
gint32 gadget_size_ref_get_height (GadgetSizeRef v);

gsize  arrayofint32_ref_get_length (Arrayofint32Ref v);
gint32 arrayofint32_ref_get_at     (Arrayofint32Ref v,
                                    gsize           index);

gboolean gadget_dict_ref_lookup (GadgetDictRef v,
                                 const char   *key,
                                 gint32       *out);

Not only are these accessors easier to use and understand due to using C types and field names instead of type strings and integer indexes, they are also a lot faster.

I wrote a simple performance test that just decodes a structure over and over. It's clearly a very artificial test, but the generated code is over 600 times faster than the code using g_variant_get(), which I think still says something.

Additionally, the compiler has a lot of other useful features:

14 Jan 2020 3:02pm GMT

Christian Hergert: GtkSourceView on GTK 4

I spent some time this cycle porting GtkSourceView to GTK 4. It was a good opportunity to help me catch up on how GTK 4's internals have changed into something modern. It gave me a chance to fix a few pot-holes along the way too.

One of the pot-holes was one I left in GtkTextView years ago. When I plumbed the pixelcache into GTK 3's TextView I had only cached the primary text content. It seemed fine at the time because the gutters (used for line numbers) are just not that many pixels. So if we have to re-generate that every frame, so be it.

However, in a HiDPI world with 4k monitors on our laps, things start to get… warm. So while changing the drawing model in GtkTextView we decided to make the GtkTextView gutters real widgets. Doing so means that GtkSourceGutterRenderers will be real GtkWidgets going forward and can do all sorts of neat stuff widgets can do.

But to address the speed of rendering we needed a better way to avoid walking the text btree linearly so many times while rendering the gutter. I've added a new class GtkSourceGutterLines to allow collecting information about the text buffer in one-pass. The renderers can then use that information when creating render nodes to avoid further tree scans.

I have some other plans for what I'd like to see before a 5.0 of GtkSourceView. I've already written a more memory-compact undo/redo engine for GTK's GtkTextView, GtkEntry, GtkText, and friends which allowed me to delete that code from the GtkSourceView port. Better yet, you get undo/redo in all the places you would, well, expect it.

In particular I would like to see the async+GListModel based API for completion from Builder land upstream. Builder also has a robust snippet engine which could be reusable from GtkSourceView as that is a fairly useful thing across editors. Perhaps we could extract Builder's indenter APIs and movements engine too. These are used by Builder's Vim emulation quite heavily, for example.

If you like following development of stuff I'm doing you can always get that fix here on Twitter given my blogging infrequency.

14 Jan 2020 5:17am GMT

09 Jan 2020


Kalev Lember: GNOME 3.34.3 in Fedora 31 updates-testing

Just a quick heads up that GNOME 3.34.3 just hit Fedora 31 updates-testing repo. It's a fairly small update; mostly just gnome-shell/mutter fixes and translation updates to leaf applications.

If you are a GNOME user, please install the update from updates-testing and give it a quick spin and leave karma in the feedback section at https://bodhi.fedoraproject.org/updates/FEDORA-2020-194da76ba0

Thanks!

09 Jan 2020 6:03pm GMT

08 Jan 2020


Sam Thursfield: Last month in Tracker

Here's an incomplete report of some work done on Tracker during the last month!

Bugs

Jean Felder fixed a thorny issue that was causing wrong track durations for MP3s.

Rasmus Thomsen has been testing on Alpine Linux, fixing one issue and finding several more. Alpine Linux uses musl libc instead of the more common GNU libc, which triggers bugs that we don't usually see. Finding and fixing these issues could be a great learning experience for someone who wants to dig deep into the platform!

There's an ongoing issue reported by many Ubuntu users which seems to be due to SQLite database corruption. SQLite is rather a black box to me, so I don't know how or when we might get to the bottom of why this corruption is happening.

Ubuntu CI

We now test each commit on Ubuntu as well as Fedora. This is a nice step forward. It's also triggering more intermittent failures in the CI; we've made huge progress in the last few years on bringing the CI up from zero, but there are some latent issues like these which we need to get rid of.

Tracker 3.0

Carlos has done more architectural work in the 'master' branch, working towards having a generic SPARQL store in tracker.git, and all GNOME/desktop/filesystem related code in tracker-miners.git.

As part of this, the tracker CLI tool is now split between tracker.git and tracker-miners.git (MR1, MR2).

We also moved the libtracker-control and libtracker-miner libraries into tracker-miners.git, and made the libtracker-control API private. As far as I know, the libtracker-control library is only being used by GNOME Photos to manage indexing of removable devices. We want to keep track of which apps need porting to 3.0, so please let me know if this is going to affect anything else.

New website

Tracker is famous enough that it merits a real website, not just an outdated set of wiki pages. So I made a real Tracker website, aiming to collect links to relevant user and developer documentation and to have a minimal overview and FAQ section. We can build and deploy this straight from the tracker.git repo, so whereas the wiki is easily forgotten, the new website lives in the same repo as the source code. The next step will be to merge this and then tidy up most of the old wiki pages.

08 Jan 2020 1:50pm GMT

07 Jan 2020


Guido Günther: Introducing gtherm

Continuous temperature monitoring from the kernel's /sys/class/thermal/ in an application can be cumbersome. gtherm aims to make that simpler by providing a daemon (gthd) that exports thermal zones and cooling devices over DBus, and a small library, libgtherm (with GObject introspection bindings). gthcli is a simple command line client that displays the currently found values:

Thermal Zones
-------------
      dbus path: /org/sigxcpu/Thermal/ThermalZone/0
           type: cpu-thermal
    temperature: 53,00°C
cooling devices: /org/sigxcpu/Thermal/CoolingDevice/0

      dbus path: /org/sigxcpu/Thermal/ThermalZone/3
           type: max170xx_battery
    temperature: 36,60°C

      dbus path: /org/sigxcpu/Thermal/ThermalZone/2
           type: vpu-thermal
    temperature: 54,00°C

      dbus path: /org/sigxcpu/Thermal/ThermalZone/1
           type: gpu-thermal
    temperature: 54,00°C
cooling devices: /org/sigxcpu/Thermal/CoolingDevice/1

Cooling Devices
---------------
    dbus path: /org/sigxcpu/Thermal/CoolingDevice/0
         type: thermal-idle-0
    max state: 100
current state: 0

    dbus path: /org/sigxcpu/Thermal/CoolingDevice/1
         type: 38000000.gpu
    max state: 6
current state: 0

There's support for gnome-usage in the works:

gnome-usage thermal view

Next up is support for trip points (and maybe tuning cooling behaviour from userspace later on).

07 Jan 2020 10:19am GMT

05 Jan 2020


Andrés G. Aragoneses: Introducing geewallet

Version 0.4.2.187 of geewallet has just been published to the snap store! You can install it by looking for its name in the store or by installing it from the command line with `snap install geewallet`. It features a very simplistic and minimalistic UI/UX. Nothing very fancy, especially because it has a single codebase that targets many (potential) platforms, e.g. you can also find it in the Android App Store.

What was my motivation to create geewallet in the first place, around 2 years ago? Well, I was very excited about the "global computing platform" that Ethereum was promising. At the time, I thought it would be the best replacement for Namecoin: a decentralised naming system, but not focused on just that aspect, since it brings Turing-completeness that lets you build whatever you want on top of it, not just a key-value store. So I got ahold of some ethers to play with the platform. But by then, I didn't find any wallet that I liked, especially when considering security. Most people were copy+pasting their private keys into a website (!) called MyEtherWallet. Not only was this idea terrifying (you had to trust not just the security skills of the sysadmin in charge of the domain & server, but also that the developers of the software wouldn't turn rogue…), it was even worse than using a normal hot wallet. And what I actually wanted was a cold wallet, a wallet that could run on an offline device, to make sure hacking it would be impossible (not faraday-cage-impossible, but reasonably impossible).

So there I did it, I created my own wallet.

After some weeks, I added bitcoin support to it thanks to the NBitcoin library (good work Nicholas!). After some months, I added a cross-platform UI besides the initial archaic command-line frontend. These days it looks like this:



What was my motivation to make geewallet a brain wallet? Well, at the time (and maybe still today, at least until I unveil this project), the only decent brain wallet out there that seemed sufficiently secure against brute-force attacks was WarpWallet, from the Keybase company. If you don't believe in their approach, consider that they have even placed a bounty on a decently small passphrase (so if you ever worry that this kind of wallet could be cracked, you can safely assume any cracker would target that bounty before thinking of you). The worst of it, again, was that to use it you had to go through a web interface, so you had the double-trust problem once more. Now geewallet implements the same WarpWallet seed generation algorithm (backed by unit tests, of course) in a desktop/mobile approach, so that you own the hardware where the seed is generated. No need to write down long seeds of random words on pieces of paper anymore: your mind is the limit! (And of course geewallet will warn the user in case the passphrase is too short and simple: it even detects if all the words belong to the dictionary, to deter low entropy from the human perspective.)
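
For reference, WarpWallet's published algorithm combines scrypt and PBKDF2 and XORs the results. Here is a minimal sketch in Python of that public spec; this is not geewallet's actual F# code, so treat the parameters as assumptions to verify against the spec:

import hashlib

def warpwallet_seed(passphrase: bytes, salt: bytes) -> bytes:
    # s1 = scrypt(passphrase||0x01, salt||0x01, N=2^18, r=8, p=1)
    s1 = hashlib.scrypt(passphrase + b"\x01", salt=salt + b"\x01",
                        n=2**18, r=8, p=1, dklen=32, maxmem=2**30)
    # s2 = PBKDF2-HMAC-SHA256(passphrase||0x02, salt||0x02, 2^16 iterations)
    s2 = hashlib.pbkdf2_hmac("sha256", passphrase + b"\x02",
                             salt + b"\x02", 2**16, dklen=32)
    # The resulting seed is the XOR of the two derivations.
    return bytes(a ^ b for a, b in zip(s1, s2))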

Why did I add support for Litecoin and Ethereum Classic to the wallet? First, let me tell you that bitcoin and ethereum, as technological innovations and network effects, are very difficult to beat. And in fact, I'm not a fan of the proliferation of dubious new coins/tokens portrayed as awesome, which claim to be as efficient and scalable as the first two. They would need to beat the network effect not only when it comes to users, but also developers (all the best cryptographers are working on Bitcoin and Ethereum technologies). However, Litecoin and Ethereum Classic are so similar to Bitcoin and Ethereum, respectively, that adding support for them was less than a day's work. And they are not completely irrelevant: Litecoin may bring zero-knowledge proofs in an upcoming update (plus, its fees are lower today, so it's a cheaper alternative testnet with real value); and Ethereum Classic has some inherent characteristics that may make it more decentralised than Ethereum in the long run (governance that doesn't follow any cult of personality, plus it will remain a Turing-complete platform on top of Proof of Work instead of switching to Proof of Stake; to understand why this is important, I recommend watching this video).

Another good reason why I started something like this from scratch is that I wanted to use F# in a real open source project. I had been playing with it in a personal (private) project for 2 years before starting this one, so I wanted to show the world that you can build a decent desktop app with simple and not too opinionated/academic functional programming. It reuses all the power of the .NET platform: you get debuggers, you can target mobile devices, and you get immutability by default; all three in one, in this decade, at last. (BTW, everything is written in F#, even the build scripts.)

What's the roadmap of geewallet? The most important topics I want to cover shortly are three:
With less priority:

Areas where I would love contributions from the community:

And just in case I wasn't clear:

I'm excited about the world of private-key management. I think we can do much better than what we have today: most people think of hardware wallets as unhackable cold storage, but most of them are used via USB or Bluetooth! Which means they are not actually cold storage, so software wallets with offline support (also called air-gapped) are more secure! I think that eventually these tools will even merge with other ubiquitous tools we're more familiar with today: password managers!

You can follow the project on twitter (yes I promise I will start using this platform to publish updates).

PS: If you're still not convinced about these technologies, or if you didn't understand the PoW video I posted earlier, I recommend going back to basics by watching this other video, produced by a mathematician educator, which explains it really well.


05 Jan 2020 4:31pm GMT

03 Jan 2020


Sébastien Wilmet: New essay: Trying to convince application developers to write API documentation

I've written a new short essay: Trying to convince application developers to write API documentation

I've created the Short essays page on my website. I plan to write more essays in the future, as short articles that can be read independently. Around the theme of programming best-practices. I'll inform you on my blog when I publish a new essay.

Note, it's unfortunately not written in ConTeXt (see this previous blog post), as I haven't found a text editor for ConTeXt that just works and is easy to install, with all the features I'm accustomed to when I write a LaTeX document. So I fell back to using LaTeX.

03 Jan 2020 10:27am GMT

02 Jan 2020


Ravgeet Dhillon: Celebrating GNOME Newcomers’ contributions

A few weeks ago, I sat down to solve some issues related to the GNOME Engagement team. While going through the list, I found this issue created by Umang Jain, which aimed to celebrate the contributions made by GNOME newcomers. It was opened in late 2017 and a lot of discussion happened during this period. So, I decided to take on this issue and solve it programmatically.

Problem

There is no doubt that newcomers work hard to make their first contribution to a project they don't yet know. So, it's really important to recognize and celebrate their contributions when they make one.

With GNOME being a large project, there is a need for an automated system which recognizes the contributions made by newcomers and helps the GNOME Engagement team seamlessly identify them.

The issue listed the following points to solve, but I will only consider the relevant ones.

Approach

Many GNOME people proposed their views and workarounds to tackle this problem. Taking the best cues out of each suggestion, I decided to use the GitLab API, which has all the features that can help us take on this problem effectively.

Using the GitLab API, a list of all the users (with their first ten contributions) present on the GNOME GitLab instance is fetched, along with a list of projects. The list of users is then traversed and divided into newcomers and regular contributors. This is achieved by checking when each user first contributed to a GNOME project: if that first contribution was made in the last 15 days, the contributor is categorized as a newcomer. After the newcomers are identified, they are filtered based on the type of contribution made. Currently, notable contributions are those related to merge requests and issues.
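
A rough sketch of that classification in Python, using the standard GitLab v4 REST endpoints via the requests library. This is a simplification of the real script (which may also need an access token); the 15-day window matches the description above:

import datetime as dt
import requests

API = "https://gitlab.gnome.org/api/v4"
WINDOW = dt.timedelta(days=15)

def first_events(user_id, count=10):
    # A user's earliest recorded contributions, oldest first.
    r = requests.get(f"{API}/users/{user_id}/events",
                     params={"sort": "asc", "per_page": count})
    r.raise_for_status()
    return r.json()

def is_newcomer(user_id):
    events = first_events(user_id)
    if not events:
        return False
    first = dt.datetime.fromisoformat(events[0]["created_at"].rstrip("Z"))
    return dt.datetime.utcnow() - first <= WINDOW

def notable(events):
    # Keep only merge request and issue activity.
    return [e for e in events
            if e.get("target_type") in ("MergeRequest", "Issue")]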

After going through the above procedure, a detailed report is created as a JSON file. This JSON file can be found here.

Scheduling scan

The above process is scheduled to run once a day using GitLab CI, and it takes about 5 hours to complete. Once the scan is complete, the result of the whole process is pushed back to the project repository for future use.

Resources

You can find the project here. You can also open issues and merge requests to make the project better.

Lemme know if you have any doubts, appreciation or anything else that you would like to communicate to me. You can tweet me @ravgeetdhillon. I reply to all questions as quickly as possible. And if you liked this post, please share it with your twitter community as well.

02 Jan 2020 1:00pm GMT

01 Jan 2020


Christian Hergert: Introducing Bonsai

TL;DR: Pair your Linux devices, developer APIs to share files, create object graphs with partial sync between devices, transactions, secondary indexes, rebasing, and more built upon GVariant and LMDB. Tooling to build cloudless multi-device services.

I've been spending a great deal of time thinking about what types of products I'd like to see in GNOME and what is missing to make that happen.

One observation is that I want access to my files and application data on all my computing devices, but I don't want to store that data on other people's computers. I have computers, they have internet access; I shouldn't have to use a multi-tenancy cloud if I'm running as much Free Software as I do. But if that is going to be competitive it needs to be easier than the alternatives.

But to build this I need a few fundamental layers to build applications atop. I'll need access to files using all the GIO file APIs we love (GFile, GFileEnumerator, GIOStream, etc). I'll also need the ability to read and write application data in a way that can be shared between devices which may not always be connected to my home Wi-Fi. In particular, we need to give developers great tools to make applications that natively support device synchronization.

What I've built to experiment with this all is Bonsai. It is very much an experiment at this phase but it is getting interesting enough to collaborate with others who would like to join me.

Bonsai consists of a daemon that you run on your "mostly connected" computer, although that could easily be a Raspberry Pi-class computer in your home. That computer hosts the "upstream" storage space for files and application content.

Other devices like laptops, phones, or IoT can be paired with that primary device. They communicate using TLS connections using pinned self-signed certificates with point-to-point D-Bus serialization on top. The D-Bus serialization makes it convenient to use gdbus-codegen to generate proxies and services.

One service available to devices is the storage service. It can be consumed from libbonsai-storage to allow applications to browse, create, move, modify and stream file content.

Applications are much better when they can communicate between devices. So a Data-Access-Object library, aptly named libbonsai-dao, provides serializable object storage built upon GVariant and LMDB. It supports primary and secondary indexes, queries, cursors, transactions, and incremental sync between devices. It has the ability to rebase local changes atop changes pulled from the primary Bonsai device.

That last bit is neat because it means that if an application is running on two devices which create new content, they don't clobber each other's history.
The primary issue here is dealing with merge conflicts, but libbonsai-dao provides some design for data objects to do the right thing.

Bonsai could also serve as a base to build interesting services like backup, VPN, media sharing and casting, news, notes, calendars, contacts, and more. But honestly, it can only do that if people are actually interested in something like this. If so, let me know and see if you can lend your time or ideas for what you'd want this to become.

01 Jan 2020 9:56pm GMT

31 Dec 2019


Joaquim Rocha: Wrapping Up 2019

It's the last night of the year and the decade, and here is the mandatory End of Year's post.

Family

This year was without a doubt the most difficult in my (still young) life. Things were shaping up to be a great year at the beginning: there were big plans for the Hack project I was working on at Endless with my colleagues, and my wife Helena was going to start an illustration course after our son finally started at the kindergarten (in Germany it's common for kids to enter it when they're already 2 years old…), besides other personal projects we were preparing.
However, during a visit to the dentist to check something that was bothering her, my wife ended up being diagnosed with mouth cancer.
As would happen to anyone, the news really shook us and made us go through all the common wonderings of why such a thing would happen to someone who has no family history of such illnesses, doesn't drink, doesn't smoke, etc.

Still, this is a positive post! Everything moved very quickly and neatly on the doctors' side after the diagnosis. The tests and surgery happened as fast as they could possibly be done, and since it was apparently diagnosed at a very early stage, Helena "only" needed two surgeries and no aggressive treatments.
In the end, we are very thankful to all the doctors, nurses, and staff. It couldn't have been better, from the great quality of the services, to the friendliness of the people involved. A big and honest thank you again to the great people who dealt with us at Berlin's Unfallkrankenhaus.

We are extremely lucky to have universal healthcare coverage. Besides the normal (and public) insurance we have, we only had to pay very small extra costs that are practically negligible. I cannot imagine having to worry about the sickness and also about the costs of treatments.

Being away from our family when this happened also made it more difficult, as we had to juggle the hospital trips with taking care of our son (who was not yet in the kindergarten when this started) and my work. On the work side, I need to thank Cosimo and Endless, who made it clear I'd have all the time I needed to organize things on my side; that was extremely important. And we also need to thank our neighbor Ilka, who took care of our son a few times while we were both away. Of course, many more people offered their support, and we had Helena's mother over for a couple of weeks for the second surgery. All the support and nice words were important, and we're grateful to have such great people in our lives.

One last thing to end this subject: I really need to emphasize Helena's attitude towards her situation. We have been together for a long time, and I knew she was a positive person, but her positive attitude in the face of such a serious case was mind-blowing even to the doctors (one even said "Do you know what this means? …. Yes? Okay, this is weird, I have never had anyone behaving like this after the news…"). I feel like the drama was all mine and she had to comfort me, even though she was the one who had to endure the initial uncertainty, the surgeries, the recovery…
After so much time together and so many experiences we have shared, this problem made me admire even more the person I love. I hope our kids get that attitude to life and not my traditional-and-very-Portuguese fatalism 🙂

Work

On the work side things also had a twist. At about the same time Helena was having her second surgery, my work at Endless was about to change too, and I joined Kinvolk for a temporary position, as explained in this post, since I wasn't sure about mixing friendship and work.
Well, it turns out that I liked the work, the people, and the possibilities at Kinvolk so much that (in November) I accepted the proposal to make it permanent!

Technically, coming from the Linux desktop world, it felt "foreign" to take over a Go + React project like Nebraska, but I already feel very comfortable with this "ecosystem".

I am genuinely excited about what is coming from Kinvolk, and I will keep working on the company's existing and new products. We are also looking for great people to help deliver great & 100% Open Source solutions, so check out our open positions.

Community

About GNOME/community work: it's difficult to find the time and energy to do anything tech-related outside of work, so I cannot realistically think I will be an active contributor in my spare time.
Still, I keep my eye on, and interest in, the GNOME and flatpak communities. Last year (2018) I "flatpaked" two old games (noiz2sa and rRootage) and added them to Flathub, and now I am in the process of getting Robocode into Flathub (more on that soon).

That's it!

And that's all for this year's wrap-up! Despite a very difficult situation, we end the decade feeling very happy and fortunate. I wish everybody a great new decade! Love.

31 Dec 2019 9:55pm GMT