20 Mar 2018

Planet GNOME

Sebastian Dröge: GStreamer Rust bindings 0.11 / plugin writing infrastructure 0.2 release

Following the GStreamer 1.14 release and the new round of gtk-rs releases, there are also new releases for the GStreamer Rust bindings (0.11) and the plugin writing infrastructure (0.2).

Thanks also to all the contributors for making these releases happen and adding lots of valuable changes and API additions.

GStreamer Rust Bindings

The main changes in the Rust bindings were the update to GStreamer 1.14 (which brings in quite some new API, like GstPromise), a couple of API additions (GstBufferPool specifically) and the addition of the GstRtspServer and GstPbutils crates. The former allows writing a full RTSP server in a couple of lines of code (with lots of potential for customizations), the latter provides access to the GstDiscoverer helper object that allows inspecting files and streams for their container format, codecs, tags and all kinds of other metadata.

The GstPbutils crate will also get other features added in the near future, like encoding profile bindings to allow using the encodebin GStreamer element (a helper element for automatically selecting/configuring encoders and muxers) from Rust.

But the biggest change, in my opinion, is some refactoring that was done to the Event, Message and Query APIs. Previously you would have to use a view on a newly created query to be able to use the type-specific functions on it:

let mut q = gst::Query::new_position(gst::Format::Time);
let pos = if pipeline.query(q.get_mut().unwrap()) {
    match q.view() {
        QueryView::Position(ref p) => Some(p.get_result()),
        _ => None,
    }
} else {
    None
};

Now you can directly use the type-specific functions on a newly created query:

let mut q = gst::Query::new_position(gst::Format::Time);
let pos = if pipeline.query(&mut q) {
    // the type-specific API is available directly on the query
    Some(q.get_result())
} else {
    None
};

In addition, the views can now dereference directly to the event/message/query itself and provide access to their API, which simplifies some code even more.

Plugin Writing Infrastructure

While the plugin writing infrastructure did not see that many changes apart from a couple of bugfixes and updating to the new versions of everything else, this does not mean that development on it has stalled. Quite the opposite: the existing code already works very well, and there was simply no need to add anything new for the projects that I and others built on top of it; most of the required API additions were in the GStreamer bindings.

So the status here is the same as last time, get started writing GStreamer plugins in Rust. It works well!

20 Mar 2018 11:52am GMT

19 Mar 2018


Carlos Soriano: GitLab + Flatpak – GNOME’s full flow

In this post I will explain how GitLab, CI, Flatpak and GNOME apps come together into, in my opinion, a dream-come-true full flow for GNOME, a proposal to be implemented by all GNOME apps.

Needless to say, I enjoy seeing a plan that involves several moving pieces from different initiatives and people come together into something bigger. I definitely had a good time ✌.

Generated Flatpak for every work in progress

The biggest news: from now on, designers, testers and curious people can install any work in progress (a.k.a. a merge request) in an automated way, with a simple click and a few minutes of waiting. With the integrated GitLab CI, we now generate a Flatpak file for every merge request in Nautilus!

In case you are not familiar with Flatpak, this technology allows anyone using different Linux distributions to install an application that will use exactly the same environment as the developers are using, providing a seamless synchronized experience.

For example, do you want to try out the recent work done by Nikita that makes Nautilus views distribute the space between icons? Simply click here or download the artifacts of any merge request pipeline. It's also possible to browse other artifacts, like build and test logs:


Notes: Due to a recent bug in Software you might need to install the 3.28 Flatpak Platform & Sdk manually; this usually happens automatically. In the meantime, install the current master development Flatpak of Nautilus with a single click here. On Ubuntu you might need to install Flatpak first.

Parallel installation

Now, a way to quickly test the latest work in progress in Nautilus is a considerable improvement, but a user probably doesn't want to mess with the system installation of Nautilus or other GNOME projects, especially since it's a system component. So we have worked on a way to make a full parallel installation and fully parallel run of Nautilus versions possible alongside the system installation. We have also provided support for this setup in the UI to make it easily recognizable and to ensure the user is not confused about which version of Nautilus they are looking at. This is how it looks after installing any of the Flatpak files mentioned above:

[Screenshot: the system Nautilus and the developer preview running side by side]

We can see the Nautilus system installation and the developer preview running at the same time; the unstable version has a blue header bar and an icon with gears. As a side note, you can also see the work of Nikita I mentioned before: the developer version of the views now distributes the space between icons.

It's possible to install more versions and run them all at the same time. You can see here how the different installed versions show up in the GNOME Shell search, where I also have the stable Flatpak Nautilus installed:

[Screenshot: GNOME Shell search showing several installed Nautilus versions]

Another positive note is that this also removes the need to close the system instance of the app when contributing to GNOME, which was one of the most commonly reported sources of confusion in our newcomers guide.

Issue templates

One of the biggest difficulties we have with people reporting issues is that they either have an outdated application, an application modified downstream, or an environment completely different from the one the developers are using, making the experience difficult and frustrating for both the reporter and the developer. Needless to say, all of us have had to deal with 'worksforme' issues…

With Flatpak, GitLab and the work explained before, we can fix this and considerably boost our success with bugs.

We have created a "bug" template where reporters are instructed to download the Flatpaked application in order to test and reproduce the issue in the exact same environment and version that the developers, testers, and everyone else involved are using. Here's part of what the issue template looks like:


When created, the issue renders as:


Which is considerably clearer.

Notes: The plan is to provide the stable app Flatpak too.

Full continuous integration

The last step to close this plan is to make sure that GNOME projects build in all the major distributions. After all, most of us work both upstream in GNOME and downstream in a Linux distribution. For that, we have set up a full array of builds that runs weekly:


This also fixes another issue we have experienced for years: distribution packagers delivering some GNOME applications built differently than intended, causing subtle but sometimes also major issues. Now we can point to this graph, which contains the commands to build the application, as exact documentation on how to package GNOME projects, directly from the maintainer.

'How to' for GNOME maintainers

For the full CI and Flatpak file generation, take a look at the Nautilus GitLab CI. For the cross-distro weekly array, additionally create a scheduled pipeline like this. It's also possible to run the weekly array of CI more often; however, keep in mind that resources are limited and that the important part is that every MR is buildable and that the tests pass. Otherwise it can be confusing to contributors if the pipeline fails for one of the jobs and not for others. For non-app projects, you can pick a single distribution you are comfortable with; other ideas are welcome.

A more complex CI is possible; take a look at the magic work of Jordan Petridis in librsvg. I heard Jordan will soon write a blog post about more CI magic, which will be interesting to read.

For parallel installation, it's mainly this MR for master and this commit for the stable version; however, there have been a couple of commits on top of each, so follow them up to today's date (19-03-2018).

For issue templates, take a look at the templates folder. We were discussing here a default template to be used for GNOME projects; however, there was not much input, so for now I thought it better to experiment with this in Nautilus. Also, this will make more sense once we can put a default template in place, which is something GitLab will probably work on soon.

Finishing up…

Over the last 4 days, Ernestas Kulik, Jordan Petridis, and I have been working to time-box this effort and come up with a complete proposal by today, each of us working on a part of the plan, and I think we can say we achieved it. Alex Larsson and other people around in #flatpak provided us with valuable help. Work by Florian Mullner and Christian Hergert was an inspiration for us too. Andrea Veri and Javier Jardon put a considerable amount of their time into setting up an AWS instance for CI so we can have fast builds. Big thanks to all of them.

As you may guess, this CI setup for an organization like GNOME, with more than 500 projects, is quite resource-consuming. The good news is that we have some help from sponsors coming; many thanks to them! Stay tuned for the announcements.

Hope you like the direction GNOME is going. For me it's exciting to modernize how GNOME development happens and make it more dynamic; I can see we have come a long way in the last year. If you have any thoughts, comments or ideas, let any of us know!


19 Mar 2018 11:37pm GMT

Philippe Normand: GStreamer’s playbin3 overview for application developers

Multimedia applications based on GStreamer usually handle playback with the playbin element. I recently added support for playbin3 in WebKit. This post aims to document the changes needed on the application side to support this new-generation flavour of playbin.

So, first off, why is it named playbin3 anyway? The GStreamer 0.10.x series had a playbin element, but a first rewrite (playbin2) made it obsolete in the GStreamer 1.x series. So playbin2 was renamed to playbin. That's why a second rewrite is nicknamed playbin3, I suppose :)

Why should you care about playbin3? Playbin3 (and the elements it's using internally: parsebin, decodebin3, uridecodebin3 among others) is the result of a deep re-design of playbin2 (along with decodebin2 and uridecodebin) to better support:

This work was carried out mostly by Edward Hervey, who presented it in detail at 3 GStreamer conferences. If you want to learn more about this and the internals of playbin3, make sure to watch his awesome presentations at the 2015 gst-conf, 2016 gst-conf and 2017 gst-conf.

Playbin3 was added in GStreamer 1.10. It is still considered experimental but in my experience it works already very well. Just keep in mind you should use at least the latest GStreamer 1.12 (or even the upcoming 1.14) release before reporting any issue in Bugzilla. Playbin3 is not a drop-in replacement for playbin, both elements share only a sub-set of GObject properties and signals. However, if you don't want to modify your application source code just yet, it's very easy to try playbin3 anyway:

$ USE_PLAYBIN3=1 my-playbin-based-app

Setting the USE_PLAYBIN3 environment variable enables a code path inside the GStreamer playback plugin which swaps the playbin element for the playbin3 element. This trick provides a glimpse of the playbin3 element for the laziest people :) The problem is that, depending on your use of playbin, you might get runtime warnings; here's an example with the Totem player:

$ USE_PLAYBIN3=1 totem ~/Videos/Agent327.mp4
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-audio'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-text'
sys:1: Warning: ../../../../gobject/gsignal.c:3492: signal name 'get-video-pad' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

As mentioned previously, playbin and playbin3 don't share the same set of GObject properties and signals, so some changes in your application are required in order to use playbin3.

If your application is based on the GstPlayer library then you should set the GST_PLAYER_USE_PLAYBIN3 environment variable. GstPlayer already handles both playbin and playbin3, so no changes needed in your application if you use GstPlayer!

Ok, so what if your application relies directly on playbin? Some changes are needed! If you previously used playbin's stream selection properties and signals, you will now need to handle the GstStream and GstStreamCollection APIs. Playbin3 will emit a stream collection message on the bus, which is very nice because the collection includes information (metadata!) about the streams (or tracks) the media asset contains. In playbin this was handled with a bunch of signals (audio-tags-changed, audio-changed, etc), properties (n-audio, n-video, etc) and action signals (get-audio-tags, get-audio-pad, etc). The new GstStream API provides a centralized and non-playbin-specific access point for all this information. To select streams with playbin3 you now need to send a select-streams event so that the demuxer knows exactly which streams should be exposed to downstream elements. That means potentially improved performance! Once playbin3 has completed the stream selection it will emit a streams-selected message; the application should handle this message and potentially update its internal state about the selected streams. This is also the best moment to update your UI regarding the selected streams (like audio track language, video track dimensions, etc).
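To make that flow concrete, here is a minimal sketch in C (my own illustration, not code from the post or gst-play-1.0) of a bus callback that parses the stream collection, keeps every audio and video stream via a select-streams event, and reacts to the streams-selected message:

static gboolean
bus_message_cb (GstBus *bus, GstMessage *message, gpointer user_data)
{
  GstElement *playbin3 = GST_ELEMENT (user_data);

  switch (GST_MESSAGE_TYPE (message)) {
    case GST_MESSAGE_STREAM_COLLECTION: {
      GstStreamCollection *collection = NULL;
      GList *selected = NULL;
      guint i;

      gst_message_parse_stream_collection (message, &collection);

      /* Inspect the streams (tracks); here we simply keep every
       * audio and video stream the collection advertises. */
      for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
        GstStream *stream = gst_stream_collection_get_stream (collection, i);

        if (gst_stream_get_stream_type (stream) &
            (GST_STREAM_TYPE_AUDIO | GST_STREAM_TYPE_VIDEO))
          selected = g_list_append (selected,
              (gchar *) gst_stream_get_stream_id (stream));
      }

      /* Tell the demuxer which streams to expose downstream. */
      gst_element_send_event (playbin3,
          gst_event_new_select_streams (selected));

      g_list_free (selected);
      gst_object_unref (collection);
      break;
    }
    case GST_MESSAGE_STREAMS_SELECTED:
      /* The selection took effect: update internal state and the UI. */
      break;
    default:
      break;
  }

  return TRUE;
}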

Another small difference between playbin and playbin3 concerns the source element setup. In playbin there is a read-only source GObject property and a source-setup GObject signal. In playbin3 only the latter is available, so your application should rely on the source-setup signal instead of the notify::source GObject signal.
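For instance, a minimal sketch of the source-setup side (assuming a playbin3 element in a variable named playbin; the souphttpsrc "user-agent" property is just an example):

/* The callback receives the freshly created source element before it
 * starts, e.g. to set properties such as "user-agent" on souphttpsrc. */
static void
source_setup_cb (GstElement *playbin, GstElement *source, gpointer user_data)
{
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "user-agent"))
    g_object_set (source, "user-agent", "my-app/1.0", NULL);
}

/* During pipeline setup: */
g_signal_connect (playbin, "source-setup",
                  G_CALLBACK (source_setup_cb), NULL);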

The gst-play-1.0 playback utility program already supports playbin3 so it provides a good source of inspiration if you consider porting your application to playbin3. As mentioned at the beginning of this post, WebKit also now supports playbin3, however it needs to be enabled at build time using the CMake -DUSE_GSTREAMER_PLAYBIN3=ON option. This feature is not part of the WebKitGTK+ 2.20 series but should be shipped in 2.22. As a final note I wanted to acknowledge my favorite worker-owned coop Igalia for allowing me to work on this WebKit feature and also our friends over at Centricular for all the quality work on playbin3.

19 Mar 2018 7:13am GMT

18 Mar 2018


Benjamin Otte: textures and paintables

With GTK4, we've been trying to find a better solution for image data. In GTK3 the objects we used for this were pixbufs and Cairo surfaces. But they don't fit the bill anymore, so now we have GdkTexture and GdkPaintable.


GdkTexture is the replacement for GdkPixbuf. Why is it better?
For a start, it is a lot simpler. The API looks like this:

int gdk_texture_get_width (GdkTexture *texture);
int gdk_texture_get_height (GdkTexture *texture);

void gdk_texture_download (GdkTexture *texture,
                           guchar     *data,
                           gsize       stride);

So it is a 2D pixel array and if you want to, you can download the pixels. It is also guaranteed immutable, so the pixels will never change. Lots of constructors exist to create textures from files, resources, data or pixbufs.

But the biggest difference between textures and pixbufs is that they don't expose the memory that they use to store the pixels. In fact, before gdk_texture_download() is called, that data doesn't even need to exist.
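As a small illustration, loading a texture and downloading its pixels might look like this (a sketch; gdk_texture_new_from_filename() and the 4-bytes-per-pixel stride match the current GTK4 API as far as I know, but treat the details as assumptions):

/* Download-on-demand: the pixel data only needs to exist once
 * gdk_texture_download() is actually called. */
GdkTexture *texture = gdk_texture_new_from_filename ("photo.png", NULL);
int width = gdk_texture_get_width (texture);
int height = gdk_texture_get_height (texture);

gsize stride = width * 4; /* the download format is 4 bytes per pixel */
guchar *data = g_malloc (stride * (gsize) height);
gdk_texture_download (texture, data, stride);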
And this is used by the GL texture: the GtkGLArea widget for example uses this method to pass data around, and GStreamer is expected to pass video in the form of GL textures, too.


But sometimes, you have something more complex than an immutable bunch of pixels. For example you could have an animated GIF or a scalable SVG. That's where GdkPaintable comes in.
In abstract terms, GdkPaintable is an interface for objects that know how to render themselves at any size. Inspired by CSS images, they can optionally provide intrinsic sizing information that GTK widgets can use to place them.
So the core of the GdkPaintable interface is the function that makes the paintable render itself and the 3 functions that provide sizing information:

void gdk_paintable_snapshot (GdkPaintable *paintable,
                             GdkSnapshot  *snapshot,
                             double        width,
                             double        height);

int gdk_paintable_get_intrinsic_width (GdkPaintable *paintable);
int gdk_paintable_get_intrinsic_height (GdkPaintable *paintable);
double gdk_paintable_get_intrinsic_aspect_ratio (GdkPaintable *paintable);

On top of that, the paintable can emit the "invalidate-contents" and "invalidate-size" signals when its contents or size changes.

To make this more concrete, let's take a scalable SVG as an example: the paintable implementation would return no intrinsic size (the return value 0 for those sizing functions achieves that) and whenever it is drawn, it would draw itself pixel-exact at the given size.
Or take the example of the animated GIF: it would provide its pixel size as its intrinsic size and draw the current frame of the animation scaled to the given size. And whenever the next frame of the animation should be displayed, it would emit the "invalidate-contents" signal.
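To make this concrete in code, here's a minimal sketch of such a paintable; it renders a solid color at any requested size and has no intrinsic size. It assumes the final GTK4 API names, and SolidPaintable is a hypothetical type, not anything that exists in GTK:

#include <gtk/gtk.h>

#define SOLID_TYPE_PAINTABLE (solid_paintable_get_type ())
G_DECLARE_FINAL_TYPE (SolidPaintable, solid_paintable, SOLID, PAINTABLE, GObject)

struct _SolidPaintable {
  GObject parent_instance;
  GdkRGBA color;
};

static void
solid_paintable_snapshot (GdkPaintable *paintable,
                          GdkSnapshot  *snapshot,
                          double        width,
                          double        height)
{
  SolidPaintable *self = SOLID_PAINTABLE (paintable);

  /* Render pixel-exact at whatever size we are given. */
  gtk_snapshot_append_color (GTK_SNAPSHOT (snapshot), &self->color,
                             &GRAPHENE_RECT_INIT (0, 0, width, height));
}

static void
solid_paintable_iface_init (GdkPaintableInterface *iface)
{
  /* No intrinsic size: the default sizing implementations return 0,
   * so only snapshot() needs to be provided. */
  iface->snapshot = solid_paintable_snapshot;
}

G_DEFINE_TYPE_WITH_CODE (SolidPaintable, solid_paintable, G_TYPE_OBJECT,
                         G_IMPLEMENT_INTERFACE (GDK_TYPE_PAINTABLE,
                                                solid_paintable_iface_init))

static void solid_paintable_class_init (SolidPaintableClass *klass) { }
static void solid_paintable_init (SolidPaintable *self) { }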
And last but not least, GdkTexture implements this interface.

We're currently in the process of changing all the code that in GTK3 accepted GdkPixbuf to now accept GdkPaintable. The GtkImage widget of course has been changed already, as have the drag'n'drop icons or GtkAboutDialog. Experimental patches exist to let applications provide paintables to the GTK CSS engine.

And if you now put together all this information about GStreamer potentially providing textures backed by GL images and creating paintables that do animations that can then be hooked up to CSS, you can maybe see where this is going.

18 Mar 2018 5:50pm GMT

Jens Georg: On the way to 0.28

Shotwell 0.28 "Braunschweig" is out.

Shotwell 0.28 about box

Half a year later than I was expecting it to be, sorry. This release fixes 60 bugs! Get it at GNOME's download server, from Git, or in the Shotwell PPA really soon™. A big thank you to all the contributors who put together all the bits and pieces for such a release.

Notable features:

Things we have lost:

18 Mar 2018 10:21am GMT

Philippe Normand: Web Engines Hackfest 2014

Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (also hosting the event), Adobe and Collabora.

As usual I spent most of the time working on the WebKitGTK+ GStreamer backend, and Sebastian Dröge kindly joined and helped out quite a bit; make sure to read his post about the event!

We first worked on the WebAudio GStreamer backend, Sebastian cleaned up various parts of the code, including the playback pipeline and the source element we use to bridge the WebCore AudioBus with the playback pipeline. On my side I finished the AudioSourceProvider patch that was abandoned for a few months (years) in Bugzilla. It's an interesting feature to have so that web apps can use the WebAudio API with raw audio coming from Media elements.

I also hacked on GstGL support for video rendering. It's quite interesting to be able to share the GL context of WebKit with GStreamer! The patch is not ready yet for landing, but thanks to the reviews from Sebastian, Matthew Waters and Julien Isorce I'll improve it and hopefully commit it soon in WebKit ToT.

Sebastian also worked on Media Source Extensions support. We had a very basic, non-working backend that required… a rewrite, basically :) I hope we will have this reworked backend in trunk soon. Sebastian already has it working on YouTube!

The event was interesting in general, with discussions about rendering engines, rendering and JavaScript.

18 Mar 2018 9:18am GMT

Patrick Griffis: Flatpaking application plugins

Sometimes you simply do not want to bundle everything in a single package, for example optional plugins with large dependencies, or third-party plugins that are not supported. In this post I'll show you how to handle this with Flatpak, using HexChat as an example.

Flatpak has a feature called extensions that allows a package to be mounted within another package. This is used in a variety of ways, but it can be used by any application as a way to insert any optional bits. So let's see how to define one (details omitted for brevity):

  "app-id": "io.github.Hexchat",
  "add-extensions": {
    "io.github.Hexchat.Plugin": {
      "version": "2",
      "directory": "extensions",
      "add-ld-path": "lib",
      "merge-dirs": "lib/hexchat/plugins",
      "subdirectories": true,
      "no-autodownload": true,
      "autodelete": true
  "modules": [
      "name": "hexchat",
      "post-install": [
       "install -d /app/extensions"

The exact details of these are best documented in the Extension section of man flatpak-metadata but I'll go over the ones used here:

So now that we've defined an extension point, let's make an extension:

  "id": "io.github.Hexchat.Plugin.Perl",
  "branch": "2",
  "runtime": "io.github.Hexchat",
  "runtime-version": "stable",
  "sdk": "org.gnome.Sdk//3.26",
  "build-extension": true,
  "separate-locales": false,
  "appstream-compose": false,
  "build-options": {
    "prefix": "/app/extensions/Perl",
    "env": {
      "PATH": "/app/extensions/Perl/bin:/app/bin:/usr/bin"
  "modules": [
      "name": "perl"
      "name": "hexchat-perl",
      "post-install": [
        "install -Dm644 plugins/perl/perl.so ${FLATPAK_DEST}/lib/hexchat/plugins/perl.so",
        "install -Dm644 --target-directory=${FLATPAK_DEST}/share/metainfo data/misc/io.github.Hexchat.Plugin.Perl.metainfo.xml",
        "appstream-compose --basename=io.github.Hexchat.Plugin.Perl --prefix=${FLATPAK_DEST} --origin=flatpak io.github.Hexchat.Plugin.Perl"

So again, going over some key points quickly: id has the correct prefix, branch refers to the extension version, build-extension should be obvious, and runtime is what defines the extension point. Some less obvious things to note are that your extension's prefix will not be in $PATH or $PKG_CONFIG_PATH by default, so you may need to set them (see build-options in man flatpak-manifest), and that $FLATPAK_DEST is defined as your extension's prefix, though not everything expands variables.

While not required, you also should install appstream metainfo for easy discoverability. For example:

<?xml version="1.0" encoding="UTF-8"?>
<component type="addon">
  <name>Perl Plugin</name>
  <summary>Provides a scripting interface in Perl</summary>
  <url type="homepage">https://hexchat.github.io/</url>

Which will be shown in GNOME Software:


18 Mar 2018 4:00am GMT

16 Mar 2018


Matthias Clasen: Fedora Atomic Workstation: Ruling the commandline

In my recent posts, I've mostly focused on finding my way around with GNOME Builder and using it to do development in Flatpak sandboxes. But I am not really the easiest target audience for an IDE like GNOME Builder, having spent most of my life on the commandline with tools like vim and make.

So, what about the commandline in an Atomic Workstation environment? There are many container tools, like buildah, atomic, oc, podman, and so on. I am not going to talk about these, since I don't know them very well, and they are covered, e.g. on www.projectatomic.io.

But there are a few commands that are essential to life on the Atomic Workstation: rpm-ostree and flatpak.


First of all, there's rpm-ostree, which is the commandline frontend to the rpm-ostreed daemon that manages the OS image(s) on the Atomic Workstation.

You can run

rpm-ostree status

to get some information about your OS image (and the other images that may be present on your system). And you can run

rpm-ostree upgrade

to get the latest update for your OS image (the terminology clash here is a bit unfortunate; rpm-ostree calls an upgrade what most Linux distros and packaging tools call an update).

You can run this command as a normal user in a terminal, and rpm-ostreed will present you with a polkit dialog to do its privileged operations. Recently, rpm-ostreed has also gained the ability to check for and deploy upgrades automatically.

An important thing to keep in mind is that rpm-ostree never changes your running system. You have to reboot into the new image to see the changes, so

systemctl reboot

should be in your repertoire of commands as well. Alternatively, you can use the --reboot option to tell rpm-ostree to reboot when the upgrade command completes.


The other essential command is flatpak. Where rpm-ostree controls your OS image, flatpak rules the applications. flatpak has many commands that are worth exploring; I'll only mention the most important ones here.

It is quite common to have more than one source for flatpaks enabled.

flatpak remotes

lists them all. If you want to find applications, then

flatpak search

will do that for you, and

flatpak install

will let you install what you found. An important detail to point out here is that applications can be installed system-wide (in /var) or per-user (in ~/.local/share). You can choose the location with the --user and --system options. If you choose to install system-wide, you will get a polkit prompt, since this is a privileged operation.

After installing applications, you should keep them up-to-date by installing updates. The most straightforward way to do so is to just run

flatpak update

which will install available updates for all applications. To just check if updates are available, you can use

flatpak remote-ls --updates

Launching applications

Probably the most important thing you will want to do with flatpak is to run applications. Unsurprisingly, the command to do so is called run, and it expects you to specify the unique application ID:

flatpak run org.gnome.gitg

This is certainly a departure from the traditional commandline, and could be considered cumbersome (even though it has bash completion for the application ID).

Thankfully, flatpak has recently gained a way to recover the familiar interface. It now installs shell wrappers for the flatpak run command in ~/.local/share/flatpak/bin. After adding that directory to your PATH, you can run gitg like this:
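
org.gnome.gitg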


If (like me) you are still not satisfied with this, you can add a shell alias to get the traditional command name back:

alias gitg=org.gnome.gitg

Now gitg works again, as it used to. Nice!

16 Mar 2018 6:51pm GMT

15 Mar 2018


Emmanuele Bassi: pkg-config and paths

This is something of a frequently asked question, as it comes up every once in a while. The pkg-config documentation is fairly terse, and even pkgconf hasn't improved on that.

The problem

Let's assume you maintain a project that has a dependency using pkg-config.

Let's also assume that the project you are depending on loads some files from a system path, and your project plans to install some files under that path.

The questions are:

The answer to both questions is: by using variables in the pkg-config file. Sadly, there's still some confusion as to how those variables work, so this is my attempt at clarifying the issue.

Defining variables in pkg-config files

The typical preamble stanza of a pkg-config file is something like this:
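
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include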


Each variable can reference other variables; for instance, in the example above, all the other directories are relative to the prefix variable.

Those variables can be extracted via pkg-config itself:

$ pkg-config --variable=includedir project-a

As you can see, the --variable command line argument will automatically expand the ${prefix} token with the content of the prefix variable.

Of course, you can define any and all variables inside your own pkg-config file; for instance, this is the definition of the giomoduledir variable inside the gio-2.0 pkg-config file:
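
giomoduledir=${libdir}/gio/modules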




This way, the giomoduledir variable will be expanded to /usr/lib/gio/modules when asking for it.

If you are defining a path inside your project's pkg-config file, always make sure you're using a relative path!

We're going to see why this is important in the next section.

Using variables from pkg-config files

Now, this is where things get complicated.

As I said above, pkg-config will expand the variables using the definitions coming from the pkg-config file; so, in the example above, getting the giomoduledir will use the prefix provided by the gio-2.0 pkg-config file, which is the prefix into which GIO was installed. This is all well and good if you just want to know where GIO installed its own modules, in the same way you want to know where its headers are installed, or where the library is located.

What happens, though, if your own project needs to install GIO modules in a shared location? More importantly, what happens if you're building your project in a separate prefix?

If you're thinking: "I should install it into the same location as specified by the GIO pkg-config file", think again. What happens if you are building against the system's GIO library? The prefix into which it has been installed is only going to be accessible by the administrator user; or it could be on a read-only volume, managed by libostree, so sudo won't save you.

Since you're using a separate prefix, you really want to install the files provided by your project under the prefix used to configure your project. That does require knowing all the possible paths used by your dependencies, hard coding them into your own project, and ensuring that they never change.

This is clearly not great, and it places additional burdens on your role as a maintainer.

The correct solution is to tell pkg-config to expand variables using your own values:

$ pkg-config \
> --define-variable=prefix=/your/prefix \
> --variable=giomoduledir \
> gio-2.0

This lets you rely on the paths as defined by your dependencies, and does not attempt to install files in locations you don't have access to.

Build systems

How does this work, in practice, when building your own software?

If you're using Meson, you can use the get_pkgconfig_variable() method of the dependency object, making sure to replace variables:

gio_dep = dependency('gio-2.0')
giomoduledir = gio_dep.get_pkgconfig_variable(
  'giomoduledir',
  define_variable: [ 'libdir', get_option('libdir') ],
)
This is the equivalent of the --define-variable/--variable command line arguments.

If you are using Autotools, sadly, the PKG_CHECK_VAR m4 macro won't be able to help you, because it does not allow you to expand variables. This means you'll have to deal with it the old-fashioned way:

giomoduledir=`$PKG_CONFIG --define-variable=libdir=$libdir --variable=giomoduledir gio-2.0`

Which is annoying, and yet another reason why you should move off from Autotools and to Meson. 😃


All of this, of course, works only if paths are expressed as locations relative to other variables. If that does not happen, you're going to have a bad time. You'll still get the variable as requested, but you won't be able to make it relative to your prefix.

If you maintain a project with paths expressed as variables in your pkg-config file, check them now, and make them relative to existing variables, like prefix, libdir, or datadir.

If you're using Meson to generate your pkg-config file, make sure that the paths are relative to other variables, and file bugs if they aren't.

15 Mar 2018 4:45pm GMT

14 Mar 2018


Bastian Ilsø Hougaard: Reflections on the GNOME 3.28 Release Video

I just flipped the switch for the 3.28 Release Video. I'm really excited for all the new awesome features the community has landed, but I am a bit sad that I don't have time to put more effort into the video this time around. A busy time schedule collided with technical difficulties in recording some of the apps. When I was staring at my weekly schedule Monday there didn't seem much chance for a release video to be published at all..

However, in the midst of all that I decided to take this up as a challenge and see what I could come up with given the 2-3 days time. In the end, I identified some time/energy demanding issues I need to find solutions to:

  1. Building GNOME Apps before release and recording them is painful and prone to error and frustration. I hit errors when upgrading Fedora Rawhide, and even after updating many apps were not on the latest version. Flatpak applications are fortunately super easy to deal with for me, but not all applications are available as flatpaks. And ideally I will need to setup a completely clean environment since many apps draw on content in the home folder. Also, currently I need to post-process all the raw material to get the transparent window films.
  2. I ran out of (8GB) memory several times, and it's almost faster to hold the power button down and boot again than to wait for Linux memory handling to deal with it. I will definitely need to find a solution to this - it builds up a lot of frustration for me.

I am already working on a strategy for the first problem. A few awesome developers have helped me record some of the apps in the past and this has been really helpful to deal with this. I'm trying to make a list of contacts I need to get in touch with to get these recordings done, and I need to send out emails in time with the freezes in the release cycle. It makes my work and the musician's work much easier if we know exactly what will go in the video and for how long. I also had a chat with Felipe about maybe making a gnome shell extension tool which could take care of setting wallpaper, recording in the right resolution and uploading to a repository somewhere. As for the second problem, I think I'm going to need a new laptop or upgrade my current one. I definitely have motivation to look into that based on this experience now, hehe..

"Do you have time for the next release video?" You might ask and that is a valid question. I don't see the problem to be time, but more a problem of spending my contribution energy effectively. I really like making these videos - but mainly the animation and video editing parts of it. Building apps, working around errors and bugs, post-processing and all that just to get the recording assets I need, that's the part that I currently feel takes up the most of my contribution energy. If I can minimize that, I think I will have much more creative energy to spend on the video itself. Honestly, all the awesome contributions in our GNOME Apps and components really deserve that much extra polish.

Thanks everyone for helping with the video this time around!

14 Mar 2018 11:36pm GMT

Nirbheek Chauhan: Latency in Digital Audio

We've come a long way since Alexander Graham Bell, and everything's turned digital.

Compared to analog audio, digital audio processing is extremely versatile, is much easier to design and implement than analog processing, and also adds effectively zero noise along the way. With rising computing power and dropping costs, every operating system has had drivers, engines, and libraries to record, process, playback, transmit, and store audio for over 20 years.

Today we'll talk about some of the differences between analog and digital audio, and how the widespread use of digital audio adds a new challenge: latency.

Analog vs Digital

Analog data flows like water through an empty pipe. You open the tap, and the time it takes for the first drop of water to reach you is the latency. When analog audio is transmitted through, say, an RCA cable, the transmission happens at the speed of electricity and your latency is:

wire length/speed of electricity

This number is ridiculously small, especially when compared to the speed of sound. An electrical signal takes 0.001 milliseconds to travel 300 metres (984 feet). Sound takes 874 milliseconds (almost a second).

All analog effects and filters obey similar equations. If you're using, say, an analog pedal with an electric guitar, the signal is transformed continuously by an electrical circuit, so the latency is a function of the wire length (plus capacitors/transistors/etc), and is almost always negligible.

Digital audio is transmitted in "packets" (buffers) of a particular size, like a bucket brigade, but at the speed of electricity. Since the real world is analog, this means to record audio, you must use an Analog-Digital Converter. The ADC quantizes the signal into digital measurements (samples), packs multiple samples into a buffer, and sends it forward. This means your latency is now:

(wire length/speed of electricity) + buffer size

We saw above that the first part is insignificant, what about the second part?

Latency is measured in time, but buffer size is measured in bytes. For 16-bit integer audio, each measurement (sample) is stored as a 16-bit integer, which is 2 bytes. That's the theoretical lower limit on the buffer size. The sample rate defines how often measurements are made, and these days, is usually 48KHz. This means each sample contains 0.021ms of audio. To go lower, you need to increase the sample rate to 96KHz or 192KHz.

However, when general-purpose computers are involved, the buffer size is almost never lower than 32 samples (64 bytes), and is usually 128 samples (256 bytes) or larger. For 16-bit integer audio at 48KHz, a 32-sample buffer is 0.67ms, and a 128-sample buffer is 2.67ms. This is our buffer size and hence the base latency while recording (or playing) digital audio.
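A quick back-of-the-envelope check of that math, as a standalone sketch (plain C, not tied to any audio API):

#include <stdio.h>

/* Latency contributed by one buffer: samples / rate, in milliseconds. */
static double
buffer_latency_ms (unsigned int samples_per_buffer, unsigned int sample_rate)
{
  return 1000.0 * samples_per_buffer / sample_rate;
}

int
main (void)
{
  printf ("%.2f ms\n", buffer_latency_ms (32, 48000));  /* prints 0.67 */
  printf ("%.2f ms\n", buffer_latency_ms (128, 48000)); /* prints 2.67 */
  return 0;
}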

Digital effects operate on individual buffers, and will add an additional amount of latency depending on the delay added by the CPU processing required by the effect. Such effects may also add latency if the algorithm used requires that, but that's the same with analog effects.

The Digital Age

So everyone's using digital. But isn't 2.67ms a lot of additional latency?

It might seem that way till you think about it in real-world terms. Sound travels less than a meter (3 feet) in that time, and that sort of delay is completely unnoticeable by humans; otherwise we'd notice people's lips moving before we heard their words.

In fact, 2.67ms is too small for the majority of audio applications!

To process such small buffer sizes, you'd have to wake the CPU up 375 times a second, just for audio. This is highly inefficient and wastes a lot of power. You really don't want that on your phone or your laptop, and it is completely unnecessary in most cases anyway.

For instance, your music player will usually use a buffer size of ~200ms, which is just 5 CPU wakeups per second. Note that this doesn't mean that you will hear sound 200ms after hitting "play". The audio player will just send 200ms of audio to the sound card at once, and playback will begin immediately.

Of course, you can't do that with live playback such as video calls; you can't "read ahead" data you don't have. You'd have to invent a time machine first. As a result, apps that use real-time communication have to use smaller buffer sizes, because that directly affects the latency of live playback.

That brings us back to efficiency. These apps also need to conserve power, and 2.67ms buffers are really wasteful. Most consumer apps that require low latency use 10-15ms buffers, and that's good enough for things like voice/video calling, video games, notification sounds, and so on.

Ultra Low Latency

There's one category left: musicians, sound engineers, and other folk that work in the pro-audio business. For them, 10ms of latency is much too high!

You usually can't notice a 10ms delay between an event and the sound for it, but when making music, you can hear it when two instruments are out of sync by 10ms, or when the sound for an instrument you're playing is delayed. Instruments such as the snare drum are more susceptible to this problem than others, which is why the stage monitors used in live concerts must not add any latency.

The standard in the music business is to use buffers that are 5ms or lower, down to the 0.67ms number that we talked about above.

Power consumption is absolutely no concern, and the real problems are the accumulation of small amounts of latencies everywhere in your stack, and ensuring that you're able to read buffers from the hardware or write buffers to the hardware fast enough.

Let's say you're using an app on your computer to apply digital effects to a guitar that you're playing. This involves capturing audio from the line-in port, sending it to the application for processing, and playing it from the sound card to your amp.

The latency while capturing and outputting audio are both multiples of the buffer size, so it adds up very quickly. The effects app itself will also add a variable amount of latency, and at 2.67ms buffer sizes you will find yourself quickly approaching a 10ms latency from line-in to amp-out. The only way to lower this is to use a smaller buffer size, which is precisely what pro-audio hardware and software enables.

The second problem is that of CPU scheduling. You need to ensure that the threads that are fetching/sending audio data to the hardware and processing the audio have the highest priority, so that nothing else will steal CPU-time away from them and cause glitching due to buffers arriving late.

This gets harder as you lower the buffer size because the audio stack has to do more work for each bit of audio. The fact that we're doing this on a general-purpose operating system makes it even harder, and requires implementing real-time scheduling features across several layers. But that's a story for another time!

I hope you found this dive into digital audio interesting! My next post will be about my journey in implementing ultra low latency capture and render on Windows in the WASAPI plugin for GStreamer. This was already possible on Linux with the JACK GStreamer plugin and on macOS with the CoreAudio GStreamer plugin, so it will be interesting to see how the same problems are solved on Windows. Tune in!

14 Mar 2018 1:13am GMT

13 Mar 2018


Karuna Grewal: Network Stats Makes Its Way to Libgtop

Hey there! If you are reading this, then network stats are probably of some interest to you. But even if they aren't, just recall that while requesting this page you had your share of packets being transferred over the vast network and delivered to your system. I guess now you'd like to check out the work which has been going on in Libgtop and put the network stats details to your personal use.

This post is going to be a brief update about what's new in Libgtop.

Crux of the NetStats Implementation

The implementation I've used makes intensive use of pcap handles to start a capture on your system and segregate the packets into the processes they belong to. The following part is a detailed explanation of this, so in case you'd rather skip the details, jump to the next part.

Flow of the setup is as follows:

Assigning Packets to their Respective Processes

In my opinion this was the coolest part of the entire project, which gave me the liberty to filter the packets until I'd finally atomized them. This felt like a recursive butchering of the packets without having a formal medical science qualification. So bear in mind you can flaunt that you too can operate like doctors, but only on inanimate objects 😜 . Well, jokes aside, coming to the technical aspect:

What the packet says is: keep discarding the headers prepended to it until you reach the desired header, in our case the TCP header. The flow of parsing is somewhat simple:

It required checking the following headers in sequence: first the link-layer (Ethernet) header, then the IP header, and finally the TCP header, as sketched below.
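Here's a condensed sketch of that walk (my own illustration, not the actual Libgtop code; it is IPv4-only and assumes an Ethernet capture handle and glibc's struct field names):

#include <stdio.h>
#include <pcap.h>
#include <arpa/inet.h>
#include <netinet/if_ether.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

static void
packet_cb (u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
  const struct ether_header *eth = (const struct ether_header *) bytes;
  const struct ip *iph;
  const struct tcphdr *tcph;

  (void) user;

  /* 1. Ethernet header: discard anything that is not IPv4. */
  if (ntohs (eth->ether_type) != ETHERTYPE_IP)
    return;

  /* 2. IP header: check the protocol and find where TCP starts. */
  iph = (const struct ip *) (bytes + sizeof (struct ether_header));
  if (iph->ip_p != IPPROTO_TCP)
    return;

  /* 3. TCP header: the ports (plus the fixed IPs) identify the socket. */
  tcph = (const struct tcphdr *) ((const u_char *) iph + iph->ip_hl * 4);

  /* src ip:port, dest ip:port and the on-the-wire length are all we
   * need to account this packet to its connection. */
  printf ("%s:%u -> ", inet_ntoa (iph->ip_src), ntohs (tcph->source));
  printf ("%s:%u (%u bytes)\n", inet_ntoa (iph->ip_dst), ntohs (tcph->dest),
          hdr->len);
}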

A bit of Network Stats design dosage

Just so that you are in sync with how my implementation does what it does, the design is as follows:

We know that every process creates various connections. Any general socket detail looks like

src ip (fixed): src port (variable) - dest ip (fixed) : dest port (variable)

These sockets are listed in /proc/net/tcp. You might like to have a more detailed explanation about this. The packet headers give us the connection details; we just have to assign them to the correct process. This means each process has a list of connections, and each connection has a list of packets, which in turn holds all the relevant packets. So getting network stats for a process is as simple as summing up the packet length details in the packet list for each connection in the connection list for that process.

Note: Stale packets aren't considered while summing up.

This is what the design looks like:
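Roughly, in code (the type and field names are my own illustration, not the actual Libgtop types):

#include <glib.h>
#include <sys/types.h>

/* One process owns many connections; one connection owns many packets. */
typedef struct {
  guint64 len;        /* on-the-wire length of the packet */
  gint64  timestamp;  /* lets stale packets be skipped when summing */
} Packet;

typedef struct {
  GList *packets;     /* list of Packet */
} Connection;

typedef struct {
  pid_t  pid;
  GList *connections; /* list of Connection */
} Process;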


TCP parsing

The parameters passed to this callback are:

Given that we have all the necessary details, the packet is now initialized and all its fields are assigned values using the above-mentioned parameters passed to the TCP parsing callback. Next, we check which connection to put this new packet into; for this check we make use of a reference packet bound to each connection. After adding the packet to the connection, in case we had to create a new connection, we also need to add the connection to the relevant process. For doing this we use the inode-to-PID mapping explained in the earlier post.

Getting stats

In every refresh cycle getting stats is as simple as just summing up packet lengths for a given process.

Choosing an interface to expose the libgtop API

The next thing of concern was how to make this functionality available to other applications.

API exposed through DBus

dbus interface


Setting up the system bus

To set up the system bus, these two files had to be added to the following paths:



Inspecting using D-Feet

Here's what we see on inspecting the interface using d-feet


GetStats output: {pid: (bytes sent, bytes recv)}


You might be done with reading all these implementation details, but the most important thing I haven't mentioned until now is everyone whose minds have been behind helping me do all this.

(Keeping it to be in lexical order)

I'm extremely grateful to Felipe Borges and Robert Roth for their constant support and reviews.

Felipe's design-related corrections, like switching the implementation to singletons, and, on the Libgtop end, Robert Roth helping me with those quick hacks on the daemon and DBus even after his working hours, and working on weekends to finally get things done, is what makes me indebted to them.

I have a tinge of guilt for pinging you all on weekends too.

Did I just forget to mention the entire community in general? Members like Alberto and Carlos were also among the ones I sought help from.

If you did reach this fag end without getting bored, let me tell you that I'm yet to post the details about the Usage integration.

Feel free to check the work.

Stay tuned !🙂

13 Mar 2018 7:38am GMT

Federico Mena-Quintero: Librsvg and Gnome-class accepting interns

I would like to mentor people for librsvg and gnome-class this Summer, both for Outreachy and Summer of Code.

Librsvg projects

Project: port filter effects from C to Rust

Currently librsvg implements SVG filter effects in C. These are basic image processing filters like Gaussian blur, matrix convolution, Porter-Duff alpha compositing, etc.

There are some things that need to be done:

For this project, it will be especially helpful to have a little background in image processing. You don't need to be an expert; just to have done some pixel crunching at some point. You need to be able to read C and write Rust.

Project: CSS styling with rust-selectors

Librsvg uses a very simplistic algorithm for CSS cascading. It uses libcroco to parse CSS style data; libcroco is unmaintained and rather prone to exploits. I want to use Servo's selectors crate to do the cascading; we already use the rust-cssparser crate as a tokenizer for basic CSS properties.

For this project, it will be helpful to know a bit of how CSS works. Definitely be comfortable with Rust concepts like ownership and borrowing. You don't need to be an expert, but if you are going through the "fighting the borrow checker" stage, you'll have a harder time with this. Or it may be what lets you grow out of it! You need to be able to read C and write Rust.

Bugs for newcomers: We have a number of easy bugs for newcomers to librsvg. Some of these are in the Rust part, some in the C part, some in both - take your pick!

Projects for gnome-class

Gnome-class is the code generator that lets you write GObject implementations in Rust. Or at least that's the intention - the project is in early development. The code is so new that practically all of our bugs are of an exploratory nature.

Gnome-class works like a little compiler. This is from one of the examples; note the call to gobject_gen! in there:

struct SignalerPrivate {
    val: Cell<u32>
}

impl Default for SignalerPrivate {
    fn default() -> Self {
        SignalerPrivate {
            val: Cell::new(0)
        }
    }
}

gobject_gen! {
    class Signaler {
        type InstancePrivate = SignalerPrivate;
    }

    impl Signaler {
        signal fn value_changed(&self);

        fn set_value(&self, v: u32) {
            let private = self.get_priv();
            private.val.set(v);
            self.emit_value_changed();
        }
    }
}

Gnome-class implements this gobject_gen! macro as follows:

  1. First we parse the code inside the macro using the syn crate. This is a crate that lets you parse Rust source code from the TokenStream that the compiler hands to implementations of procedural macros. You give a TokenStream to syn, and it gives you back structs that represent function definitions, impl blocks, expressions, etc. From this parsing stage we build an Abstract Syntax Tree (AST) that closely matches the structure of the code that the user wrote.

  2. Second, we take the AST and convert it to higher-level concepts, while verifying that the code is semantically valid. For example, we build up a Class structure for each defined GObject class, and annotate it with the methods and signals that the user defined for it. This stage is the High-level Internal Representation (HIR).

  3. Third, we generate Rust code from the validated HIR. For each class, we write out the boilerplate needed to register it against the GObject type system. For each virtual method we write a trampoline to let the C code call into the Rust implementation, and then write out the actual Rust impl that the user wrote. For each signal, we register it against the GObjectClass, and write the appropriate trampolines both to invoke the signal's default handler and any Rust callbacks for signal handlers.

For this project, you definitely need to have written GObject code in C in the past. You don't need to know the GObject internals; just know that there are things like type registration, signal creation, argument marshalling, etc.

You don't need to know about compiler internals.

You don't need to have written Rust procedural macros; you can learn as you go. The code has enough infrastructure right now that you can cut&paste useful bits to get started with new features. You should definitely be comfortable with the Rust borrow checker and simple lifetimes - again, you can cut&paste useful code already, and I'm happy to help with those.

This project demands a little patience. Working on the implementation of procedural macros is not the smoothest experience right now (one needs to examine generated code carefully, and play some tricks with the compiler to debug things), but it's getting better very fast.

How to apply as an intern

Details for Outreachy

Details for Summer of Code

13 Mar 2018 1:00am GMT

12 Mar 2018


Michael Meeks: 2018-03-12 Monday

12 Mar 2018 9:00pm GMT

Marco Barisione: Karton 1.0

After more than a year using Karton regularly, I released version 1.0 with the last few features I was missing for my use case.

Karton is a tool which can transparently run Linux programs on a different Linux distribution, on macOS, or on a different architecture.
By using Docker, Karton manages semi-persistent containers with easy-to-use automatic folder sharing and lots of small details which make the experience smooth. You shouldn't notice you are using command-line programs from a different OS or distro.

Karton logo

If you are interested, check the Karton website.

12 Mar 2018 3:06pm GMT

11 Mar 2018


Michael Meeks: 2018-03-11 Sunday

11 Mar 2018 9:00pm GMT