20 Mar 2018

Planet GNOME

Sebastian Dröge: GStreamer Rust bindings 0.11 / plugin writing infrastructure 0.2 release

Following the GStreamer 1.14 release and the new round of gtk-rs releases, there are also new releases for the GStreamer Rust bindings (0.11) and the plugin writing infrastructure (0.2).

Thanks also to all the contributors for making these releases happen and adding lots of valuable changes and API additions.

GStreamer Rust Bindings

The main changes in the Rust bindings were the update to GStreamer 1.14 (which brings in quite a bit of new API, like GstPromise), a couple of API additions (GstBufferPool specifically) and the addition of the GstRtspServer and GstPbutils crates. The former allows writing a full RTSP server in a couple of lines of code (with lots of potential for customization); the latter provides access to the GstDiscoverer helper object that allows inspecting files and streams for their container format, codecs, tags and all kinds of other metadata.
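To illustrate the RTSP server claim, here is a rough sketch using the new crate. The pipeline description and mount point are arbitrary examples, and the method names follow the 0.x bindings, so treat this as an approximation rather than a verbatim sample:

```rust
// Sketch of a minimal RTSP server with the gstreamer-rtsp-server bindings.
extern crate glib;
extern crate gstreamer as gst;
extern crate gstreamer_rtsp_server as gst_rtsp_server;

use gst_rtsp_server::prelude::*;

fn main() {
    gst::init().unwrap();

    let main_loop = glib::MainLoop::new(None, false);
    let server = gst_rtsp_server::RTSPServer::new();
    let factory = gst_rtsp_server::RTSPMediaFactory::new();

    // Serve a test pattern; any launch-style pipeline description works here.
    factory.set_launch("( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )");
    factory.set_shared(true);

    let mounts = server.get_mount_points().unwrap();
    mounts.add_factory("/test", &factory);

    // Attach the server to the default main context and run; 8554 is the
    // default RTSP port.
    server.attach(None);
    println!("Stream ready at rtsp://127.0.0.1:8554/test");
    main_loop.run();
}
```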

The GstPbutils crate will also get other features added in the near future, like encoding profile bindings to allow using the encodebin GStreamer element (a helper element for automatically selecting/configuring encoders and muxers) from Rust.

But the biggest change, in my opinion, is some refactoring that was done to the Event, Message and Query APIs. Previously you had to use a view on a newly created query to be able to use the type-specific functions on it:

let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(q.get_mut().unwrap()) {
    match q.view() {
        QueryView::Position(ref p) => Some(p.get_result()),
        _ => None,
    }
} else {
    None
}

Now you can use the type-specific functions directly on a newly created query:

let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(&mut q) {
    Some(q.get_result())
} else {
    None
}

In addition, the views can now dereference directly to the event/message/query itself and provide access to their API, which simplifies some code even more.

Plugin Writing Infrastructure

While the plugin writing infrastructure did not see that many changes apart from a couple of bugfixes and updates to the new versions of everything else, this does not mean that development on it has stalled. Quite the opposite: the existing code already works very well, and there was simply no need to add anything new for the projects that I and others built on top of it; most of the required API additions were in the GStreamer bindings.

So the status here is the same as last time, get started writing GStreamer plugins in Rust. It works well!

20 Mar 2018 11:52am GMT

19 Mar 2018


Carlos Soriano: GitLab + Flatpak – GNOME’s full flow

In this post I will explain how GitLab, CI, Flatpak and GNOME apps come together into, in my opinion, a dream-come-true full flow for GNOME, a proposal to be implemented by all GNOME apps.

Needless to say, I enjoy seeing a plan that involves several moving pieces from different initiatives and people being put together into something bigger; I definitely had a good time ✌.

Generated Flatpak for every work in progress

The biggest news: from now on, designers, testers and curious people can install any work in progress (a.k.a. a 'merge request') in an automated way, with a simple click and a few minutes. With the integrated GitLab CI, we now generate a Flatpak file for every merge request in Nautilus!

In case you are not familiar with Flatpak, this technology allows anyone using different Linux distributions to install an application that will use exactly the same environment as the developers are using, providing a seamless synchronized experience.

For example, do you want to try out the recent work done by Nikita that makes Nautilus views distribute the space between icons? Simply click here or download the artifacts of any merge request pipeline. It's also possible to browse other artifacts, like build and test logs:


Notes: Due to a recent bug in Software you might need to install the 3.28 Flatpak Platform & SDK manually; this usually happens automatically. In the meantime, you can install the current master development Flatpak of Nautilus with a single click here. On Ubuntu you might need to install Flatpak first.

Parallel installation

Now, a way to quickly test the latest work in progress in Nautilus is a considerable improvement, but a user probably doesn't want to mess with the system installation of Nautilus or other GNOME projects, especially since it's a system component. So we have worked on a way to make a full parallel installation, and a full parallel run, of Nautilus versions possible alongside the system installation. We have also added support for this setup in the UI, to make it easily recognizable and to ensure the user is not confused about which version of Nautilus they are looking at. This is how it looks after installing any of the Flatpak files mentioned above:

Screenshot from 2018-03-19 21-35-41.png

We can see the Nautilus system installation and the developer preview running at the same time; the unstable version has a blue header bar and an icon with gears. As a side note, you can also see the work of Nikita I mentioned before: the developer version of the views now distributes the space between the icons.

It's possible to install more versions and run them all at the same time. You can see here how the different installed versions show up in the GNOME Shell search, where I also have the stable Flatpak Nautilus installed:

Screenshot from 2018-03-19 21-37-24

Another positive note is that this also removes the need to close the system instance of the app when contributing to GNOME, which was one of the most frequently reported points of confusion in our newcomers guide.

Issues templates

One of the biggest difficulties we have with people reporting issues is that they either have an outdated application, an application that was modified downstream, or an environment completely different from the one the developers are using, making the experience difficult and frustrating for both the reporter and the developer. Needless to say, all of us have had to deal with 'worksforme' issues…

With Flatpak, GitLab and the work explained above, we can fix this and considerably boost our success with bugs.

We have created a "bug" template where reporters are instructed to download the Flatpaked application in order to test and reproduce in the exact same environment and version that the developers, testers, and everyone else involved is using. Here's part of how the issue template looks:


When created, the issue renders as:


Which is considerably clearer.

Notes: The plan is to provide the stable app Flatpak too.

Full continuous integration

The last step to close this plan is to make sure that GNOME projects build on all the major distributions. After all, most of us work both upstream in GNOME and downstream in a Linux distribution. For that, we have set up a full array of builds that runs weekly:


This also fixes another issue we have experienced for years: distribution packagers delivering some GNOME applications differently than intended, causing subtle, and sometimes major, issues. Now we can point to this graph, which contains the commands used to build the application, as exact documentation on how to package GNOME projects, straight from the maintainer.

'How to' for GNOME maintainers

For the full CI and Flatpak file generation, take a look at the Nautilus GitLab CI. For the cross-distro weekly array, additionally create a scheduled pipeline like this. It's also possible to run the weekly CI array more frequently; however, keep in mind that resources are limited, and that the important part is that every MR is buildable and the tests pass. Otherwise it can be confusing to contributors if the pipeline fails for one of the jobs and not for the others. For non-app projects, you can pick a single distribution you are comfortable with; other ideas are welcome.
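As a starting point, a minimal flatpak job in .gitlab-ci.yml could look roughly like the sketch below. Note this is an illustrative mock-up, not Nautilus' actual CI file; the image name, manifest path and app id are made up:

```yaml
# Hypothetical sketch of a flatpak CI job; names are illustrative.
flatpak:
  image: registry.gitlab.gnome.org/gnome/gnome-runtime-images/gnome:3.28
  stage: test
  script:
    # Build the app from a flatpak-builder manifest into a local repo,
    # then export a single-file bundle that testers can click-install.
    - flatpak-builder --repo=repo app org.gnome.NautilusDevel.json
    - flatpak build-bundle repo nautilus-dev.flatpak org.gnome.NautilusDevel
  artifacts:
    paths:
      - nautilus-dev.flatpak
    expire_in: 30 days
```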

A more complex CI is possible, take a look at the magic work of Jordan Petridis in librsvg. I heard Jordan will do a blog post soon about more CI magic, which will be interesting to read.

For parallel installation, it's mainly this MR for master and this commit for the stable version; however, there have been a couple of commits on top of each, so follow them up to today's date (19-03-2018).

For issue templates, take a look at the templates folder. We were discussing a default template to be used across GNOME projects here; however, there was not much input, so for now I thought it better to experiment with this in Nautilus. Also, this will make more sense once we can put a default template in place, which is something GitLab will probably work on soon.

Finishing up…

Over the last 4 days, Ernestas Kulik, Jordan Petridis and I have been working to time-box this effort and come up with a complete proposal by today, each of us working on a part of the plan, and I think we can say we achieved it. Alex Larsson and other people around #flatpak provided us with valuable help. Work by Florian Müllner and Christian Hergert was an inspiration for us too. Andrea Veri and Javier Jardón put a considerable amount of their time into setting up an AWS instance for CI so we can have fast builds. Big thanks to all of them.

As you may guess, this CI setup is quite resource-consuming for an organization like GNOME, with more than 500 projects. The good news is that we have some help from sponsors on the way; many thanks to them! Stay tuned for the announcements.

I hope you like the direction GNOME is going; for me it's exciting to modernize how GNOME development happens and make it more dynamic, and I can see we have come a long way since a year ago. If you have any thoughts, comments or ideas, let any of us know!


19 Mar 2018 11:37pm GMT


Alan Coopersmith: One SMF Service to Monitor the Rest!

Contributed by: Thejaswini Kodavur

Have you ever wondered if there was a single service that monitors all your other services and makes administration easier? If so, then "SMF goal services", a new feature of Oracle Solaris 11.4, are here to provide a single, unambiguous, and well-defined point at which one can consider the system up and running. You can choose your customized, mission-critical services and link them together into a single SMF service in one step. This SMF service is called a goal service. It can be used to monitor the health of your system at boot, which makes administration much easier, as monitoring each of the services individually is no longer required!

There are two ways in which you can make your services part of a goal service.

1. Using the supplied Goal Service

By default, an Oracle Solaris 11.4 system provides a goal service called "svc:/milestone/goals:default", which has a dependency on "svc:/milestone/multi-user-server:default".

You can set your mission critical service to the default goal service as below:

# svcadm goals system/my-critical-service-1:default

Note: This is a set/clear interface, so the above command will clear the existing dependency on "svc:/milestone/multi-user-server:default".

To set the dependency on both services, use:

# svcadm goals svc:/milestone/multi-user-server:default \
    system/my-critical-service-1:default

2. Creating your own Goal Service

Oracle Solaris 11.4 allows you to create your own goal service and set your mission critical services as dependent services. Follow the below steps to create and use a goal service.

# svcbundle -o new-gs.xml -s service-name=milestone/new-gs -s start-method=":true"
# cp new-gs.xml /lib/svc/manifest/site/new-gs.xml
# svccfg validate /lib/svc/manifest/site/new-gs.xml
# svcadm restart svc:/system/manifest-import
# svcs new-gs
STATE          STIME    FMRI
online          6:03:36 svc:/milestone/new-gs:default

# svcadm disable svc:/milestone/new-gs:default
# svccfg -s svc:/milestone/new-gs:default setprop general/goal-service=true
# svcadm enable svc:/milestone/new-gs:default

# svcadm goals -g svc:/milestone/new-gs:default system/critical-service-1:default \
    system/critical-service-2:default

Note: If you omit the -g option and do not specify a goal service, you will set the dependencies on the system-provided default goal service, i.e. svc:/milestone/multi-user-server:default.

If one of the dependent services is disabled, the goal service goes into maintenance:

# svcs -d milestone/new-gs
STATE          STIME    FMRI
disabled        5:54:31 svc:/system/critical-service-2:default
online         Feb_19   svc:/system/critical-service-1:default
# svcs milestone/new-gs
STATE          STIME    FMRI
maintenance     5:54:30 svc:/milestone/new-gs:default

Note: You can use -d option of svcs(1) to check the dependencies on your goal service.

With all the dependent services online, the goal service is online as well:

# svcs -d milestone/new-gs
STATE          STIME    FMRI
online         Feb_19   svc:/system/critical-service-1:default
online          5:56:39 svc:/system/critical-service-2:default
# svcs milestone/new-gs
STATE          STIME    FMRI
online          5:56:39 svc:/milestone/new-gs:default

Note: For more information, refer to "Goal Services" in smf(7) and the goals subcommand in svcadm(8).

The goal service "milestone/new-gs" is your new single SMF service with which you can monitor all of your other mission critical services!

Thus, a goal service acts as the headquarters that monitors the rest of your services.

19 Mar 2018 5:00pm GMT


Philippe Normand: GStreamer’s playbin3 overview for application developers

Multimedia applications based on GStreamer usually handle playback with the playbin element. I recently added support for playbin3 in WebKit. This post aims to document the changes needed on application side to support this new generation flavour of playbin.

So, first off, why is it named playbin3 anyway? The GStreamer 0.10.x series had a playbin element, but a first rewrite (playbin2) made it obsolete in the GStreamer 1.x series, so playbin2 was renamed to playbin. That's why a second rewrite is nicknamed playbin3, I suppose :)

Why should you care about playbin3? Playbin3 (and the elements it uses internally: parsebin, decodebin3 and uridecodebin3, among others) is the result of a deep re-design of playbin2 (along with decodebin2 and uridecodebin) to better support modern use cases such as gapless playback and adaptive streaming.

This work was carried out mostly by Edward Hervey, who presented it in detail at three GStreamer conferences. If you want to learn more about this and the internals of playbin3, make sure to watch his awesome presentations at the 2015 gst-conf, 2016 gst-conf and 2017 gst-conf.

Playbin3 was added in GStreamer 1.10. It is still considered experimental, but in my experience it already works very well. Just keep in mind you should use at least the latest GStreamer 1.12 (or even the upcoming 1.14) release before reporting any issue in Bugzilla. Playbin3 is not a drop-in replacement for playbin; the two elements share only a sub-set of GObject properties and signals. However, if you don't want to modify your application source code just yet, it's very easy to try playbin3 anyway:

$ USE_PLAYBIN3=1 my-playbin-based-app

Setting the USE_PLAYBIN3 environment variable enables a code path inside the GStreamer playback plugin which swaps the playbin element for the playbin3 element. This trick gives the laziest among us a glimpse of the playbin3 element :) The problem is that, depending on your use of playbin, you might get runtime warnings; here's an example with the Totem player:

$ USE_PLAYBIN3=1 totem ~/Videos/Agent327.mp4
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-audio'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-text'
sys:1: Warning: ../../../../gobject/gsignal.c:3492: signal name 'get-video-pad' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

As mentioned previously, playbin and playbin3 don't share the same set of GObject properties and signals, so some changes in your application are required in order to use playbin3.

If your application is based on the GstPlayer library then you should set the GST_PLAYER_USE_PLAYBIN3 environment variable. GstPlayer already handles both playbin and playbin3, so no changes needed in your application if you use GstPlayer!

Ok, so what if your application relies directly on playbin? Some changes are needed! If you previously used playbin's stream selection properties and signals, you will now need to handle the GstStream and GstStreamCollection APIs. Playbin3 emits a stream collection message on the bus; this is very nice because the collection includes information (metadata!) about the streams (or tracks) the media asset contains. In playbin this was handled with a bunch of signals (audio-tags-changed, audio-changed, etc), properties (n-audio, n-video, etc) and action signals (get-audio-tags, get-audio-pad, etc). The new GstStream API provides a centralized and non-playbin-specific access point for all this information. To select streams with playbin3 you now need to send a select-streams event, so that the demuxer knows exactly which streams should be exposed to downstream elements. That means potentially improved performance! Once playbin3 has completed the stream selection it will emit a streams-selected message; the application should handle this message and potentially update its internal state about the selected streams. This is also the best moment to update your UI regarding the selected streams (like audio track language, video track dimensions, etc).
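To make this concrete, here is a sketch in C of how an application might react to these messages; the stream-picking policy (keep every audio and video stream) is arbitrary, and error handling is omitted:

```c
/* Sketch: handling playbin3's stream-collection message and selecting
 * streams. Attach with gst_bus_add_watch (bus, bus_cb, playbin3). */
#include <gst/gst.h>

static gboolean
bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *playbin3 = user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAM_COLLECTION) {
    GstStreamCollection *collection = NULL;
    GList *selected = NULL;
    guint i;

    gst_message_parse_stream_collection (msg, &collection);
    for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
      GstStream *stream = gst_stream_collection_get_stream (collection, i);
      /* Arbitrary policy: keep every audio and video stream. */
      if (gst_stream_get_stream_type (stream) &
          (GST_STREAM_TYPE_AUDIO | GST_STREAM_TYPE_VIDEO))
        selected = g_list_append (selected,
            (gchar *) gst_stream_get_stream_id (stream));
    }
    /* Tell the demuxer which streams to expose downstream. */
    gst_element_send_event (playbin3,
        gst_event_new_select_streams (selected));
    g_list_free (selected);
    gst_object_unref (collection);
  } else if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAMS_SELECTED) {
    /* Selection is done: update application state and UI here. */
  }
  return TRUE;
}
```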

Another small difference between playbin and playbin3 concerns the source element setup. In playbin there is a read-only source GObject property and a source-setup GObject signal. In playbin3 only the latter is available, so your application should rely on source-setup instead of the notify::source GObject signal.

The gst-play-1.0 playback utility program already supports playbin3 so it provides a good source of inspiration if you consider porting your application to playbin3. As mentioned at the beginning of this post, WebKit also now supports playbin3, however it needs to be enabled at build time using the CMake -DUSE_GSTREAMER_PLAYBIN3=ON option. This feature is not part of the WebKitGTK+ 2.20 series but should be shipped in 2.22. As a final note I wanted to acknowledge my favorite worker-owned coop Igalia for allowing me to work on this WebKit feature and also our friends over at Centricular for all the quality work on playbin3.

19 Mar 2018 7:13am GMT

18 Mar 2018


Alyssa Rosenzweig: Midgard Shaders with the Free NIR Compiler

In my last update on the Panfrost project, I showed an assembler and disassembler pair for Midgard, the shader architecture for Mali Txxx GPUs. Unfortunately, Midgard assembly is an arcane, unwieldy language, understood by Connor Abbott, myself, and that's about it besides engineers bound by nondisclosure agreements. You can read the low-level details of the ISA if you're interested.

In any case, what any driver really needs is not just an assembler but a compiler. Ideally, such a compiler would live in Mesa itself, capable of converting programs written in high level GLSL into an architecture-specific binary.

Such a mammoth task ought to be delayed until after we begin moving the driver into Mesa, through the Gallium3D infrastructure. In any event, back in January I had already begun such a compiler, ingesting NIR, an intermediate representation coincidentally designed by Connor himself. The past few weeks were spent improving and debugging this compiler until it produced correct, reasonably efficient code for both fragment and vertex shaders.

As of last night, I have reached this milestone for simple shaders!

As an example, an input fragment shader written in GLSL might look like:

uniform vec4 uni4;

void main() {
    gl_FragColor = clamp(
        vec4(1.3, 0.2, 0.8, 1.0) - vec4(uni4.z),
        0.0, 1.0);
}

Through the fully free compiler stack, passed through the free disassembler for legibility, this yields:

vadd.fadd.sat r0, r26, -r23.zzzz
br_cond.write +0
fconstants 1.3, 0.2, 0.8, 1

vmul.fmov r0, r24.xxxx, r0
br_cond.write -1

This is the optimal compilation for this particular shader; the majority of that shader is the standard fragment epilogue which writes the output colour to the framebuffer.

For some background on the assembly, Midgard is a Very Long Instruction Word (VLIW) architecture. That is, multiple instructions are grouped together in blocks. In the disassembly, this is represented by spacing. Each line is an instruction, and blank lines delimit blocks.

The first instruction contains the entirety of the shader logic. Reading it off, it means "using the vector addition unit, perform the saturated floating point addition of the attached constants (register 26) and the negation of the z component of the uniform (register 23), storing the result into register 0". It's very compact, but comparing with the original GLSL, it should be clear where this is coming from. The constants are loaded at the end of the block with the fconstants meta instruction.
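If you want to convince yourself of the arithmetic, the instruction's effect can be emulated in a few lines of Python; the uniform value passed in below is a made-up input, just to show the saturation at work:

```python
# Emulate the saturated vector add: clamp(constants - uni4.z, 0, 1).
def clamp01(x):
    return max(0.0, min(1.0, x))

def fragment(uni4_z):
    base = (1.3, 0.2, 0.8, 1.0)  # the fconstants from the block above
    # Round to hide float noise so the components read cleanly.
    return tuple(round(clamp01(c - uni4_z), 6) for c in base)

print(fragment(0.5))  # -> (0.8, 0.0, 0.3, 0.5)
```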

The other four instructions are the standard fragment epilogue. We're not entirely sure why it's so strange - framebuffer writes are fixed from the result of register 0, and are accomplished with a special loop using branching instruction. We're also not sure why the redundant move is necessary; Connor and I suspect there may be a hardware limitation or errata preventing a br_cond.write instruction from standing alone in a block. Thankfully, we do understand more or less what's going on, and they appear to be fixed. The compiler is able to generate it just fine, including optimising the code to write into register 0.

As for vertex shaders, well, fragment shaders are simpler than vertex shaders. Whereas the former merely has the aforementioned weird instruction sequence, vertex epilogues need to handle perspective division and viewport scaling, operations which are not implemented in hardware on this embedded GPU. When this is fully implemented, there will be quite a bit more difficult-to-optimise code in the output, although even the vendor compiler does not seem to optimise it. (Perhaps in time our vertex shaders could be faster than the vendor's compiled shaders due to a smarter epilogue!)

Without further ado, an example vertex shader looks like:

attribute vec4 vin;
uniform vec4 u;

void main() {
    gl_Position = (vin + u.xxxx * vec4(0.01, -0.02, 0.0, 0.0)) * (1.0 / u.x);
}

Through the same stack, and a stub vertex epilogue which assumes no perspective division is needed (that the input is in normalised device coordinates) and that the framebuffer happens to have a resolution of 400x240, the compiler emits:

vmul.fmov r1, r24.xxxx, r26
fconstants 0, 0, 0, 0

ld_attr_32 r2, 0, 0x1E1E

vmul.fmul r4, r23.xxxx, r26
vadd.fadd r5, r2, r4
fconstants 0.01, -0.02, 0, 0

lut.frcp r6.x, r23.xxxx, #2.61731e-39
fconstants 0.01, -0.02, 0, 0

vmul.fmul r7, r5, r6.xxxx

vmul.fmul r9, r7, r26
fconstants 200, 120, 0.5, 0

vadd.fadd r27, r26, r9
fconstants 200, 120, 0.5, 1

st_vary_32 r1, 0, 0x1E9E

There is a lot of room for improvement here, but for now, the important part is that it does work! The transformed vertex (after scaling) must be written to the special register 27. Currently, a dummy varying store is emitted to work around what appears to be yet another hardware quirk. (Are you noticing a trend here? GPUs are funky.) The rest of the code should be more or less intelligible by looking at the ISA notes. In the future, we might improve the disassembler to hide some of the internal encoding peculiarities, such as the dummy r24.xxxx and #0 arguments for fmov and frcp instructions respectively.
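For reference, the epilogue's final multiply and add amount to a viewport transform, which can be sketched in Python using the same 200/120/0.5 constants seen in the fconstants above (half the 400x240 framebuffer dimensions); this is a stub that skips perspective division, just as the shader does:

```python
# Mirror the stub vertex epilogue: r27 = (200,120,0.5,1) + pos*(200,120,0.5,0)
def viewport(pos, width=400, height=240):
    scale = (width / 2.0, height / 2.0, 0.5, 0.0)
    offset = (width / 2.0, height / 2.0, 0.5, 1.0)
    return tuple(o + p * s for p, s, o in zip(pos, scale, offset))

print(viewport((0.0, 0.0, 0.0, 1.0)))  # NDC origin -> (200.0, 120.0, 0.5, 1.0)
```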

All in all, the compiler is progressing nicely. It currently uses a simple SSA-based intermediate representation which maps one-to-one with the hardware, minus details about register allocation and VLIW. This architecture will enable us to optimise our code as needed in the future, once we write a register allocator and an instruction scheduler. A number of arithmetic (ALU) operations are supported, and although there is much work left to do - including generating texture instructions, which were only decoded a few weeks ago - the design is sound, clocking in at a mere 1500 lines of code.

The best part, of course, is that this is no standalone compiler; it is already sitting in our fork of Mesa, using Mesa's infrastructure. When the driver is written, it'll be ready from day 1. Woohoo!

Source code is available; get it while it's hot!

Getting the shader compiler to this point was a bigger time sink than anticipated. Nevertheless, we did do a bit of code cleanup in the meanwhile. On the command stream side, I began passing memory-resident structures by name rather than by address, slowly rolling out a basic watermark allocator. This step is revealing potential issues in our understanding of the command stream, preparing us for proper, non-replay-based driver development. Textures still remain elusive, unfortunately. Aside from that, however, much - if not most - of the command stream is well understood now. With the help of the shader compiler, basic 3D tests like test-triangle-smoothed are now almost entirely understood and for the most part devoid of magic.

Lyude Paul has been working on code clean-up specifically regarding the build systems. Her goal is to let new contributors play with GPUs, rather than fight with meson and CMake. We're hoping to attract some more people with low-level programming knowledge and some spare time to pitch in. (Psst! That might mean you! Join us on IRC!)

On a note of administrivia, the project name has been properly changed to Panfrost. For some history, over the summer two driver projects were formed: chai, by me, for Midgard; and BiOpenly, by Lyude et al, for Bifrost. Thanks to Rob Clark's matchmaking, we found each other and quickly realised that the two GPU architectures had identical command streams; it was only the shader cores that were totally redesigned and led to the rename. Thus, we merged to join efforts, but the new name was never officially decided.

We finally settled on the name "Panfrost", and our infrastructure is being changed to reflect this. The IRC channel, still on Freenode, now redirects to #panfrost. Additionally Freedesktop.org rolled out their new GitLab CE instance, of which we are the first users; you can find our repositories at the Panfrost organisation on the fd.o GitLab.

On Monday, our project was discussed in Robert Foss's talk "Progress in the Embedded GPU Ecosystem". Foss predicted the drivers would not be ready for another three years.

Somehow, I have a feeling it'll be much sooner!

18 Mar 2018 7:00am GMT

13 Mar 2018


Alan Coopersmith: Oracle Solaris 11.4 beta progress on LP64 conversion

Back in 2014, I posted Moving Oracle Solaris to LP64 bit by bit describing work we were doing then. In 2015, I provided an update covering Oracle Solaris 11.3 progress on LP64 conversion.

Now that we've released the Oracle Solaris 11.4 Beta to the public you can see the ratio of ILP32 to LP64 programs in /usr/bin and /usr/sbin in the full Oracle Solaris package repositories has dramatically shifted in 11.4:

Release         32-bit        64-bit       total
Solaris 11.0    1707 (92%)     144 (8%)    1851
Solaris 11.1    1723 (92%)     150 (8%)    1873
Solaris 11.2    1652 (86%)     271 (14%)   1923
Solaris 11.3    1603 (80%)     379 (19%)   1982
Solaris 11.4     169 (9%)     1769 (91%)   1938

That's over 70% more of the commands shipped in the OS which can use ADI to stop buffer overflows on SPARC, take advantage of more registers on x86, have more address space available for ASLR to choose from, are ready for timestamps and dates past 2038, and receive the other benefits of 64-bit software as described in previous blogs.
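For the curious, the quoted figures can be re-derived from the table in a few lines of Python; reading the "over 70%" figure as newly 64-bit commands as a share of all shipped commands is one plausible interpretation of the arithmetic, not a separate claim:

```python
# Sanity-check the table: per-release 64-bit share, and the 11.3 -> 11.4 jump.
counts = {  # release: (32-bit, 64-bit), from the table above
    "11.0": (1707, 144),
    "11.1": (1723, 150),
    "11.2": (1652, 271),
    "11.3": (1603, 379),
    "11.4": (169, 1769),
}
for release, (n32, n64) in counts.items():
    total = n32 + n64
    print(release, total, str(round(100.0 * n64 / total)) + "%")

# Commands that became 64-bit between 11.3 and 11.4, as a share of the total.
newly_64bit_share = (1769 - 379) / (169 + 1769)
print(round(100 * newly_64bit_share))  # -> 72, i.e. "over 70%"
```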

And while we continue to provide more features for 64-bit programs, such as making ADI support available in the libc malloc, we aren't abandoning 32-bit programs either. A change that just missed our first beta release, but is coming in a later refresh of our public beta will make it easier for 32-bit programs to use file descriptors > 255 with stdio calls, relaxing a long held limitation of the 32-bit Solaris ABI.

This work was years in the making, and over 180 engineers in the Solaris organization contributed to it, plus even more who came before, making all the FOSS projects we ship and the libraries we provide 64-bit ready so we could make this happen. We thank all of them for making it possible to bring this to you now.

13 Mar 2018 5:37am GMT