21 Mar 2019

Philip Withnall: Metered data hackfest

tl;dr: Please fill out this survey about metered data connections, regardless of whether you run GNOME or often use metered data connections.

We're now into the second day of the metered data hackfest in London. Yesterday we looked at Endless' existing metered data implementation, which is restricted to OS and application updates, and discussed how it could be reworked to fit in with the new control centre design, and which applications would benefit from scheduling their large downloads to avoid using metered data unnecessarily (and hence costing the user money).

The conclusion was that the first step is to draw up a design for the control centre integration, which determines when to allow downloads on metered connections, and which connections are actually metered. Then to upstream the integration of metered data with gnome-software, so that app and OS updates adhere to the policy. Integration with other applications which do large downloads (such as podcasts, file syncing, etc.) can then follow.

While looking at metered data, however, we realised we don't have much information about what types of metered data connections people have. For example, do connections commonly limit people to a certain amount of data per month, or per day? Do they have a free period in the middle of the night? We've put together a survey for anyone to take (not just those who use GNOME, or who use a metered connection regularly) to try and gather more information. Please fill it out!

Today, the hackfest is winding down a bit, with people quietly working on issues related to parental controls or metered data, or on upstream development in general. Richard and Kalev are working on gnome-software issues. Georges and Florian are working on gnome-shell issues.

21 Mar 2019 11:06am GMT

Peter Hutterer: Using hexdump to print binary protocols

I had to work on an image yesterday where I couldn't install anything and the set of pre-installed tools was quite limited. And I needed to debug an input device, usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes, but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is a single-quote-enclosed string that contains the count, the element size and a double-quote-enclosed printf-like format string. So a simple example is this:


$ hexdump -v -e '1/2 "%d\n"'
-11643
23698
0
0
-5013
6
0
0

This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.


$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"'
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0

This prints the same 2-byte input value, once as decimal signed integer, once as lowercase hex. If we have multiple identical things to print, we can do this:


$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"'
-10922 23698 hex: 56 d5 92 5c
0 0 hex: 0 0 0 0
14879 1 hex: 1f 3a 1 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0

This prints two elements of size 2 each as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf conversion and is documented in the manual.

Let's go and print our protocol. The struct representing the protocol is this one:


struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
        struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
        __kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
        unsigned int __usec;
#else
        __kernel_ulong_t __usec;
#endif
#define input_event_sec __sec
#define input_event_usec __usec
#endif
        __u16 type;
        __u16 code;
        __s32 value;
};

So we have two longs for sec and usec, two shorts for type and code, and one signed 32-bit int for the value. Let's print it:


$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22
E: 1553127085.097503 0002 0000 1
E: 1553127085.097503 0002 0001 -1
E: 1553127085.097503 0000 0000 0
E: 1553127085.097542 0002 0001 -2
E: 1553127085.097542 0000 0000 0
E: 1553127085.108741 0002 0001 -4
E: 1553127085.108741 0000 0000 0
E: 1553127085.118211 0002 0000 2
E: 1553127085.118211 0002 0001 -10
E: 1553127085.118211 0000 0000 0
E: 1553127085.128245 0002 0000 1

And voila, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.
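For comparison, this is roughly the little C program that the hexdump invocation above stands in for - a sketch only, with the device path just an example and error handling kept minimal:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    /* same output format: E: sec.usec type code value */
    struct input_event ev;
    int fd = open("/dev/input/event22", O_RDONLY);

    if (fd < 0)
        return 1;

    while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
        printf("E: %lu.%06lu %04x %04x %5d\n",
               (unsigned long)ev.input_event_sec,
               (unsigned long)ev.input_event_usec,
               ev.type, ev.code, ev.value);

    close(fd);
    return 0;
}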

21 Mar 2019 12:30am GMT

20 Mar 2019

Michael Meeks: 2019-03-20 Wednesday

20 Mar 2019 9:58pm GMT

Andre Klapper: GNOME Bugzilla closed for new bug entry

As part of GNOME's ongoing migration from Bugzilla to Gitlab, from today on there are no products left in GNOME Bugzilla which allow the creation of new tickets.
The ID of the last GNOME Bugzilla ticket is 797430 (note that there are gaps between 173191-200000 and 274555-299999 as the 2xxxxx ID range was used for tickets imported from Ximian Bugzilla).

Since the year 2000, the Bugzilla software has served as GNOME's issue tracking system. As forges emerged which offer tight and convenient integration of issue tracking, code review of proposed patches, automated continuous integration testing, code repository browsing and hosting, and further functionality, Bugzilla's shortcomings became painful obstacles to modern software development practices.

Nearly all products which used GNOME Bugzilla have moved to GNOME Gitlab to manage issues. A few projects (Bluefish, Doxygen, GnuCash, GStreamer, java-gnome, LDTP, NetworkManager, Tomboy) have moved to other places (such as freedesktop.org Gitlab, self-hosted Bugzilla instances, or Github) to track their issues.

Reaching this milestone required finding and contacting the maintainers of mostly less active projects which had used GNOME Bugzilla for their issue tracking, and discussing with them over the last months.
For convenience, there are redirects in place (for those websites out there which still directly link to Bugzilla's ticket creation page) to guide them to the new issue tracking venues.

Note that closing only refers to creating new tickets: There are still 189 products with 21019 open tickets in GNOME Bugzilla. IMO these tickets should either get migrated to Gitlab or mass-closed on a per-product basis, depending on maintainers' preferences. The long-term goal should be making GNOME Bugzilla completely read-only.

I also fixed the custom "Browse" product pages in GNOME Bugzilla so they get displayed again (the previous code expected products to be open for new bug entry). This should make it easier for maintainers to potentially triage and clean up their old open tickets in Bugzilla.

Thanks to Carlos and Andrea and everyone involved for all their help!

PS: Big Thanks to Lenka and everyone who signed the postcard for me at FOSDEM 2019. Missed you too! :)

20 Mar 2019 3:51pm GMT

Philip Withnall: Parental controls hackfest

Various of us have been meeting in the Red Hat offices in London this week (thanks Red Hat!) to discuss parental controls and digital wellbeing. The first two days were devoted to this; today and tomorrow will be dedicated to discussing metered data (which is unrelated to parental controls, but the hackfests are colocated because many of the same people are involved in both).

Parental controls discussions went well. We've worked out a rough scope of what features we are interested in integrating into GNOME, and how parental controls relates to digital wellbeing. In this context, we're considering parental controls to be allowing parents to limit what their children can do on a computer, in terms of running different applications or games, or spending certain amounts of time on the computer.

Digital wellbeing covers many of the same concepts - limiting time usage of the computer or applications, or access to certain websites - but applied in a way that gives yourself 'speed bumps' to help your productivity by avoiding distractions at work.

Allan produced some initial designs for the control centre UI for parental controls and digital wellbeing, and we discussed various minor issues around them, and how to deal with the problem of allowing people to schedule times when apps, or whole groups of apps, are to be blocked, without making the UI too complex. There's some more work to do there.

On Tuesday evening, we joined some of the local GNOME developers in London for beers, celebrating the 3.32 GNOME release.

We're now looking at metered data, which is the idea that large downloads should be limited and scheduled according to the user's network tariff, which might limit what can be downloaded during a certain time period, or provide certain periods of the night when downloads are unmetered. More to come on that later.

For other write ups of what we've been doing, see Iain's detailed write up of the first two days, or the raw hackfest notes.

20 Mar 2019 12:36pm GMT

19 Mar 2019

Neil McGovern: GNOME ED Update – February

Another update is due on what we've been doing at the Foundation - and we've been busy!

As you may have seen, we've hired three excellent people over the past couple of months. Kristi Progri has joined us as Program Coordinator, Bartłomiej Piorski as a devops sysadmin, and Emmanuele Bassi as our GTK Core developer. I hope to announce another new hire soon, so watch this space…

There's been quite a lot of discussion around the Google API access, and GNOME Online Accounts. The latest update is that I submitted the application to Google to get GOA verified, and we've got a couple of things we're working through to get this sorted.

Events all round!

Although the new year's conference season is just kicking off, it's been a busy one for GNOME already. We were at FOSDEM in Brussels where we had a large booth, selling t-shirts, hoodies and of course, the famous GNOME socks. I held a meeting of the Advisory Board, and we had a great GNOME Beers event - kindly sponsored by Codethink.

We also had a very successful GTK Hackfest - moving us one step closer to GTK 4.0.

Coming up, we'll have a GNOME booth at:

If you're at any of these, please come along and say hi! We're also planning out events for the rest of the year. If anyone has any particularly exciting conferences we may not have heard of, please let us know.

Discourse

It hasn't yet been announced, but we're trialling an instance of Discourse for the GTK and Engagement teams. We're hopeful that this may replace mailman, but we're being quite careful to make sure that email integration continues to work. Expect more information about this in the coming month. If you want to go have a look, the instance is available at discourse.gnome.org.

19 Mar 2019 10:53pm GMT

Michael Meeks: 2019-03-19 Tuesday

19 Mar 2019 9:00pm GMT

Michael Catanzaro: Epiphany Technology Preview Upgrade Requires Manual Intervention

Jan-Michael has recently changed Epiphany Technology Preview to use a separate app ID. Instead of org.gnome.Epiphany, it will now be org.gnome.Epiphany.Devel, to avoid clashing with your system version of Epiphany. You can now have separate desktop icons for both system Epiphany and Epiphany Technology Preview at the same time.

Because flatpak doesn't provide any way to rename an app ID, this means it's the end of the road for previous installations of Epiphany Technology Preview. Manual intervention is required to upgrade. Fortunately, this is a one-time hurdle, and it is not hard:

$ flatpak uninstall org.gnome.Epiphany

Uninstall the old Epiphany…

$ flatpak install gnome-apps-nightly org.gnome.Epiphany.Devel org.gnome.Epiphany.Devel.Debug

…install the new one, assuming that your remote is named gnome-apps-nightly (the name used locally may differ), and that you also want to install debuginfo to make it possible to debug it…

$ mv ~/.var/app/org.gnome.Epiphany ~/.var/app/org.gnome.Epiphany.Devel

…and move your personal data from the old app to the new one.

Then don't forget to make it your default web browser under System Settings -> Details -> Default Applications. Thanks for testing Epiphany Technology Preview!

19 Mar 2019 6:53pm GMT

Iain Lane: Parental controls & metered data hackfest: days 1 & 2

I'm currently at the Parental Controls & Metered Data hackfest at Red Hat's office in London. A bunch of GNOME people from various companies (Canonical, Endless, elementary, and Red Hat) have gathered to work out a plan to start implementing these two features in GNOME. The first two days have been dedicated to the parental control features. This is the ability for parents to control what children can do on the computer. For example, locking down access to certain applications or websites.

Day one began with presentations of the Endless OS implementation by Philip, followed by a demonstration of the Elementary version by Cassidy. Elementary were interested in potentially expanding this feature set to include something like Digital Wellbeing - we explored the distinction between this and parental controls. It turns out that these features are relatively similar - the main differences are whether you are applying restrictions to yourself or to someone else, and whether you have the ability to lift/ignore the restrictions. We've started talking about the latter of these as "speed bumps": you can always undo your own restrictions, so the interventions from the OS should be intended to nudge you towards the right behaviour.

After that we looked at some prior art (Android, iOS), and started to take the large list of potential features (in the image above) down to the ones we thought might be feasible to implement. Throughout all of this, one topic we kept coming back to was app lockdown. It's reasonably simple to see how this could be applied to containerised 📦 apps (e.g. Snap or Flatpak), but system applications that come from a deb or an rpm are much more difficult. It would probably be possible - but still difficult - to use an LSM like AppArmor or SELinux to do this by denying execute access to the application's binary. One obvious problem with that is that GNOME doesn't require one of these and different distributions have made different choices here… Another tricky topic is how to implement website white/blacklisting in a robust way. We discussed using DNS (systemd-resolved?) and ip/nftables implementations, but it might turn out that the most feasible way is to use a browser extension for this.

Adam Bieńkowski joined us to discuss the technical details of Elementary's implementation and some potential ideas for future improvements there. Thanks for that!

Today we've spent a fair bit of time discussing the technical details about how some of this might be implemented. Given that this is about locking down other users' accounts, the data ought to be stored somewhere at the system level - both so the admin can query/set it, and so that the user can't modify it. Endless's current implementation stores this in AccountsService, which feels reasonable to us, but doesn't extend well to storing the information required to implement activity tracking. Georges and Florian have been discussing writing a system daemon to do this, which the shell and (maybe) browser(s) would feed into.

More detailed notes taken by Philip are available here.

For the next two days we will move to talking about the second subject for this hackfest - data metering.

19 Mar 2019 6:24pm GMT

Alexander Larsson: Introducing flat-manager

A long time ago I wrote a blog post about how to maintain a Flatpak repository.

It is still a nice, mostly up to date, description of how Flatpak repositories work. However, it doesn't really have a great answer to the issue called syncing updates in the post. In other words, it really is more about how to maintain a repository on one machine.

In practice, at least on a larger scale (like e.g. Flathub) you don't want to do all the work on a single machine like this. Instead you have an entire build-system where the repository is the last piece.

Enter flat-manager

To support this I've been working on a side project called flat-manager. It is a service written in Rust that manages Flatpak repositories. Recently we migrated Flathub to use it, and it seems to work quite well.

At its core, flat-manager serves and maintains a set of repos, and has an API that lets you push updates to it from your build-system. However, the way it is set up is a bit more complex, which allows some interesting features.

Core concept: a build

When updating an app, the first thing you do is create a new build, which just allocates an id that you use in later operations. Then you can upload one or more builds to this id.

This separation of the build creation and the upload is very powerful, because it allows you to upload the app in multiple operations, potentially from multiple sources. For example, in the Flathub build-system each architecture is built on a separate machine. Before flat-manager we had to collect all the separate builds on one machine before uploading to the repo. In the new system each build machine uploads directly to the repo with no middle-man.

Committing or purging

An important idea here is that the new build is not finished until it has been committed. The central build-system waits until all the builders report success before committing the build. If any of the builds fail, we purge the build instead, making it as if the build never happened. This means we never expose partially successful builds to users.

Once a build is committed, flat-manager creates a separate repository containing only the new build. This allows you to use Flatpak to test the build before making it available to users.

This makes builds useful even for builds that were never supposed to be generally available. Flathub uses this for test builds: if you make a pull request against an app, it will automatically build it and add a comment in the pull request with the build results and a link to the repo where you can test it.

Publishing

Once you are satisfied with the new build you can trigger a publish operation, which will import the build into the main repository and do all the required operations, like:

The publish operation is actually split into two steps, first it imports the build result in the repo, and then it queues a separate job to do all the updates needed for the repo. This way if multiple builds are published at the same time the update can be shared. This saves time on the server, but it also means less updates to the metadata which means less churn for users.

You can use whatever policy you want for how and when to publish builds. Flathub lets individual maintainers choose, but by default successful builds are published after 3 hours.

Delta generation

The traditional way to generate static deltas is to run flatpak build-update-repo --generate-static-deltas. However, this is a very computationally expensive operation that you might not want to do on your main repository server. It's also not very flexible in which deltas it generates.

To minimize the server load flat-manager allows external workers that generate the deltas on different machines. You can run as many of these as you want and the deltas will be automatically distributed to them. This is optional, and if no workers connect the deltas will be generated locally.

flat-manager also has configuration options for which deltas should be generated. This allows you to avoid generating unnecessary deltas and to add extra levels of deltas where needed. For example, Flathub no longer generates deltas for sources and debug refs, but we have instead added multiple levels of deltas for runtimes, allowing you to go efficiently to the current version from either one or two versions ago.

Subsetting tokens

flat-manager uses JSON Web Tokens to authenticate API clients. This means you can assign different permission to different clients. Flathub uses this to give minimal permissions to the build machines. The tokens they get only allow uploads to the specific build they are currently handling.

This also allows you to hand out access to parts of the repository namespace. For instance, the GNOME project has a custom token that allows them to upload anything in the org.gnome.Platform namespace in Flathub. This way GNOME can control the build of their runtime and upload a new version whenever they want, but they can't (accidentally or deliberately) modify any other apps.

Rust

I need to mention Rust here too. This is my first real experience with using Rust, and I'm very impressed by it. In particular, the sense of trust I have in the code when I got it past the compiler. The compiler caught a lot of issues, and once things built I saw very few bugs at runtime.

It can sometimes be a lot of work to express the code in a way that Rust accepts, which makes it not an ideal language for sketching out ideas. But for production code it really excels, and I can heartily recommend it!

Future work

Most of the features on the initial list for flat-manager are now there, so I don't expect it to see a lot of work in the near future.

However, there is one more feature that I want to see: the ability to (automatically) create subset versions of the repository. In particular, we want to produce a version of Flathub containing only free software.

I have the initial plans for how this will work, but it is currently blocking on some work inside OSTree itself. I hope this will happen soon though.

19 Mar 2019 1:20pm GMT

16 Mar 2019

Marcus Lundblad: Maps and GNOME 3.32

So, a couple of days ago the GNOME 3.32 release came out and I thought I should share something about the news on the Maps side of things, although I think most of this has been covered in previous posts.

First up we have gotten a new application icon as part of the major overhaul of the icon style.


Furthermore, the application menu has been moved into a "hamburger menu" inside the main window, in line with the other applications in the desktop. This goes hand-in-hand with the gnome-shell top bar no longer showing the application-specified menu, since it was considered not very intuitive and few third-party apps utilized it. But I'm pleased to see that the icon of the currently focused app is still shown in the top bar, as I think this is a good visual cue there.






And the other notable UI fix is showing live-updated thumbnails in the layer selection menu for the buttons that switch between map and aerial view (contributed by James Westman).


These screenshots also show some glimpses of the new GTK theme, which I think is pretty sleek, so well done to the designers!

There have also been some under-the-hood fixes silencing some compiler warnings (for the C glue library), contributed by Debarshi Ray.

Looking forward, I started work on an issue that has been lying around in the bug tracker since I registered it around two years ago (tagged with the "newcomers" tag in the hope someone would take it on :) ). It is about the way we use a GtkOffscreenWindow to render the output when generating printouts of a routing search. This was done by instantiating the same widgets used to render the route instructions in the routing sidebar and then attaching them to an offscreen window to render them to bitmaps. But as this method will not work with GTK 4 (due to a different rendering architecture), this has to eventually be rewritten. So I started rewriting this code to directly use Cairo and Pango to render the icons and text strings for the printed instructions. There are some gotchas with layout and right-to-left locales, but so far I think it's working out right for the turn-based routes, as shown by these screenshots.
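The Cairo and Pango part of that rewrite boils down to something like the following sketch (not the actual Maps code; the function, text and sizes here are made up for illustration, and the real code also renders the instruction icons):

#include <pango/pangocairo.h>

/* Render one instruction string at (x, y), wrapped to a given width. */
static void
draw_instruction (cairo_t *cr, const char *text, double x, double y, int width)
{
  PangoLayout *layout = pango_cairo_create_layout (cr);

  pango_layout_set_text (layout, text, -1);
  pango_layout_set_width (layout, width * PANGO_SCALE);
  pango_layout_set_ellipsize (layout, PANGO_ELLIPSIZE_END);
  /* Pango resolves the text direction per paragraph, which is what
   * makes the right-to-left (e.g. Farsi) case mostly work for free */

  cairo_move_to (cr, x, y);
  pango_cairo_show_layout (cr, layout);

  g_object_unref (layout);
}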




The latter screenshot shows a rendition using a Farsi locale (which is RTL and uses the Arabic script).

That's it for now!

16 Mar 2019 3:04pm GMT

15 Mar 2019

Georges Basile Stavracas Neto: GNOME 3.32 and other ramblings

GNOME 3.32 was released this week. For all intents and purposes, it is a fantastic release, and I am already using the packages provided by Arch Linux's gnome-unstable repository. Congratulations to everyone involved (including the Arch team for the super quick packaging!)

I have a few highlights, comments and thoughts I would like to share about this release, and since I own this blog, well, let me do it publicly! 🙂

Fast, Furiously Fast

The most promoted improvement in this release is the improved performance. Having worked on or reviewed some of these improvements myself, I found it a bit weird that some people were reporting enormous changes in performance. Of course, you should notice that GNOME Shell is smoother, and applications as well (when the compositor reliably sends frame ticks to applications, they draw on time and feel smoother too).

But people were telling me that these changes were game changing.

There is a grey area between the actual improvements and people just being happy and overly excited about them. And I thought the latter was the case.

But then I installed the non-debug packages from the Arch repositories, and this actually is a game-changing release. I probably got used to using Mutter and GNOME Shell manually compiled with all the debug and development junk, and didn't really notice how much better it became.

Better GNOME Music

Sweet, sweet GNOME Music

One of the applications that I enjoy the most in the GNOME core apps ecosystem is GNOME Music. In the past, I have worked on landing various performance improvements on it. Unfortunately, my contributions ceased last year, but I have been following the development of this pretty little app closely.

A lot of effort was put into modernizing GNOME Music, and it is absolutely paying off. It is more stable, better, and I believe it has reached the point where adding new features won't drive contributors insane.

GNOME Web - a gem

In the past, I have tried making Web my main browser. Unfortunately, that did not work out very well, for two big reasons:

Both issues seem to be fixed now! In fact, as you can see from the previous screenshot, I am writing this post from Web. Which makes me super happy.

Even though I cannot use it 100% of the time (mainly due to online banking and Google Meets), I will experiment with making it my main browser for a few weeks and see how it goes.

GNOME Terminal + Headerbars = 💖

Do I even need to say something?

Hackfests

As I write this, I am getting ready for next week's Parental Controls & Metered Data Hackfest in London. We will discuss and try to land in GNOME some downstream features available at Endless OS.

I'm also mentally preparing for the Content Apps Hackfest. And GUADEC. It is somewhat hard once you realize you have travel anxiety, and every week before traveling is a psychological war.

Other Thoughts

This was a peculiar release to me.

This is actually the first release where I spent serious time on Mutter and GNOME Shell. As I said in the past, it's a new passion of mine. Both are complex projects that encompass many aspects of the user experience, and cleaning up the code and improving it has been fantastic so far. As such, it was and still is a challenge to split my time in such a fragmented way (it's not like I don't maintain GNOME Settings, GNOME Calendar, and GNOME To Do already.)

Besides that, I am close to finishing moving to a new home! This is an ongoing process, slow and steady, and the place is becoming something I am growing to love and to feel at home in.

15 Mar 2019 11:53pm GMT

Matthias Clasen: Entries in GTK 4

One of the larger refactorings that recently landed in GTK master is re-doing the entry hierarchy. This post summarizes what has changed, and why we think things are better this way.

Entries in GTK 3

Let's start by looking at how things are in GTK 3.

GtkEntry is the basic class here. It implements the GtkEditable interface. GtkSpinButton is a subclass of GtkEntry. Over the years, more things were added. GtkEntry gained support for entry completion, and for embedding icons, and for displaying progress. And we added another subclass, GtkSearchEntry.

Some problems with this approach are immediately apparent. gtkentry.c is more than 11100 lines of code. It is not only very hard to add more features to this big codebase, it is also hard to subclass it - and that is the only way to create your own entries, since all the single-line text editing functionality is inside GtkEntry.

The GtkEditable interface is really old - it has been around since before GTK 2. Unfortunately, it has not really been successful as an interface - GtkEntry is the only implementation, and it uses the interface functions internally in a confusing way.

Entries in GTK 4

Now let's look at how things are looking in GTK master.

The first thing we've done is to move the core text editing functionality of GtkEntry into a new widget called GtkText. This is basically an entry minus all the extras, like icons, completion and progress.

We've made the GtkEditable interface more useful, by adding some more common functionality (like width-chars and max-width-chars) to it, and made GtkText implement it. We also added helper APIs to make it easy to delegate a GtkEditable implementation to another object.

The 'complex' entry widgets (GtkEntry, GtkSpinButton, GtkSearchEntry) are now all composite widgets, which contain a GtkText child, and delegate their GtkEditable implementation to this child.

Finally, we added a new GtkPasswordEntry widget, which takes over the corresponding functionality that GtkEntry used to have, such as showing a Caps Lock warning

or letting the user peek at the content.

Why is this better?

One of the main goals of this refactoring was to make it easier to create custom entry widgets outside GTK.

In the past, this required subclassing GtkEntry, and navigating a complex maze of vfuncs to override. Now, you can just add a GtkText widget, delegate your GtkEditable implementation to it, and have a functional entry widget with very little effort.
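As a rough sketch of what that looks like (MyEntry is a made-up example type, and the delegation helpers shown are the ones described above; their exact names in GTK master may still change):

#define MY_TYPE_ENTRY (my_entry_get_type ())
G_DECLARE_FINAL_TYPE (MyEntry, my_entry, MY, ENTRY, GtkWidget)

struct _MyEntry
{
  GtkWidget parent_instance;
  GtkWidget *text;   /* the GtkText child doing the actual editing */
};

static GtkEditable *
my_entry_get_delegate (GtkEditable *editable)
{
  return GTK_EDITABLE (MY_ENTRY (editable)->text);
}

static void
my_entry_editable_init (GtkEditableInterface *iface)
{
  iface->get_delegate = my_entry_get_delegate;
}

G_DEFINE_TYPE_WITH_CODE (MyEntry, my_entry, GTK_TYPE_WIDGET,
                         G_IMPLEMENT_INTERFACE (GTK_TYPE_EDITABLE,
                                                my_entry_editable_init))

static void
my_entry_class_init (MyEntryClass *klass)
{
  /* re-export the GtkEditable properties; get_property/set_property
   * should chain to gtk_editable_delegate_get_property() and
   * gtk_editable_delegate_set_property() for them to work */
  gtk_editable_install_properties (G_OBJECT_CLASS (klass), 1);
}

static void
my_entry_init (MyEntry *self)
{
  self->text = gtk_text_new ();
  gtk_widget_set_parent (self->text, GTK_WIDGET (self));
  gtk_editable_init_delegate (GTK_EDITABLE (self));
}

A real widget also needs to measure and allocate the GtkText child, and its dispose handler should call gtk_editable_finish_delegate() before unparenting the child - but that is all bookkeeping, not entry logic.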

And you have a lot of flexibility in adding fancy things around the GtkText component. As an example, we've added a tagged entry to gtk4-demo that can now be implemented easily outside GTK itself.

Will this affect you when porting from GTK 3?

There are a few possible gotchas to keep in mind while porting code to this new style of doing entries.

GtkSearchEntry and GtkSpinButton are no longer derived from GtkEntry. If you see runtime warnings about casting from one of these classes to GtkEntry, you most likely need to switch to using GtkEditable APIs.

GtkEntry and other complex entry widgets are no longer focusable - the focus goes to the contained GtkText instead. But gtk_widget_grab_focus() will still work, and move the focus to the right place. It is unlikely that you are affected by this.

The Caps Lock warning functionality has been removed from GtkEntry. If you were using a GtkEntry with visibility==FALSE for passwords, you should just switch to GtkPasswordEntry.
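The port for that case is small; a minimal sketch (assuming an entry variable of your own):

/* Before (GTK 3): a GtkEntry switched into password mode */
entry = gtk_entry_new ();
gtk_entry_set_visibility (GTK_ENTRY (entry), FALSE);

/* After (GTK 4): the dedicated widget, which also brings back the
 * Caps Lock warning and the peek-at-the-content option */
entry = gtk_password_entry_new ();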

If you are using a GtkEntry for basic editing functionality and don't need any of the extra entry functionality, you should consider using a GtkText instead.

15 Mar 2019 8:52pm GMT

Federico Mena-Quintero: A Rust API for librsvg

After the librsvg team finished the rustification of librsvg's main library, I wanted to start porting the high-level test suite to Rust. This is mainly to be able to run tests in parallel, which cargo test does automatically in order to reduce test times. However, this meant that librsvg needed a Rust API that would exercise the same code paths as the C entry points.

At the same time, I wanted the Rust API to make it impossible to misuse the library. From the viewpoint of the C API, an RsvgHandle has different stages:

To ensure consistency, the public API checks that you cannot render an RsvgHandle that is not completely loaded yet, or one that resulted in a loading error. But wouldn't it be nice if it were impossible to call the API functions in the wrong order?

This is exactly what the Rust API does. There is a Loader, to which you give a filename or a stream, and it will return a fully-loaded SvgHandle or an error. Then, you can only create a CairoRenderer if you have an SvgHandle.

For historical reasons, the C API in librsvg is not perfectly consistent. For example, some functions which return an error will actually return a proper GError, but some others will just return a gboolean with no further explanation of what went wrong. In contrast, all the Rust API functions that can fail will actually return a Result, and the error case will have a meaningful error value. In the Rust API, there is no "wrong order" in which the various API functions and methods can be called; it tries to follow the whole "make invalid states unrepresentable" idea.

To implement the Rust API, I had to do some refactoring of the internals that hook to the public entry points. This made me realize that librsvg could be a lot easier to use. The C API has always forced you to call it in this fashion:

  1. Ask the SVG for its dimensions, or how big it is.
  2. Based on that, scale your Cairo context to the size you actually want.
  3. Render the SVG to that context's current transformation matrix.
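In code, that dance looks roughly like this (a sketch of the classic C API; the cr Cairo context and the target size are assumed to exist, and error handling is omitted):

RsvgHandle *handle = rsvg_handle_new_from_file ("drawing.svg", NULL);
RsvgDimensionData dim;

/* 1. ask for the (integer!) dimensions */
rsvg_handle_get_dimensions (handle, &dim);

/* 2. do the scaling math yourself */
cairo_scale (cr, target_width / dim.width, target_height / dim.height);

/* 3. render to the context's current transformation matrix */
rsvg_handle_render_cairo (handle, cr);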

But first, (1) gives you inadequate information because rsvg_handle_get_dimensions() returns a structure with int fields for the width and height. The API is similar to gdk-pixbuf's in that it always wants to think in whole pixels. However, an SVG is not necessarily integer-sized.

Then, (2) forces you to calculate some geometry in almost all cases, as most apps want to render SVG content scaled proportionally to a certain size. This is not hard to do, but it's an inconvenience.

SVG dimensions

Let's look at (1) again. The question, "how big is the SVG" is a bit meaningless when we consider that SVGs can be scaled to any size; that's the whole point of them!

When you ask RsvgHandle how big it is, in reality it should look at you and whisper in your ear, "how big do you want it to be?".

And that's the thing. The HTML/CSS/SVG model is that one embeds content into viewports of a given size. The software is responsible for scaling the content to fit into that viewport.

In the end, what we want is a rendering function that takes a Cairo context and a Rectangle for a viewport, and that's it. The function should take care of fitting the SVG's contents within that viewport.

There is now an open bug about exactly this sort of API. In the end, programs should just have to load their SVG handle, and directly ask it to render at whatever size they need, instead of doing the size computations by hand.

When will this be available?

I'm in the middle of a rather large refactor to make this viewport concept really work. So far this involves:

I want to make the Rust API available for the 2.46 release, which is hopefully not too far off. It should be ready for the next GNOME release. In the meantime, you can check out the open bugs for the 2.46.0 milestone. Help is appreciated; the deadline for the first 3.33 tarballs is approximately one month from now!

15 Mar 2019 7:36pm GMT

Peter Hutterer: libinput's internal building blocks

Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeers to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly what libinput's architecture looks like. It'll be to libinput what a Duplo car is to a Maserati. Four wheels and something to entertain the kids with, but the queue outside the nightclub won't be impressed.

The target audience are those that need to hack on libinput and where the balance of understanding vs total confusion is still shifted towards the latter. So in order to make it easier to associate various bits, here's a description of the main building blocks.

libinput uses something resembling OOP except that in C you can't have nice things unless what you want is a buffer overflow\n\80xb1001af81a2b1101. Instead, we use opaque structs, each with accessor methods and an unhealthy amount of verbosity. Because Python does have classes, those structs are represented as classes below. This all won't be actual working Python code, I'm just using the syntax.

Let's get started. First of all, let's create our library interface.


class Libinput:
    @classmethod
    def path_create_context(cls):
        return _LibinputPathContext()

    @classmethod
    def udev_create_context(cls):
        return _LibinputUdevContext()

    # dispatch() means: read from all our internal fds and
    # call the dispatch method on anything that has changed
    def dispatch(self):
        for fd in self.epoll_fd.get_changed_fds():
            self.handlers[fd].dispatch()

    # return whatever the next event is
    def get_event(self):
        return self._events.pop(0)

    # the various _notify functions are internal API
    # to pass things up to the context
    def _notify_device_added(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.append(device)

    def _notify_device_removed(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.remove(device)

    def _notify_pointer_motion(self, x, y):
        self._events.append(LibinputEventPointer(x, y))


class _LibinputPathContext(Libinput):
    def add_device(self, device_node):
        device = LibinputDevice(device_node)
        self._notify_device_added(device)

    def remove_device(self, device_node):
        self._notify_device_removed(device)


class _LibinputUdevContext(Libinput):
    def __init__(self):
        self.udev = udev.context()

    def udev_assign_seat(self, seat_id):
        self.seat_id = seat_id

        for udev_device in self.udev.devices():
            device = LibinputDevice(udev_device.device_node)
            self._notify_device_added(device)


We have two different modes of initialisation, udev and path. The udev interface is used by Wayland compositors and adds all devices on the given udev seat. The path interface is used by the X.Org driver and adds only one specific device at a time. Both interfaces have the dispatch() and get_event() methods, which is how every caller gets events out of libinput.

In both cases we create a libinput device from the data and create an event about the new device that bubbles up into the event queue.

But what really are events? Are they real or just a fidget spinner of our imagination? Well, they're just another object in libinput.


class LibinputEvent:
    @property
    def type(self):
        return self._type

    @property
    def context(self):
        return self._libinput

    @property
    def device(self):
        return self._device

    def get_pointer_event(self):
        if isinstance(self, LibinputEventPointer):
            return self  # This makes more sense in C where it's a typecast
        return None

    def get_keyboard_event(self):
        if isinstance(self, LibinputEventKeyboard):
            return self  # This makes more sense in C where it's a typecast
        return None


class LibinputEventPointer(LibinputEvent):
    @property
    def time(self):
        return self._time / 1000

    @property
    def time_usec(self):
        return self._time

    @property
    def dx(self):
        return self._dx

    @property
    def absolute_x(self):
        return self._x * self._x_units_per_mm

    @property
    def absolute_x_transformed(self, width):
        return self._x * width / self._x_max_value

You get the gist. Each event is actually an event of a subtype with a few common shared fields and a bunch of type-specific ones. The events often contain some internal value that is calculated on request. For example, the API for the absolute x/y values returns mm, but we store the value in device units instead and convert to mm on request.

So, what's a device then? Well, just another I-cant-believe-this-is-not-a-class with relatively few surprises:


class LibinputDevice:
    class Capability(Enum):
        CAP_KEYBOARD = 0
        CAP_POINTER = 1
        CAP_TOUCH = 2
        ...

    def __init__(self, device_node):
        pass  # no-one instantiates this directly

    @property
    def name(self):
        return self._name

    @property
    def context(self):
        return self._libinput_context

    @property
    def udev_device(self):
        return self._udev_device

    @property
    def has_capability(self, cap):
        return cap in self._capabilities

    ...

Now we have most of the frontend API in place and you start to see a pattern. This is how all of libinput's API works, you get some opaque read-only objects with a few getters and accessor functions.

Now let's figure out how to work on the backend. For that, we need something that handles events:


class EvdevDevice(LibinputDevice):
    def __init__(self, device_node):
        self.fd = open(device_node)
        super().context.add_fd_to_epoll(self.fd, self.dispatch)
        self.initialize_quirks()

    def has_quirk(self, quirk):
        return quirk in self.quirks

    def dispatch(self):
        while True:
            data = self.fd.read(input_event_byte_count)
            if not data:
                break

            self.interface.dispatch_one_event(data)

    def _configure(self):
        # some devices are adjusted for quirks before we
        # do anything with them
        if self.has_quirk(SOME_QUIRK_NAME):
            self.libevdev.disable(libevdev.EV_KEY.BTN_TOUCH)

        if 'ID_INPUT_TOUCHPAD' in self.udev_device.properties:
            self.interface = EvdevTouchpad()
        elif 'ID_INPUT_SWITCH' in self.udev_device.properties:
            self.interface = EvdevSwitch()
        ...
        else:
            self.interface = EvdevFallback()


class EvdevInterface:
    def dispatch_one_event(self, event):
        pass


class EvdevTouchpad(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevTablet(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevSwitch(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevFallback(EvdevInterface):
    def dispatch_one_event(self, event):
        ...

Our evdev device is actually a subclass (well, C, *handwave*) of the public device and its main function is "read things off the device node". And it passes that on to a magical interface. Other than that, it's a collection of generic functions that apply to all devices. The interfaces are where most of the real work is done.

The interface is decided on by the udev type and is where the device specifics happen. The touchpad interface deals with touchpads, the tablet and switch interfaces with those devices, and the fallback interface is the one for mice, keyboards and touch devices (i.e. the simple devices).

Each interface has very device-specific event processing and can be compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.

The device quirks used above are another simple block:


class Quirks:
    def __init__(self):
        self.read_all_ini_files_from_directory('$PREFIX/share/libinput')

    def has_quirk(self, device, quirk):
        for file in self.quirks:
            if (quirk.has_match(device.name) or
                    quirk.has_match(device.usbid) or
                    quirk.has_match(device.dmi)):
                return True
        return False

    def get_quirk_value(self, device, quirk):
        if not self.has_quirk(device, quirk):
            return None

        quirk = self.lookup_quirk(device, quirk)
        if quirk.type == "boolean":
            return bool(quirk.value)
        if quirk.type == "string":
            return str(quirk.value)
        ...

A system that reads a bunch of .ini files, caches them and returns their value on demand. Those quirks are then used to adjust device behaviour at runtime.

The next building block is the "filter" code, which is the word we use for pointer acceleration. Here too we have a two-layer abstraction with an interface.


class Filter:
    def dispatch(self, x, y):
        # converts device-unit x/y into normalized units
        return self.interface.dispatch(x, y)

    # the 'accel speed' configuration value
    def set_speed(self, speed):
        return self.interface.set_speed(speed)

    # the 'accel speed' configuration value
    def get_speed(self):
        return self.speed

    ...


class FilterInterface:
    def dispatch(self, x, y):
        pass


class FilterInterfaceTouchpad:
    def dispatch(self, x, y):
        ...


class FilterInterfaceTrackpoint:
    def dispatch(self, x, y):
        ...


class FilterInterfaceMouse:
    def dispatch(self, x, y):
        self.history.push((x, y))
        v = self.calculate_velocity()
        f = self.calculate_factor(v)
        return (x * f, y * f)

    def calculate_velocity(self):
        for delta in self.history:
            total += delta
        velocity = total / timestamp  # as illustration only

    def calculate_factor(self, v):
        # this is where the interesting bit happens,
        # let's assume we have some magic function
        f = v * 1234 / 5678
        return f

So libinput calls filter_dispatch on whatever filter is configured and passes the result on to the caller. The setup of those filters is handled in the respective evdev interface, similar to this:


class EvdevFallback:
    ...
    def init_accel(self):
        if self.udev_type == 'ID_INPUT_TRACKPOINT':
            self.filter = FilterInterfaceTrackpoint()
        elif self.udev_type == 'ID_INPUT_TOUCHPAD':
            self.filter = FilterInterfaceTouchpad()
        ...

The advantage of this system is twofold. First, the main libinput code only needs one place where we really care about which acceleration method we have. And second, the acceleration code can be compiled separately for analysis and to generate pretty graphs. See the pointer acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.

Finally, we have one more building block - configuration options. They're a bit different in that they're all similar-ish but only to make switching from one to the next a bit easier.


class DeviceConfigTap:
    def set_enabled(self, enabled):
        self._enabled = enabled

    def get_enabled(self):
        return self._enabled

    def get_default(self):
        return False


class DeviceConfigCalibration:
    def set_matrix(self, matrix):
        self._matrix = matrix

    def get_matrix(self):
        return self._matrix

    def get_default(self):
        return [1, 0, 0, 0, 1, 0, 0, 0, 1]

And then the devices that need one of those slot them into the right pointer in their structs:


class EvdevFallback:
    ...
    def init_calibration(self):
        self.config_calibration = DeviceConfigCalibration()
        ...

    def handle_touch(self, x, y):
        if self.config_calibration is not None:
            matrix = self.config_calibration.get_matrix()

            x, y = matrix.multiply(x, y)
            self.context._notify_pointer_abs(x, y)

And that's basically it, those are the building blocks libinput has. The rest is detail. Lots of it, but if you understand the architecture outline above, you're most of the way there in diving into the details.

15 Mar 2019 6:15am GMT

14 Mar 2019

Emmanuele Bassi: A little testing

Years ago I started writing Graphene as a small library of 3D transformation-related math types to be used by GTK (and possibly Clutter, even if that didn't pan out until Georges started working on the Clutter fork inside Mutter).

Graphene's only requirement is a C99 compiler and a decent toolchain capable of either taking SSE builtins or supporting vectorization on appropriately aligned types. This means that, unless you decide to enable the GObject types for each Graphene type, Graphene doesn't really need GLib types or API - except that's a bit of a lie.

As I wanted to test what I was doing, Graphene has an optional build time dependency on GLib for its test suite; the library itself does not use anything from GLib, but if you want to build and run the test suite then you need to have GLib installed.

This build time dependency makes testing Graphene on Windows a lot more complicated than it ought to be. For instance, I need to install a ton of packages when using the MSYS2 toolchain on the CI instance on AppVeyor, which takes roughly 6 minutes each for the 32bit and the 64bit builds; and I can't build the test suite at all when using MSVC, because then I'd have to download and build GLib as well - and just to access the GTest API, which I don't even like.


What's wrong with GTest

GTest is kind of problematic - outside of Google hijacking the name of the API for their own testing framework, which makes looking for it a pain. GTest is a lot more complicated than a small unit testing API needs to be, for starters; it was originally written to be used with a specific harness, gtester, in order to generate a very brief HTML report using gtester-report, including some timing information on each unit - except that gtester is now deprecated because the build system gunk to make it work was terrible to deal with. So, we pretty much told everyone to stop bothering, add a --tap argument when calling every test binary, and use the TAP harness in Autotools.

Of course, this means that the testing framework now has a completely useless output format, and with it, a bunch of default behaviours driven by said useless output format, and we're still deciding if we should break backward compatibility to ensure that the supported output format has a sane default behaviour.

On top of that, GTest piggybacks on GLib's own assertion mechanism, which has two major downsides:

To solve the first problem we added a lot of wrappers around g_assert(), like g_assert_true() and g_assert_no_error(), that won't be disabled depending on your build options and thus won't break your test suite - and if your test suite is still using g_assert(), you're strongly encouraged to port to the newer API. The second issue is still standing, and makes running a GTest-based test suite under any harness a pain, but especially under a TAP harness, which requires listing the number of tests you've run, or that you're planning to run.
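To make the difference concrete, here's a tiny illustration (the value is made up):

/* With G_DISABLE_ASSERT defined, the first check is compiled out and the
 * test silently stops testing anything; the wrapper below keeps working
 * and prints both values on failure. */
g_assert (result == 42);
g_assert_cmpint (result, ==, 42);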

The remaining issues of GTest are the convoluted way to add tests using a unique path; the bizarre pattern matching API for warnings and errors; the whole sub-process API that relaunches the test binary and calls a single test unit in order to allow it to assert safely and capture its output. It's very much the GLib test suite, except when it tries to use non-GLib API internally, like the command line option parser, or its own logging primitives; it's also sorely lacking in the GObject/GIO side of things, so you can't use standard API to create a mock GObject type, or a mock GFile.

If you want to contribute to GLib, then working on improving the GTest API would be a good investment of your time; since my project does not depend on GLib, though, I had the chance of starting with a clean slate.


A clean slate

For the last couple of years I've been playing off and on with a small test framework API, mostly inspired by BDD frameworks like Mocha and Jasmine. Behaviour Driven Development is kind of a buzzword, like test driven development, but I particularly like the idea of describing a test suite in terms of specifications and expectations: you specify what a piece of code does, and you match results to your expectations.

The API for describing the test suites is modelled on natural language (assuming your language is English, sadly):

  describe("your data type", function() {
    it("does something", () => {
      expect(doSomething()).toBe(true);
    });
    it("can greet you", () => {
      let greeting = getHelloWorld();
      expect(greeting).not.toBe("Goodbye World");
    });
  });

Of course, C is more verbose that JavaScript, but we can adopt a similar mechanism:

static void
something (void)
{
  expect ("doSomething",
    bool_value (do_something ()),
    to_be, true,
    NULL);
}

static void
greet (void)
{
  const char *greeting = get_hello_world ();

  expect ("getHelloWorld",
    string_value (greeting),
    not, to_be, "Goodbye World",
    NULL);
}

static void
type_suite (void)
{
  it ("does something", something);
  it ("can greet you", greet);
}


  describe ("your data type", type_suite);

If only C11 got blocks from Clang, this would look a lot less clunky.

The value wrappers are also necessary, because C is only type safe as long as every type you have is an integer.
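As an illustration of why the wrappers are needed (this is not µTest's actual implementation, just the usual shape of such a thing): a little tagged value type lets a variadic expect() inspect its arguments safely.

#include <stdbool.h>

typedef struct {
  enum { VALUE_BOOL, VALUE_INT, VALUE_STRING } type;
  union {
    bool v_bool;
    int v_int;
    const char *v_str;
  } data;
} value_t;

static value_t
bool_value (bool v)
{
  value_t res = { .type = VALUE_BOOL, .data.v_bool = v };
  return res;
}

static value_t
string_value (const char *s)
{
  value_t res = { .type = VALUE_STRING, .data.v_str = s };
  return res;
}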

Since we're good C citizens, we should namespace the API, which requires naming this library - let's call it µTest, in a fit of unoriginality.

One of the nice bits of Mocha and Jasmine is the output of running a test suite:

$ ./tests/general 

  General
    contains at least a spec with an expectation
      ✓ a is true
      ✓ a is not false

      2 passing (219.00 µs)

    can contain multiple specs
      ✓ str contains 'hello'
      ✓ str contains 'world'
      ✓ contains all fragments

      3 passing (145.00 µs)

    should be skipped
      - skip this test

      0 passing (31.00 µs)
      1 skipped


Total
5 passing (810.00 µs)
1 skipped

Or, with colors:

Using colors means immediately taking this more seriously

The colours automatically go away if you redirect the output to something that is not a TTY, so your logs won't be messed up by escape sequences.

If you have a test harness, then you can use the MUTEST_OUTPUT environment variable to control the output; for instance, if you're using TAP you'll get:

$ MUTEST_OUTPUT=tap ./tests/general
# General
# contains at least a spec with an expectation
ok 1 a is true
ok 2 a is not false
# can contain multiple specs
ok 3 str contains 'hello'
ok 4 str contains 'world'
ok 5 contains all fragments
# should be skipped
ok 6 # skip: skip this test
1..6

Which can be passed through to prove to get:

$ MUTEST_OUTPUT=tap prove ./tests/general
./tests/general .. ok
All tests successful.
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
Result: PASS

I'm planning to add some additional output formatters, like JSON and XML.


Using µTest

Ideally, µTest should be used as a sub-module or a Meson sub-project of your own; if you're using it as a sub-project, you can tell Meson to build a static library that won't get installed on your system, e.g.:

mutest_dep = dependency('mutest-1',
  fallback: [ 'mutest', 'mutest_dep' ],
  default_options: ['static=true'],
  required: false,
  disabler: true,
)

# Or, if you're using Meson < 0.49.0
mutest_dep = dependency('mutest-1', required: false)
if not mutest_dep.found()
  mutest = subproject('mutest',
    default_options: [ 'static=true', ],
    required: false,
  )

  if mutest.found()
    mutest_dep = mutest.get_variable('mutest_dep')
  else
    mutest_dep = disabler()
  endif
endif

Then you can make the tests conditional on mutest_dep.found().

µTest is kind of experimental, and I'm still breaking its API in places, as a result of documenting it and trying it out, by porting the Graphene test suite to it. There's still a bunch of API that I'd like to land, like custom matchers/formatters for complex data types, and a decent way to skip a specification or a whole suite; plus, as I said above, some additional formatted output.

If you have feedback, feel free to open an issue - or a pull request, wink wink nudge nudge.

14 Mar 2019 3:01pm GMT