19 Oct 2017


Christian Schaller: Looking back at Fedora Workstation so far

So I have over the last few years blogged regularly about upcoming features in Fedora Workstation. As we are putting the finishing touches on Fedora Workstation 27, I thought I should try to look back at everything we have achieved since Fedora Workstation was launched with Fedora 21. The efforts I highlight here are efforts where we did significant or most of the development. There are of course a lot of other big changes that have happened over the last few years in the wider community that we leveraged and offer in Fedora Workstation; examples include things like Meson and Rust. This post is not about those, but I do want to write a post at some point just about the achievements of the wider community, because they are very important and crucial too. Along the same lines, this post will not cover the large number of improvements and bugfixes that we contributed to a long list of projects, including GNOME itself. This blog is about taking stock and taking some pride in what we have achieved so far and the major hurdles we passed on our way to improving the Linux desktop experience.
This blog is also slightly different from my normal format as I will not call out individual developers by name as I usually do; instead I will treat this as a team effort and simply say 'we'.

I am sure I missed something, but this is at least a decent list of Fedora Workstation highlights for the last few years. Next up: working on my Fedora Workstation 27 blog post :)

19 Oct 2017 6:50pm GMT

Sébastien Wilmet: List of GNOME-related projects fundraisers

I think it's useful to have a list of project fundraisers in GNOME, or at least GNOME-related ones. Ideally it would be nice to have that list on the gnome.org website; it seems to me an obvious thing to do, but after a discussion on the GNOME foundation-list, it seems unlikely to happen anytime soon.

So I've created this wiki page in the meantime. It explains the difference from donations made to the GNOME Foundation, and provides a list of individual project fundraisers.

The list includes the GtkSourceView fundraiser that I launched last month. I plan to write regular updates on that front on this blog, for example every two months. Stay tuned, and thanks for your support :-)

19 Oct 2017 11:23am GMT

18 Oct 2017


Christian Schaller: Fleet Commander ready for takeoff!

Alberto Ruiz just announced Fleet Commander as production ready! Fleet Commander is our new tool for managing large deployments of Fedora Workstation and RHEL desktop systems. So head over to Alberto's Fleet Commander blog post for all the details.

18 Oct 2017 12:05pm GMT

Didier Roche: Ubuntu GNOME Shell in Artful: Day 16

All good things must come to an end; however, in this particular case, it's rather a beginning! We are indeed almost done on our road to Artful, which means that 17.10 is just around the corner: the official Ubuntu 17.10 release is due tomorrow. Of course, it doesn't mean we stop working on it right away: you will have bug fixes and security updates for 9 months of support! It's thus time to close this series on Artful, and for this, we are going to tackle one topic we didn't get to yet, which is quite important approaching the release: upgrading from a previous Ubuntu release! For more background on our current transition to GNOME Shell in Artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 16: Flavors, upgrades and sessions!

Different kind of sessions

Any new Ubuntu installation will have at most two sessions available, both based on the "Ubuntu" name:

Ubuntu default installation sessions

Those two sessions are available when you install the ubuntu-session package.

However, more sessions built around GNOME technologies are available in the Ubuntu archives: the Unity and vanilla GNOME ones. The first is available as soon as you install the unity-session binary package. The vanilla GNOME sessions simply appear once gnome-session is installed. After a reboot, GDM presents all of them for selection when logging in.

All available sessions

Let's see how that goes on upgrades.

Upgrading from Ubuntu 17.04 or Ubuntu 16.04 (LTS)

People running Ubuntu 17.04 or our last LTS, Ubuntu 16.04, today are generally using Unity. As with every release, when people are on one default we upgrade them to the new default. It means that on upgrade, those users will reboot into our default Ubuntu GNOME Shell experience, with the "Ubuntu" and "Ubuntu on Xorg" sessions available. The "Ubuntu" session is the default and will lead you to our default and fully supported desktop:

Ubuntu GNOME Shell on 17.10

However, we don't remove packages on upgrade when they are still available in the distribution. Those users will thus have an additional "Unity" session option, which they can use to continue running Unity 7 (and thus, on Xorg only). Indeed, Unity is still present, in universe (meaning we don't commit to strong maintenance or security updates), but we will continue to have a look at it on a best-effort basis (at least until our next LTS). Some features are slightly modified, either to avoid colliding with the GNOME Shell experience or to follow the upstream philosophy more closely, like the headerbars I mentioned in my previous blog post. In a nutshell, don't expect the exact same experience that you used to have, but you will find similar familiarity with the main concepts and components.

Unity on 17.10

Upgrading from Ubuntu GNOME 17.04 or Ubuntu GNOME 16.04

Those people were experiencing a more vanilla upstream GNOME experience than our Ubuntu session. It was a bit of a 50/50 call on what to do for those users on upgrade, as they were used to something different. In the end, Ubuntu GNOME users will keep the two upstream vanilla GNOME sessions ("GNOME" and "GNOME on Xorg"), and those will stay the default after upgrade.

Vanilla GNOME Shell session on 17.10

In addition to those sessions, we still want to give our users an easy option to try our new default experience, and thus the two "Ubuntu" sessions (Wayland & Xorg) are automatically installed on upgrade as well. The sessions are just around for users' convenience. :)

Fallback

I want to quickly mention and give kudos to Olivier, who fixed a pet bug of mine to ensure that the automatic Wayland-to-Xorg fallback always falls back to the correct session (Ubuntu falls back to Ubuntu on Xorg, and GNOME to GNOME on Xorg). His patches were discussed upstream and are now committed in the gdm tree. This will quickly be made available as a stable release update, which is convenient as it only impacts upgrades.

In a nutshell

To sum all that up:

And this is it for our long "road to Artful" blog post series! I hope you had as much fun reading it as I had writing it and detailing the work done by the Ubuntu desktop team to make this transition, we hope, a success. It was really great as well to be able to interact with and answer the many comments that you posted in the dedicated section. Thanks to everyone participating there.

You can comment on the community HUB and participate and contribute from there! We will likely redo the same experiment and keep you posted on our technical advancement for the Ubuntu 18.04 LTS release. You should expect fewer posts, of course, as the changes shouldn't be as drastic as they were this cycle. We will mostly focus on stabilization, bug fixes and general polish!

Until then, enjoy the upcoming Ubuntu 17.10 release, watch the ubuntu.com website for the release announcements for desktop, servers, flavors, IoT and cloud, join our community HUB… and see you around soon! :)

Didier

18 Oct 2017 10:24am GMT

Javier Martinez: Automatic LUKS volumes unlocking using a TPM2 chip

I joined Red Hat a few months ago, and have been working on improving the Trusted Platform Module 2.0 (TPM2) tooling, towards having better TPM2 support for Fedora on UEFI systems.

For brevity I won't explain in this post what TPMs are and their features, but assume that readers are already familiar with trusted computing in general. Instead, I'll explain what we have been working on, the approach used and what you might expect on Fedora soon.

For an introduction to TPM, I recommend Matthew Garrett's excellent posts about the topic, Philip Tricca's presentation about TPM2 and the official Trusted Computing Group (TCG) specifications. I also found the "A Practical Guide to TPM 2.0" book to be much easier to digest than the official TCG documentation. The book is an open access one, which means that it's freely available.

LUKS volumes unlocking using a TPM2 device

Encryption of data at rest is a key component of security. LUKS provides the ability to encrypt Linux volumes, including both data volumes and the root volume containing the OS. The OS can provide the crypto keys for data volumes, but something has to provide the key for the root volume to allow the system to boot.

The most common way to provide the crypto key to unlock a LUKS volume is to have a user type in a LUKS pass-phrase during boot. This works well for laptop and desktop systems, but is not well suited for servers or virtual machines, since it is an obstacle to automation.

So the first TPM feature we want to add to Fedora (and likely one of the most common use cases for a TPM) is the ability to bind a LUKS volume master key to a TPM2. That way the volume can be automatically unlocked (without typing a pass-phrase) by using the TPM2 to obtain the master key.

A key point here is that the actual LUKS master key is not present in plain text form on the system, it is protected by TPM encryption.

Also, by sealing the LUKS master key with a specific set of Platform Configuration Registers (PCR), one can make sure that the volume will only be unlocked if the system has not been tampered with. For example (as explained in this post), PCR7 is used to measure the UEFI Secure Boot policy and keys. So the LUKS master key can be sealed against this PCR, to avoid unsealing it if Secure Boot was disabled or the keys in use were replaced.

Implementation details: Clevis

Clevis is a pluggable framework for automated decryption that has a number of "pins", where each pin implements {en,de}cryption support using a different backend. It also has a command line interface to {en,de}crypt data using these pins, create complex security policies and bind a pin to a LUKS volume to later unlock it.

Clevis relies on the José project, which is a C implementation of the Javascript Object Signing and Encryption (JOSE) standard. It also uses the LUKSMeta project to store Clevis pin metadata in a LUKS volume header.

On encryption, a Clevis pin takes some data to encrypt and a JSON configuration to produce a JSON Web Encryption (JWE) content. This JWE has the data encrypted using a JSON Web Key (JWK) and information on how to obtain the JWK for decryption.

On decryption, the Clevis pin obtains a JWK using the information provided by a JWE and decrypts the ciphertext also stored in the JWE using that key.

Each Clevis pin defines its own JSON configuration format, how the JWK is created, where it is stored and how to retrieve it.

As mentioned, Clevis has support for binding a pin with a LUKS volume. This means that a LUKS master key is encrypted using a pin and the resulting JWE is stored in the LUKS volume's metadata header. That way Clevis is able to later decrypt the master key and unlock the LUKS volume. Clevis has dracut and udisks2 support to do this automatically, and the next version of Clevis will also include a command line tool to unlock non-root (data) volumes.

Clevis TPM2 pin

Clevis provides a mechanism to automatically supply the LUKS master key for the root volume. The initial implementation of Clevis has support to obtain the LUKS master key from a network service, but we have extended Clevis to take advantage of a TPM2 chip, which is available on most servers, desktops and laptops.

By using a TPM, the disk can only be unlocked on a specific system - the disk will neither boot nor be accessible on another machine.

This implementation also works with UEFI Secure Boot, which will prevent the system from being booted if the firmware or system configuration has been modified or tampered with.

To make use of all the Clevis infrastructure and also be able to use the TPM2 as a part of more complex security policies, the TPM2 support was implemented as a clevis tpm2 pin.

On encryption, the tpm2 pin generates a JWK, creates an object in the TPM2 with the JWK as sensitive data, and binds the object to the TPM2 (or seals it, if a PCR set is defined in the JSON configuration).

The generated JWE contains both the public and wrapped sensitive portions of the created object, as well as information on how to unseal it from the TPM2 (hashing and key encryption algorithms used to recalculate the primary key, PCR policy for authentication, etc).

On decryption the tpm2 pin takes the JWE that contains both the sealed object and information on how to unseal it, loads the object into the TPM2 by using the public and wrapped sensitive portions and unseals the JWK to decrypt the ciphertext stored in the JWE.

The changes haven't been merged yet, since the pin uses features from tpm2-tools master, so we have to wait for the next release of the tools. There are also still some details being discussed on the pull request, but it should be ready to land soon.

Usage

The Clevis command line tools can be used to encrypt and decrypt data using a TPM2 chip. The tpm2 pin has reasonable defaults, but one can configure most of its parameters using the pin JSON configuration (refer to the Clevis tpm2 pin documentation for these), e.g.:

$ echo foo | clevis encrypt tpm2 '{}' > secret.jwe

And then the data can later be decrypted with:

$ clevis decrypt < secret.jwe
foo

To seal data against a set of PCRs:

$ echo foo | clevis encrypt tpm2 '{"pcr_ids":"8,9"}' > secret.jwe

And to bind a tpm2 pin to a LUKS volume:

$ clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'

The LUKS master key is not stored in raw format; instead it is wrapped with a JWK that has the same entropy as the LUKS master key. It's this JWK that is sealed with the TPM2.

Since Clevis has both dracut and udisks2 hooks, the command above is enough to have the LUKS volume be automatically unlocked using the TPM2.

The next version of Clevis also has a clevis-luks-unlock command line tool, so a LUKS volume could be manually unlocked with:

$ clevis luks unlock -d /dev/sda3

Using the TPM2 as a part of more complex security policies

One of the pins Clevis supports is the Shamir Shared Secret (SSS) pin, which allows encrypting a secret using a JWK that is then split into different parts. Each part is then encrypted using another pin, and a threshold is chosen to decide how many parts are needed to reconstruct the encryption key, so the secret can be decrypted.

This allows, for example, splitting the JWK used to wrap the LUKS master key into two parts. One part of the JWK could be sealed with the TPM2 and the other part stored on a remote server. By sealing a JWK that is only one part of the key needed to decrypt the LUKS master key, an attacker obtaining the data sealed in the TPM won't be able to unlock the LUKS volume.

The Clevis encrypt command for this particular example would be:

$ clevis luks bind -d /dev/sda3 sss '{"t": 2, "pins": \
  {"http":{"url":"http://server.local/key"}, "tpm2": \
  {"pcr_ids":"7"}}}'

Limitations of this approach

One problem with the current implementation is that Clevis is a user-space tool and so it can't be used to unlock a LUKS volume that has an encrypted /boot directory. The boot partition still needs to remain unencrypted so the bootloader is able to load a Linux kernel and an initramfs that contains Clevis, to unlock the encrypted LUKS volume for the root partition.

Since the initramfs is not signed on a Secure Boot setup, an attacker could replace the initramfs and unlock the LUKS volume. So the threat model this is meant to protect against is an attacker who can get access to the encrypted volume but not to the trusted machine.

There are different approaches to solve this limitation. The previously mentioned post from Matthew Garrett suggests having a small initramfs that's built into the signed Linux kernel. The only task of this built-in initramfs would be to unseal the LUKS master key, store it in the kernel keyring and extend PCR7 so the key can't be unsealed again. Later, the usual initramfs can unlock the LUKS volume by using the key already stored in the Linux kernel.

Another approach is to also have the /boot directory in an encrypted LUKS volume and provide support in the bootloader to unseal the master key with the TPM2, for example by supporting the same JWE format in the LUKS metadata header used by Clevis. That way only a signed bootloader would be able to unlock the LUKS volume that contains /boot, so an attacker won't be able to tamper with the system by replacing the initramfs, since it will be in an encrypted partition.

But there is work to be done for both approaches, so it will take some time until we have protection for this threat model.

Still, having an encrypted root partition that is only automatically unlocked on a trusted machine has many use cases. To list a few examples:

Acknowledgements

I would like to thank Nathaniel McCallum and Russell Doty for their feedback and suggestions for this article.


18 Oct 2017 9:09am GMT

16 Oct 2017


Havoc Pennington: Build a workbench in 2 years

(This post has nothing to do with software, move along if you aren't into woodworking…)

Here's my somewhat different workbench design (based on Christopher Schwarz's "Knockdown Nicholson" plans which became this article), thinking someone out there might be interested. Also it's a lesson in how to make simple plans ten times more complicated - whether you'd like to avoid or imitate my example, I'll leave to you.

Here's my finished workbench. (Someone actually skilled could build something prettier! But I think my bench will work well for decades if I'm fortunate.)

alt

And here's a tweet with the pile of wood I started with two and a half years ago:

~400 pounds of maple that I'm feeling a bit intimidated by. will be a new workbench, after a lot of work. pic.twitter.com/L6aiOs8DEB

- Havoc Pennington (@havocp) March 8, 2015

The Popular Woodworking article suggests "With $100 in lumber and two days, you can build this sturdy stowaway bench" to which I say, hold my beer. I can spend a lot more, and take way more time!

I have many excuses: my bench project is 2.5 years old, and my daughter is just over 2 years old. I started a new job and then left that to start a company. I probably had a few hours of shop time on average per week, and I worked on a lot of shop projects that weren't the workbench itself.

Excuses only account for the calendar time though, not the shop time. This was closer to a 200-hour project for me than a two-day project.

How do you make a workbench project take ten times longer? (Other than "be an inexperienced hobbyist woodworker"?)

1. Use hardwood rather than southern yellow pine

I bought a pile of soft maple thinking it'd be a mostly-cosmetic choice. I'm very glad the finished bench is maple, but it slowed me down in several ways:

When I started this, I'd just finished a couple of sawbenches and some bookshelves made of pine, and I was sick of the stuff; it's horrible. (Bias I will admit to: I have childhood memories of walking through stands of loblolly pine trees in 95-degree Georgia heat, getting ticks and sweating; loblolly pine offers no shade to speak of. Loblolly "forests" are the worst.)

2. Make it 8 feet long

Unfortunately, both jointer planes and powered jointers are designed for up to 6′ boards. 8′ boards not only have more surface area, they are also too long for the jointers. 8′ seems like it should be 33% more work than 6′, but it isn't linear like that because the required skill level goes up.

I started this project to solve a too-short-bench problem. My old bench is based on these Ana White plans. I fixed that one up to be coplanar on the front, added a vise, and added a bunch of weight; it's hideous but it does permit handwork with my modifications… as long as your boards are no longer than about 2.5 feet. The first time I tried to make a project with longer boards, I discovered I'd need a new bench.

My bench isn't only longer than the original plans; everything is larger-scale. Most of the 8/4 lumber came out about 1-3/4″ rather than 1-1/2″ like construction lumber. The legs are 8″ wide rather than 5-1/2″ wide, and the top is 2-1/4″ thick all the way to the edge.

3. No power saws

I started to do this entirely with hand tools; after a while I caved and got a nice jointer/planer machine. Milling these boards completely by hand was beyond my hobbyist pay grade. That said, every part of the bench still required significant hand-planing, and I didn't use power saws or routers. I'd guess I spent hours just cleaning up ends with a shooting board.

happy about success keeping this 8' rip cut straight and square the whole way. practice paying off 🙂 pic.twitter.com/s6BXuZbwFV

- Havoc Pennington (@havocp) March 26, 2016

If I built this bench again, I'd probably get a track saw, which would save a little sawing time and a LOT of cleanup-planing time.

4. Attach the top to the aprons rather than the leg assemblies

After I started the project, I realized that the original Knockdown Nicholson design doesn't allow for much wood movement. Southern yellow pine doesn't move too much, and I was worried maple would cause a problem. Maybe it would have, maybe not, I don't know.

Rather than bolt the top to the leg assemblies, I used dowel nuts (the large 3/8-16 Veritas knockdown variety) to bolt "joists" between the aprons, and then lag-screwed the top to those joists.

Instagram Photo

There are advantages to the way I did it:

There are also disadvantages:

5. Build the leg assemblies with giant dovetails

Giant dovetails turn out to be much more time-consuming than regular dovetails. I started on this path because I didn't have enough lumber to make the large screwed-on "plate" in the original plans.

I sawed most of the tails at least a little bit off square; squaring them up wasn't easy at all, since they were wider than any chisel I owned. Similarly, the sockets were deeper than any router plane I had would go, with sides and bottom too large for my chisels. If you have timber-framing tools you might be able to do this more quickly than I did. This was another consequence of using the rough-sawn maple rather than construction lumber. Tools tend to top out at 1-1/2″ widths and depths, while the maple was more like 1-3/4″.

6. Overkill the tolerances

With more skill, I'd have known how to cut more corners. Instead, I made things as perfect as I could make them. This was still far from perfect - I could point out flaws in the workbench for hours!

To build a bench really quickly I think you'd want to avoid milling or planing construction lumber at all. But gosh it'd have some huge gaps. (My old Ana-White-style workbench is like this, because I owned neither plane nor planer… I pulled everything square with clamps, then Kreg-screwed it in place.)

7. Build a workbench without a workbench

While building a workbench, I often thought "this sure would be easier if I had a workbench."

Planing boards on sawbenches sucks. Hello back pain! My old workbench is only 3′ wide, so it wasn't an option (that's why I built the new bench in the first place). It'd almost be worth building a terrible-but-full-size Kreg-screwed temporary bench, purely to build the final bench on, and then burning the temporary bench. Or perhaps some sort of bench-height sawhorses-and-plywood contraption.

What went well

The bench works very well - everything that made it take more time, had at least some payoff. I'm glad I have an 8′ maple bench instead of a 6′ pine bench. I'm glad it's as good as I knew how to make it. The obnoxious-to-build joists made the top prettier and flatter, and the giant dovetails made the leg assemblies rock solid.

It breaks down into 5 parts, just like Christopher Schwarz's original, and the McMaster-Carr mounting plates work great.

I love the Benchcrafted swing-away seat, it gives me somewhere to sit down that takes up zero floor space when not in use. (Of course I overkilled attaching it, with a bolt all the way through the bench leg, and thick square washers.)

Lessons learned

Ordering a workbench from Plate 11 or Lie-Nielsen makes total sense and their prices are a bargain!

If you do build something, consider sticking to the simple plan.

And I'm now a whole lot better at planing, sawing, drilling, sharpening, and all sorts of other skills than I was when I started. The next project I make might go a little bit faster.

16 Oct 2017 10:35pm GMT

Michael Meeks: 2017-10-16 Monday.

16 Oct 2017 9:00pm GMT

Gustavo Noronha Silva: Who knew we still had low-hanging fruits?

Earlier this month I had the pleasure of attending the Web Engines Hackfest, hosted by Igalia at their offices in A Coruña, and also sponsored by my employer, Collabora, Google and Mozilla. It has grown a lot and we had many new people this year.

Fun fact: I am one of the 3 or 4 people who have attended all of the editions of the hackfest since its inception in 2009, when it was called WebKitGTK+ hackfest \o/


It was a great get together where I met many friends and made some new ones. Had plenty of discussions, mainly with Antonio Gomes and Google's Robert Kroeger, about the way forward for Chromium on Wayland.

We had the opportunity to explain how we at Collabora cooperated with Igalians to implement and optimise a Wayland nested compositor for WebKit2 that shares buffers between processes in an efficient way, even on broken drivers. Most of the discussions and some of the work that led to this were done in previous hackfests, by the way!


The idea seems to have been mostly welcomed, the only concern being that Wayland's interfaces would need to be tested for security (fuzzed). So we may end up going that same route with Chromium for allowing process separation between the UI and GPU (being renamed Viz, currently) processes.

On another note, and going back to the title of the post, at Collabora we have recently adopted Mattermost to replace our internal IRC server. Many Collaborans have decided to use Mattermost through an Epiphany Web Application or through a simple Python application that just shows a GTK+ window wrapping a WebKitGTK+ WebView.


Some people noticed that when the connection was lost Mattermost would take a very long time to notice and reconnect - its web sockets were taking a long, long time to timeout, according to our colleague Andrew Shadura.

I did some quick searching in the codebase and noticed WebCore has a NetworkStateNotifier interface that it uses to get notified when the connection changes. That was not implemented for WebKitGTK+, so it was likely what caused stuff to linger when a connection hiccup happened. Given we have GNetworkMonitor, implementing the missing interface required only 3 lines of actual code (plus the necessary boilerplate)!
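To give an idea of what that glue looks like, here is a minimal sketch of the GIO API the fix builds on; this is not the actual WebKit patch, and the callback name is made up:

#include <gio/gio.h>

/* Called whenever GLib detects that connectivity changed. */
static void
on_network_changed (GNetworkMonitor *monitor,
                    gboolean         network_available,
                    gpointer         user_data)
{
  /* In WebKit this is where NetworkStateNotifier would be told about the
   * new state, so web content notices disconnections right away. */
  g_message ("Network available: %s", network_available ? "yes" : "no");
}

/* Somewhere during initialization: */
GNetworkMonitor *monitor = g_network_monitor_get_default ();
g_signal_connect (monitor, "network-changed",
                  G_CALLBACK (on_network_changed), NULL);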


I was surprised to still find such low-hanging fruit in WebKitGTK+, so I decided to look for more. It turns out WebCore also has a notifier for low-power situations, which was implemented only by the iOS port, and which causes the engine to throttle some timers and avoid some expensive checks it would do in normal situations. This required a few more lines to implement using upower-glib, but not that many either!

That was the fun I had during the hackfest in terms of coding. Mostly I had fun just lurking in break out sessions discussing the past, present and future of tech such as WebRTC, Servo, Rust, WebKit, Chromium, WebVR, and more. I also beat a few challengers in Street Fighter 2, as usual.

I'd like to say thanks to Collabora, Igalia, Google, and Mozilla for sponsoring and attending the hackfest. Thanks to Igalia for hosting and to Collabora for sponsoring my attendance along with two other Collaborans. It was a great hackfest and I'm looking forward to the next one! See you in 2018 =)

16 Oct 2017 6:37pm GMT

Richard Hughes: Shaking the tin for LVFS: Asking for donations!

tl;dr: If you feel like you want to donate to the LVFS, you can now do so here.

Nearly 100 million files are downloaded from the LVFS every month, the majority being metadata to know what updates are available. Although each metadata file is very small, it still adds up to over 1TB in transferred bytes per month. Amazon has kindly given the LVFS a 2000 USD per year open source grant which more than covers the hosting costs and any test EC2 instances. I really appreciate the donation from Amazon as it allows us to continue to grow, both in the number of Linux clients connecting every hour, and in the number of firmware files hosted. Before the grant, sometimes Red Hat would pay the bandwidth bill, and other times it was just paid out of my own pocket, so the grant does mean a lot to me. Amazon seemed very friendly towards this kind of open source shared infrastructure, so kudos to them for that.

At the moment the secure part of the LVFS is hosted in a dedicated Scaleway instance, so any additional donations would be spent on paying this small bill and perhaps more importantly buying some (2nd hand?) hardware to include as part of our release-time QA checks.

I already test fwupd with about a dozen pieces of hardware, but I'd feel a lot more comfortable testing different classes of device with updates on the LVFS.

One thing I've found that also works well is taking a chance and buying a popular device we know is upgradable and adding support for the specific quirks it has to fwupd. This is an easy way to get karma from a previously Linux-unfriendly vendor before we start discussing uploading firmware updates to the LVFS. Hardware on my wanting-to-buy list includes a wireless network card, a fingerprint scanner and SSDs from a couple of different vendors.

If you'd like to donate towards hardware, please donate via LiberaPay or ask me for PayPal/BACS details. Even if you donate €0.01 per week it would make a difference. Thanks!

16 Oct 2017 3:50pm GMT

Michael Meeks: 2017-10-09 Monday.

16 Oct 2017 11:42am GMT

Gabriel - Cristian Ivașcu: Safe Browsing in Epiphany

I am pleased to announce that Epiphany users will now benefit from safe browsing support, which is capable of detecting and alerting users whenever they are visiting a potentially malicious website. This feature will be shipped in GNOME 3.28, but those who don't wish to wait that long can go ahead and build Epiphany from master to benefit from it.

The safe browsing support is enabled by default in Epiphany, but you can always disable it from the preferences dialog by toggling the checkbox under General -> Web Content -> Try to block dangerous websites.

Safe browsing is implemented with the help of Google's Safe Browsing Update API v4. This is how it works: the URL's hash prefix is tested against a local database of unsafe hash prefixes, and if a match is found then the full hashes for that prefix are requested from the Google Safe Browsing server and compared to the URL's full hash. If the full hashes are equal, then the URL is considered unsafe. Of course, all hash prefixes and full hashes are cached for a certain amount of time, in order to minimize the number of requests sent to the server. Needless to say, working only with URL hashes brings a big privacy bonus, since Google never knows the actual URLs that clients browse. The whole description of the API can be found here.
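To make the flow more concrete, here is a minimal sketch of that lookup using GLib; the helper and parameter names are placeholders, URL canonicalization is left out, and hashes are handled as hex strings for simplicity:

#include <glib.h>

gboolean
url_is_unsafe (const char *canonical_url,
               GHashTable *local_prefixes,
               GList     *(*fetch_full_hashes) (const char *prefix))
{
  /* SHA-256 of the canonicalized URL, as a lowercase hex string. */
  g_autofree char *full_hash =
    g_compute_checksum_for_string (G_CHECKSUM_SHA256, canonical_url, -1);
  /* The shortest prefixes are 4 bytes, i.e. 8 hex characters. */
  g_autofree char *prefix = g_strndup (full_hash, 8);
  gboolean unsafe = FALSE;
  GList *full_hashes, *l;

  /* 1. Cheap local check against the downloaded database of unsafe prefixes. */
  if (!g_hash_table_contains (local_prefixes, prefix))
    return FALSE;

  /* 2. Only on a prefix match, request the full hashes sharing that prefix
   * and compare them locally; the URL itself is never sent to the server. */
  full_hashes = fetch_full_hashes (prefix);
  for (l = full_hashes; l != NULL; l = l->next)
    if (g_strcmp0 (l->data, full_hash) == 0)
      unsafe = TRUE;

  g_list_free_full (full_hashes, g_free);
  return unsafe;
}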


16 Oct 2017 10:54am GMT

Gabriel - Cristian Ivașcu: GUADEC 2017

This year's GUADEC came a bit unexpectedly for me. I wasn't really planning to attend it because of my school and work, but when Iulian suggested that we should go, I didn't have to think twice and agreed immediately. And I was not disappointed! Travelling to Manchester proved to be a great vacation where I could not only enjoy a few days off but also learn things and meet new and old friends.

Much like last year's GUADEC, I attended some of the talks during the core days, where I got to find out more about new technologies such as Flatpak, Meson, BuildStream (I'm really looking forward to seeing how this one turns out in the future) etc., and also about GNOME's history and future prospects.

One of this year's social events was GNOME's 20th anniversary party, held Saturday night at the Museum of Science and Industry. I have to thank the organization team for arranging such a great party and taking care of everything. This was definitely the highlight of this year!

As usual, I'm gonna say a few words about the location that we were in - Manchester. I found Manchester a nice and cozy city, packed with everything: universities, museums, parks, and restaurants of all kinds for all tastes. The weather was not the best that you can get, with rainy and sunny spells alternating on an hourly basis, but I guess that's typical for the UK. Overall, I think that Manchester is an interesting city where one would never get bored.

Thanks again to the GUADEC team and to GNOME for hosting such an awesome event!



16 Oct 2017 10:54am GMT

Christian Hergert: Builder gains multi-touch gestures

If you're running Wayland and have a touchpad capable of multi-touch, Builder (Nightly) now lets you do fun stuff like the following video demonstrates.

Just three-finger-swipe left or right to move the document. Content is sticky-to-fingers, which is my expectation when using gestures.

It might also work on a touchscreen, but I haven't tried.

16 Oct 2017 10:37am GMT

Didier Roche: Ubuntu GNOME Shell in Artful: Day 15

Since the Ubuntu Rally in New York, the Ubuntu desktop team has been going full speed ahead on the latest improvements we can make to our 17.10 Ubuntu release, Artful Aardvark. Last Thursday was our Final Freeze and I think it's a good time to reflect on some of the changes and fixes that happened during the rally and the following weeks. This list isn't exhaustive at all, of course, and only partially covers changes in our default desktop session, featuring GNOME Shell by default. For more background on our current transition to GNOME Shell in Artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 15: Final desktop polish before 17.10 is out

GNOME 3.26.1

Most of you will have noticed already, but most GNOME modules have been updated to their 3.26.1 releases. This means that Ubuntu 17.10 users will be able to enjoy the latest and greatest from the GNOME project. It's been fun to follow the latest development releases again, report bugs, catch regressions and follow new features.

In addition to many bug fixes, improvements, and documentation and translation updates, GNOME 3.26.1 introduces resizeable tiling support, which is a great feature that many people will surely take advantage of! Here is the video that Georges made and blogged about while developing the feature, for those who haven't had a look yet:

A quick Ubuntu Dock fix rampage

I've already praised the excellent Dash to Dock upstream here many times for their responsiveness and friendliness. A nice illustration of this occurred during the Rally. Nathan grabbed me in the Desktop room and asked if a particular dock behavior was desired (scrolling on the edge switching between workspaces). It was the first time I had heard of that feature, and finding the behavior possibly confusing, I pointed him to the upstream bug tracker where he filed a bug report. Even before I pinged upstream about it, they noticed the report and engaged in the discussion. We came to the conclusion that the behavior is unpredictable for most users, and the fix was quickly in, which we backported to our own Ubuntu Dock as well, along with some other glitch fixes.

The funny part is that Chris witnessed this, and reported that particular awesome cooperation effort in a recent Linux Unplugged show.

Theme fixes and suggested actions

With our transition to GNOME Shell, we are thus following the GNOME upstream philosophy more closely, and we dropped our headerbar patches. Previously, to fit Unity's vertical space optimization paradigm of stripping the title bar and menus from maximized applications, we distro-patched a lot of GNOME apps to revert the large headerbar. This isn't the case anymore. However, it created a different class of issues: action buttons are generally now at the top and not noticeable with our Ambiance/Radiance themes.

Enabled suggested action button (can't really notice it)

We thus introduced some styling for the suggested action, which consequently makes such buttons noticeable at the top (this is how the upstream Adwaita theme implements it as well). After a lot of discussion on what color to use (we tried, of course, different shades of orange, aubergine…), and working with Daniel from Elementary (proof!), Matthew suggested using the green color from the retired Ubuntu Touch color palette, which is a better fit than anything we could come up with ourselves. That was followed by some gradient work to make it match our theme, and some post-upload fixes for various states (thanks to Amr for reporting some bugs on them so quickly, which forced me to fix them during my flight back home :p). We hope that this change will help users get into the habit of looking for actions in the GNOME headerbars.

Enabled suggested action button

Disabled suggested action button

But that's not all on the theme front! A lot of people were complaining about the double gradient between the shell and the title bar. For the final freeze we just uploaded some small changes by Marco making them look a little bit better for titlebars, headerbars and GTK+ 2 applications, whether focused or unfocused, and with one or no menus. Another change was made in the GNOME Shell CSS to make our Ubuntu font appear a little bit less blurry than it was under Wayland. A long-term fix is under investigation by Daniel.

Headerbar on focused application before theme change

Headerbar on focused application with new theme fix

Title bar on focused application before theme change

Title bar on focused application with new theme fix

Title bar on unfocused application with new theme fix

Settings fixes

The Dock settings panel evolved quite a lot since its first inception.

First shot at Dock settings panel

Bastien, who has worked a lot on GNOME Control Center upstream, was kind enough to give a bunch of feedback. While some of it was too intrusive so late in the cycle, we implemented most of his suggestions. Of course, even though we live less than 3 km away from each other, we collaborated as proper geeks over IRC ;)

Here is the result:

Dock settings after suggestions

One of the best pieces of advice was to move the background for lists to white (we worked on that with Sébastien), making them way more readable:

Settings universal access panel before changes

Settings universal access panel after changes

Settings search Shell provider before changes

Settings search Shell provider after changes

i18n fixes in the Dock and GNOME Shell

Some (but not all!) items accessible by right-clicking on applications in the Ubuntu Dock, or even in the upstream Dash in the vanilla session, weren't translated.

Untranslated desktop actions

After a little bit of poking, it appeared that only the Desktop Actions were impacted (what we called "static quicklists" in the Unity world). Those were standardized in Freedesktop spec revision 1.1, some years after we introduced them in Unity.
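To make this concrete, here is a hypothetical .desktop file with one Desktop Action following that spec (the application name and the inline translation are made up, just to show where a localized action name would normally come from):

[Desktop Entry]
Type=Application
Name=Example App
Exec=example-app %U
Actions=new-window;

[Desktop Action new-window]
Name=New Window
Name[fr]=Nouvelle fenêtre
Exec=example-app --new-window

It is the localized Name entries of those actions that were not being picked up.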

Debian, like Ubuntu, extracts translations from desktop files to include them in langpacks. GLib is thus distro-patched to load those translations correctly. However, the patch was never updated to ensure that action names return localized strings, as few people use those actions. After a little bit of debugging, I fixed the patch in Ubuntu and proposed it back in the Debian Bug Tracking System. It is now merged there for the next glib release (as the bug impacts Ubuntu, Debian and all their derivatives).

We weren't impacted by this bug previously because when we introduced this in Unity, the actions weren't standardized yet and glib didn't support them; Unity was thus loading the actions directly itself. It's nice to have fixed that bug now so that other people can benefit from it, whether using Debian, vanilla GNOME Shell on Ubuntu, or any other combination!

Translated desktop actions

Community HUB

Alan announced recently the Ubuntu community hub when we can exchange between developers, users and new contributors.

When looking at this at the sprint, I decided that it could be a nice place for the community to comment on these blog posts rather than creating another silo here. Indeed, the current series of blog posts has more than 600 comments; I tried to be responsive to most of those requiring an answer, but I obviously can't scale. Thanks to those in the community who already took the time to reply to already-answered questions there! However, I think our community hub is a better place for those kinds of interactions, and you should see below an automatically created topic in the Desktop section of the hub corresponding to this blog post (if all goes well; of course, it worked when we tested it ;)). This is a read-only, embedded version, and clicking on it should direct you to the corresponding topic on the Discourse instance where you can contribute and exchange. I really hope this can foster even more participation inside the community and motivate new contributors!

(edit: it seems like there are still some random issues with topic creation; for the time being, I created a topic manually and you can comment on it there)

Other highlights

We got some multi-monitor fixes, HiDPI enhancements, indicator extension improvements and many others… Part of the team worked with Jonas from Red Hat on mutter and Wayland scaling factor support. It was a real pleasure to meet him and to have him tag along during the evenings and our numerous walks throughout Manhattan as well! It was an excellent sprint followed by nice follow-up weeks.

If you want to get a little bit of taste of what happened during the Ubuntu Rally, Chris from Jupiter Broadcasting recorded some vlogs from his trip to getting there, one of them being on the event itself:

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what's cooking!

Now, it's almost time to release 17.10 (just a few days ahead!), but I will probably blog about the upgrade experience in my next and last - for this cycle - report on our Ubuntu GNOME Shell transition!

Edit: As told before, feel free to comment on our community HUB as the integration below doesn't work for now.

16 Oct 2017 9:24am GMT

15 Oct 2017


Murray Cumming: Google App Engine: Using subdomains

Separating Frontend and Backend App Engine Services into Separate Subdomains

I'm in the middle of re-implementing my bigoquiz.com website into separate frontend (client) and backend (server) projects, which I'm deploying to separate App Engine "services". Previously both the front end HTML and JavaScript, and the backend Java, were all in one (GWT) project, deployed together as one default App Engine service. I chose to separate the two services into their own subdomains. So the frontend is currently served from beta.bigoquiz.com and its JavaScript code uses the REST API provided by the backend, served from betaapi.bigoquiz.com. This will later be bigoquiz.com and api.bigoquiz.com.

This seemed wise at the time and I thought it was necessary but in hindsight it was an unnecessary detour. I would have been better off just mapping different paths to the services. I didn't see how to do this at first with Google AppEngine, but it's very easy. You just provide a dispatch.yaml file (also mentioned below) to map URL prefixes to the different services. For instance, /api/* would map to the backend service, and everything else would map to the frontend service. Larger systems might do this with a service gateway such as Netflix's Zuul.

Nevertheless, this post mentions some of the issues you'll have if you choose to split your frontend and backend into separate subdomains. It mentions App Engine, Go, and Angular (TypeScript), but the ideas are generally useful.

App Engine Service Names

Each service that you deploy to App Engine has an app.yaml file. This can have a line to name the service, which would otherwise just be called "default". For instance, the first line of my frontend's app.yaml file currently looks like this:

service: beta

The first line of my backend's app.yaml file currently looks like this:

service: betaapi

When I deploy these services, via "gcloud app deploy" on the command line, I can then see them listed in the Google Cloud console, in the App Engine section, in the Services section.

Mapping Subdomains to App Engine Services

In the Google Cloud console, in the App Engine section, in the settings section, in the "custom domains" tab, you should add each subdomain. You'll need to verify your ownership of each subdomain by adding DNS entries for Google to check.

App Engine very recently added the "Managed Security" feature, which automatically creates and manages (LetsEncrypt) SSL certificates, letting you serve content via HTTPS. Using this currently makes using subdomains more complicated, because it doesn't yet support wildcard SSL certificates. That's likely to become possible soon, when LetsEncrypt starts providing wildcard SSL certificates, so this section might become outdated.

Without Managed Security

If you aren't using Managed Security yet, mapping subdomains to services is quite simple. Google's documentation suggests that you just add a wildcard CNAME entry to the DNS for your domain, like so:

Record: *
 Type: CNAME
 Value: ghs.googlehosted.com

All subdomains will then be served by google. App Engine will try to map a subdomain to a service of the same name. So foo.example.com will map to a service named foo.

With Managed Security

However, if you are using Managed Security, you'll need to tell App Engine about each individual subdomain so each subdomain can have its own SSL certificate. Unfortunately, you'll then need to add the same 8 A and AAAA records to your DNS for your subdomain too.

Although App Engine does automatic subdomain-to-service mapping for wildcard domains, this doesn't happen with specifically-listed subdomains. You'll need to specify the mapping manually using a dispatch.yaml file, like so:

dispatch:
  - url: "beta.bigoquiz.com/*"
    service: beta
  - url: "betaapi.bigoquiz.com/*"
    service: betaapi

You can then deploy the dispatch.yaml file from the command line like so:

$ gcloud app deploy dispatch.yaml

I wish there was some way to split these dispatch rules up into separate files, so I could associate them with the separate codebases for the separate services. For now, I've put the dispatch.yaml file for all the services in the repository for my frontend code and I manually deploy it.

CORS (Cross Origin Resource Sharing)

By default, modern browsers will not allow Javascript that came from www.example.com (or example.com) to make HTTP requests to another domain or subdomain, such as api.example.com. This same-origin policy prevents malicious pages from accessing data on other sites, possibly via your authentication token.

If you try to access a subdomain from JavaScript served from a different subdomain, you'll see error messages about this in the browser console, such as:

No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'beta.example.com' is therefore not allowed access.

For instance, JavaScript served from foo.example.com cannot normally access content at bar.example.com.

However, you can allow calls to a subdomain by using the CORS system. The browser will attempt the call, but will only provide the response back to the JavaScript if the response has the appropriate Allowed* headers. The browser may first attempt a separate CORS "pre-flight" request before actually issuing the request, depending on the request details.

If you configure your server to reply to a CORS requests with appropriate AllowedOrigins and AllowedMethods HTTP headers, you can tell the browser to allow the JavaScript to make the HTTP requests and receive the responses.

For instance, in Go, I've used rs/cors to respond to CORS requests, like so, passing in the original julienschmidt/httprouter that I'm using.

c := cors.New(cors.Options{
        AllowedOrigins: []string{"example.com"},
        AllowedMethods: []string{"GET", "POST", "OPTIONS"},     
})

handler := c.Handler(router)
http.Handle("/", handler)

I also did this in my original Java code by adding a ContainerResponseFilter, annotated with @Provider.

Cookies: CORS

Even when the server responds to CORS requests with AllowedOrigins and AllowedMethods headers, by default the browser will not allow Javascript to send cookies when it sends HTTP requests to other domains (or subdomains). But you can allow this by adding an AllowCredentials header to the server's CORS response. For instance, I added the AllowCredentials header in Go on the server side, like so:

c := cors.New(cors.Options{
        ...
        AllowCredentials: true,
})

You might need to specify this on the client-side too, because the underlying XMLHttpRequest defaults to not sending cookies with cross-site requests. For instance, I specify withCredentials in Angular (Typescript) calls to http.get(), like so:

this.http.get(url, {withCredentials: true})

Note Angular's awful documentation for the withCredentials option, though Mozilla's documentation for the XMLHttpRequest withCredentials option is clearer.

Cookies: Setting the Domain

To use a cookie across subdomains, for instance to send a cookie to a domain other than the one that provided the cookie, you may need to set the cookie's domain, which makes the cookie available to all subdomains in the domain. Otherwise, the cookie will be available only to the specific subdomain that set it.

I didn't need to do this because I had just one service on one subdomain. This subdomain sets the cookie in responses, the cookie is then stored by the browser, and the browser provides the cookie in subsequent requests to the same subdomain.
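If you do need it, a minimal sketch in Go (the cookie name and value variable are placeholders) is to set the Domain field when writing the cookie:

http.SetCookie(w, &http.Cookie{
        Name:     "session",     // placeholder name
        Value:    sessionToken,  // placeholder value, assumed to exist
        Domain:   "example.com", // makes the cookie available to all subdomains
        Path:     "/",
        Secure:   true,
        HttpOnly: true,
})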

OAuth2 Callback

If your subdomain implements endpoints for oauth2 login and callback, you'll need to tell App Engine about the subdomain. In the Google Cloud console, in the "APIs & Services" section, go to the Credentials section. Here you should enter the subdomain for your web page under "Authorized JavaScript origins", and enter the subdomain for your oauth2 login and callback subdomain under "Authorized redirect URIs".

The subdomain will then be listed appropriately in the configuration file that you download via the "Download JSON" link, which you can parse in your code, so that the oauth2 request specifies your callback URL. For instance, I parse the downloaded config .json file in Go using google.ConfigFromJSON() from the golang.org/x/oauth2/google package, like so:

func GenerateGoogleOAuthConfig(r *http.Request) *oauth2.Config {
        c := appengine.NewContext(r)

        b, err := ioutil.ReadFile(configCredentialsFilename)
        if err != nil {
                log.Errorf(c, "Unable to read client secret file (%s): %v", configCredentialsFilename, err)
                return nil
        }

        config, err := google.ConfigFromJSON(b, credentialsScopeProfile, credentialsScopeEmail)
        if err != nil {
                log.Errorf(c, "Unable to parse client secret file (%) to config: %v", configCredentialsFilename, err)
                return nil
        }

        return config
}

15 Oct 2017 7:05pm GMT

Adrien Plazas: retro-gtk: Renaissance

This is the second article in a small series about retro-gtk, I recommend you to read the first one, retro-gtk: Postmortem, before this one.

In the previous article I listed some problems I encountered while developing and using retro-gtk; in this one I will present some solutions I implemented to fix them! ☺ All that is presented in this article is part of the newly-released retro-gtk 0.13.1, which is the first version of the 0.14 development cycle.

Changing the Scope

The Libretro API is tricky: lots of little details need to be handled properly and it isn't always very clear how to do so. By mimicking this API, retro-gtk inherited its complexity, making it way more complex than it should be as there aren't many different ways for a Libretro frontend to handle the cores correctly. retro-gtk was forwarding the complexity of the Libretro API to its users rather than abstracting it.

About a year ago I decided to slowly change the scope of the library. In the previous article, I described retro-gtk as "a GObject-based plugin system based on the Libretro plugin definition", and this still holds true, what changed is how its users will handle the cores. By taking inspiration from how it was used by the GamesRetroRunner class of Games, I slowly moved retro-gtk away from a Libretro reimplementation for frontends and turned it into a Libretro frontend as a library, offering higher level building block and taking care of most of the complexity of the original API internally.

Do you remember the pseudo-code example from the first article, implementing a load_game() function? It was overly complicated compared to what we actually wanted to do, wasn't it? Well, here is how to implement it in C with the new simplified API.


void load_game (RetroCore *core,
                const gchar * const *media_uris)
{
  GError *error = NULL;

  retro_core_set_medias (core, media_uris);
  retro_core_boot (core, &error);
  if (error != NULL) {
    g_debug ("Couldn't boot the Libretro core: %s", error->message);
    g_error_free (error);
  }
}

With the new API, even C code with GError handling is shorter than the previous pseudo-code example!

As you can see, that's much simpler to use: most of the complexity is now handled internally by retro-gtk. Instead of having to use the many components inherited from the Libretro API, you now simply give the medias to the core prior to booting it; booting the core takes care of its initialization and of loading the game. This also means that retro-gtk doesn't have to expose the game info or disk interface types, making the API smaller and hence simpler to understand.

Many other similar changes were implemented all around the API, way too many to list. Many features that were implemented as complex-to-use classes tied to RetroCore have been merged into it, removing lots of the artificially introduced complexity in between them.

A noteworthy improvement is the introduction of the RetroCoreView widget. Previously, the video output of the core was handled by RetroCairoDisplay, the audio output by RetroPaPlayer, and widget-specific input devices - forwarding keyboard and mouse inputs to the core or using the keyboard as a gamepad - were handled by input device objects taking a GtkWidget and listening to its events to implement a specific Libretro input device. It worked somewhat well but demanded lots of code from the user, and the interaction between these objects was more complex than it should be.

RetroCoreView implements all these features in a single GTK+ widget with a simple API. There are two main functions to this widget. The first one is to let you set a core it should handle: it will display the core's video and play its audio without requiring you to take care of how to do so. The second one is to let you simply access the user inputs it receives, by exposing them as controllers of the desired RetroControllerType.

Having all of this inside one widget saves the user from dealing with multiple layers of widgets and objects rendering the video or capturing events. Handling the video output under the covers gives us more freedom in how to implement it. For example, when a hardware-accelerated renderer is introduced, we should be able to change it without breaking the ABI, and users should automatically benefit from it with no change to their code or their binaries. This also makes it very easy to handle interdependencies between the controllers and the outputs; for example, a pointer controller depends on where the events happen on the rendered video. All of this makes the widget simpler to use but also simpler to maintain, as lots of the code became way simpler in the transformation process.

For your curiosity, here is a slightly simplified version of RetroCoreView in the generated VAPI.


public class Retro.CoreView : Gtk.EventBox {
    public CoreView ();
    public Retro.Controller as_controller (Retro.ControllerType controller_type);
    public uint64 get_controller_capabilities ();
    public int16 get_input_state (Retro.ControllerType controller_type, uint index, uint id);
    public void set_core (Retro.Core? core);
    public void set_filter (Retro.VideoFilter filter);
    public bool can_grab_pointer { get; set; }
    public Gdk.Pixbuf pixbuf { get; set; }
    public bool snap_pointer_to_borders { get; set; }
}
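As a small usage sketch, assuming the conventional C symbols that GObject naming rules would produce for the API above (the exact names may differ), hooking a core up to a view looks roughly like this:

/* "core" is a RetroCore that has already been created and booted,
 * for example by the load_game() function shown earlier. */
static GtkWidget *
create_view_for_core (RetroCore *core)
{
  GtkWidget *view = GTK_WIDGET (retro_core_view_new ());

  /* The view renders the video, plays the audio and exposes the widget's
   * keyboard and mouse events as Libretro controllers. */
  retro_core_view_set_core (RETRO_CORE_VIEW (view), core);

  return view;
}

The returned widget can then be packed into any GTK+ container like any other widget.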

There are a few things I'm not sure how to handle properly yet:

Porting retro-gtk to C

Porting retro-gtk to C comes with downsides:

But this port also comes with advantages:

Now retro-gtk is a C library: it uses GTK-Doc comments to support documentation and introspection, it is introspectable via GObject Introspection, and it is usable in Vala by compiling a VAPI from the GIR file.

Emergent Advantages

The combination of these two changes - offering a higher-level API which doesn't expose too much of the inner workings of the library, and developing retro-gtk in the same language as its dependencies - gives the devs more room to work inside the library and to move things around without hitting and breaking the API ceiling. To continue the room analogy, writing the API directly instead of compiling it from Vala allows perfect control over what is part of it and what isn't: there is less risk of unexpected cables hanging from the API ceiling, cables we could hit while working. All of this should allow us to have a more stable API.

With the ability to control the API, and now that I am somewhat happy with it, I want to change the stability promise of retro-gtk a bit: we will keep refactoring some bits of the API during the 0.14 development cycle, but after that we will try to keep it somewhat stable. What this means is that if we break it, we will try to keep the break as small as possible, and we will document and advertise the explicit API break. If this not-totally-unstable state doesn't frighten you, you can start using retro-gtk for your own software with 0.14!

To celebrate this important milestone in the life of retro-gtk, the library just got its first logo!

So… What Now‽ 😃

In the first article I explained what retro-gtk was and what its major problems were; in this one I covered how I solved some of them and, more importantly, how I prepared the ground to solve bigger ones… but what's coming next? Now that the library can be improved more freely, the next article will detail the plans for its evolution, introducing shiny new big features to make it rock-solid!

15 Oct 2017 2:01pm GMT