17 Dec 2017


Michael Catanzaro: Epiphany Stable Flatpak Releases

The latest stable version of Epiphany is now available on Flathub. Download it here. You should be able to double-click the flatpakref to install it in GNOME Software, if you use any modern GNOME operating system not named Ubuntu. But, in my experience, GNOME Software is extremely buggy, and as often as not it does not work for me. If you have trouble, you can use the command line:

flatpak install --from https://flathub.org/repo/appstream/org.gnome.Epiphany.flatpakref

This has actually been available for quite a while now, but I've delayed announcing it because some things needed to be fixed to work well under Flatpak. It's good now.

I've also added a download link to Epiphany's webpage, so that you can actually, you know, download and install the software. That's a useful thing to be able to do!

Benefits

The obvious benefit of Flatpak is that you get the latest stable version of Epiphany (currently 3.26.5) and WebKitGTK+ (currently 2.18.3), no matter which version is shipped in your operating system.

The other major benefit of Flatpak is that the browser is protected by Flatpak's top-class bubblewrap sandbox. This is, of course, a UI process sandbox, which is different from the sandboxing model used in other browsers, where individual browser tabs are sandboxed from each other. In theory, the bubblewrap sandbox should be harder to escape than the sandboxes used in other major browsers, because the attack surface is much smaller: other browsers are vulnerable to attack whenever IPC messages are sent between the web process and the UI process. Such vulnerabilities are mitigated by a UI process sandbox. The disadvantage of this approach is that tabs are not sandboxed from each other, as they would be with a web process sandbox, so it's easier for a compromised tab to do bad things to your other tabs. I'm not sure which approach is better, but clearly either way is much better than having no sandbox at all. (I still hope to have a web process sandbox working for use when WebKit is used outside of Flatpak, but that's not close to being ready yet.)

Problems

Now, there are a couple of loose ends. We do not yet have desktop notifications working under Flatpak, and we also don't block the screen from turning off when you're watching fullscreen video, so you'll have to wiggle your mouse every five minutes or so when you're watching YouTube to keep the lights on. These should not be too hard to fix; I'll try to get them both working soon. Also, drag and drop does not work. I'm not nearly brave enough to try fixing that, though, so you'll just have to live without drag and drop if you use the Flatpak version.

Also, unfortunately the stable GNOME runtimes do not receive regular updates. So while you get the latest version of Epiphany, most everything else will be older. This is not good. I try to make sure that WebKit gets updated, so you'll have all the latest security updates there, but everything else is generally stuck at older versions. For example, the 3.26 runtime uses, for the most part, whatever software versions were current at the time of the 3.26.1 release, and any updates newer than that are just not included. That's a shame, but the GNOME release team does not maintain GNOME's Flatpak runtimes: we have three other redundant places to store the same build information (JHBuild, GNOME Continuous, BuildStream) that we need to take care of, and adding yet another is not going to fly. Hopefully this situation will change soon, though, since we should be able to use BuildStream to replace the current JSON manifest that's used to generate the Flatpak runtimes and keep everything up to date automatically. In the meantime, this is a problem to be aware of.

17 Dec 2017 7:24pm GMT

Julita Inca: #PeruRumboGSoC2018 – Session 5

Today we celebrated another session of the #PeruRumboGSoC2018 program at CCPP UNI. It was one of the longest sessions we have experienced.

We were able to cope with different packages and versions to work with WebKit and GTK on Fedora 26 (one of the students has a 32-bit arch) and on Fedora 27. One of the accomplishments today was coding a Language Selector using GtkListBox and GtkLinkButton with GTK and Python, described here in detail.
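
To give a rough idea of the widgets involved, here is a minimal sketch of such a selector in Python and GTK 3 (illustrative only, not the students' actual exercise; the languages and URLs are made-up example data):

#!/usr/bin/env python3
# Minimal "Language Selector": a GtkListBox whose rows are GtkLinkButtons.
# Illustrative sketch only; languages and URLs are example data.
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

LANGUAGES = {
    'Python': 'https://www.python.org/',
    'C': 'https://gcc.gnu.org/',
    'Rust': 'https://www.rust-lang.org/',
}

win = Gtk.Window(title='Language Selector')
win.connect('destroy', Gtk.main_quit)

listbox = Gtk.ListBox()
for name, url in LANGUAGES.items():
    row = Gtk.ListBoxRow()
    # Each row holds a link button that opens the language's page when clicked.
    row.add(Gtk.LinkButton(uri=url, label=name))
    listbox.add(row)

win.add(listbox)
win.show_all()
Gtk.main()

Clicking a row's button opens the corresponding page in the default browser.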

The newcomers bug list on GitLab was also checked today, especially for the applications gnome-todo and gnome-music. Fedora Docs Dev, system-config-language and the implementation of Elastic Search were also evaluated and discussed as possible GSoC 2018 proposals for Fedora; thanks @zodiacfirework. This is the final chart of the effort of the participants. In this picture we have Cristian (18) as @pystudent1913 and Fiorella (21) as @aweba. They are the top two! 🙂 We shared a lunch and some food in the afternoon. Thanks again to our sponsors GNOME, Fedora & the Linux Foundation for the support of this challenge! gogogo PeruRumboGSoC2018!


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: #PeruRumboGSoC2018, fedora, GNOME, GSoC, GSoC Fedora proposal, GSoC GNOME gitlab, GTK 3, gtk.py, Julita Inca, Julita Inca Chiroque, LinuXatUNI, PeruGSoC, Python

17 Dec 2017 5:09am GMT

Jim Hall: Top ten in 2017 (part 1 of 2)

It's been a great year in the usability of open source software. As I look back on the last twelve months, I thought it would be interesting to highlight a few of my favorite blog posts from the year. And here they are, in no particular order:

1. The importance of the press kit

In December 2016, we released FreeDOS 1.2. Throughout December and into January, FreeDOS was the subject of many, many articles in the tech press, magazines, websites, and journals. I credit our press kit, which made it really easy for anyone to write an article about FreeDOS. In this essay, I explain how we created our press kit, and the features your open source software press kit should contain so journalists can write about you.

2. Good usability but poor user experience

I like to remind people that usability is not the same as user experience. They really are two different concepts. Usability is about making software that real people can use to do real tasks in a reasonable amount of time. User experience is more about the emotional response users have when they use the software. Often, programs that have good usability will have a positive user experience, and vice versa. But it's possible for a program to have good usability and a negative user experience. Here's one example.

3. I can't read your website

The current trend in website design seems to be grey text on a light background. That's really hard for most people to read. And your small font sizes aren't helping, either. Here are a few examples of the same stanza of text in different styles. See also Calculating contrast ratios of text and The readability of DOS applications as interesting follow-ups.

4. Screencasts for usability testing

When you're moderating a usability test, you may ask your testers to use the "speak aloud" protocol, where they say out loud what they are trying to do. If they are looking for a Print button, they should say "I'm looking for a Print button." As the tester works through the usability test, you might take notes about what the tester is doing, and where they are "looking" with their mouse. An easier way to track this is to record a screencast, for later playback. Here's an example in GNOME.

5. How many testers do you need?

Usability testing in open source software isn't that hard. But when I talk about how to do a usability test in open source software, most people ask me "How many testers do I need?" It turns out that you don't need that many people. The short answer is "about five."


I'll share part two of my top-ten list next week!

17 Dec 2017 12:09am GMT

16 Dec 2017


Federico Mena-Quintero: Librsvg 2.40.20 is released

Today I released librsvg 2.40.20. This will be the last release in the 2.40.x series, which is deprecated, effective immediately.

People and distros are strongly encouraged to switch to librsvg 2.41.x as soon as possible. This is the version that is implemented in a mixture of C and Rust. It is 100% API and ABI compatible with 2.40.x, so it is a drop-in replacement for it. If you or your distro can compile Firefox 57, you can probably build librsvg-2.41.x without problems.

Some statistics

Here are a few runs of loc - a tool to count lines of code - when run on librsvg. The output is trimmed by hand to only include C and Rust files.

This is 2.40.20:
-------------------------------------------------------
 Language      Files   Lines   Blank   Comment    Code
-------------------------------------------------------
 C                41   20972    3438      2100   15434
 C/C++ Header     27    2377     452       625    1300

This is 2.41.latest (the master branch):
-------------------------------------------------------
 Language      Files   Lines   Blank   Comment    Code
-------------------------------------------------------
 C                34   17253    3024      1892   12337
 C/C++ Header     23    2327     501       624    1202
 Rust             38   11254    1873       675    8706

And this is 2.41.latest *without unit tests*, just "real source code":
-------------------------------------------------------
 Language      Files   Lines   Blank   Comment    Code
-------------------------------------------------------
 C                34   17253    3024      1892   12337
 C/C++ Header     23    2327     501       624    1202
 Rust             38    9340    1513       610    7217

Summary

Not counting blank lines or comments: the C-only 2.40.20 weighs in at about 16,700 lines of C (sources plus headers), while 2.41.x has about 13,500 lines of C plus roughly 7,200 lines of Rust once unit tests are excluded.

As for the integration tests:

The Rust-and-C version supports a few more SVG features, and it is A LOT more robust and spec-compliant with the SVG features that were supported in the C-only version.

The C sources in librsvg are shrinking steadily. It would be incredibly awesome if someone could run some git filter-branch magic with the loc tool and generate some pretty graphs of source lines vs. commits over time.
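
A rough Python sketch of that idea, for anyone who wants to pick it up, could walk the history and run loc at every commit (assumptions: git and loc are in PATH, and the parsing of loc's table output below is guessed and may need adjusting):

#!/usr/bin/env python3
# Sketch: count C and Rust code lines at every commit, oldest first.
# Assumes "git" and "loc" are installed; run from a clean checkout of librsvg.
import subprocess

def run(*args):
    return subprocess.check_output(args, universal_newlines=True)

for commit in run('git', 'rev-list', '--reverse', 'master').split():
    run('git', 'checkout', '--quiet', commit)
    counts = {}
    for line in run('loc').splitlines():
        fields = line.split()
        if fields and fields[0] in ('C', 'Rust'):
            counts[fields[0]] = fields[-1]  # last column is the Code count
    print(commit[:8], counts)

Feeding that output into your favorite plotting tool would give the lines-vs-commits graph.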

16 Dec 2017 12:31am GMT

15 Dec 2017


Christian Schaller: Some predictions for 2018

So I spent a few hours polishing my crystal ball today, so here are some predictions for Linux on the Desktop in 2018. The advantage of course for me to publish these now is that I can then later selectively quote the ones I got right to prove my brilliance and the internet can selectively quote the ones I got wrong to prove my stupidity :)

Prediction 1: Meson becomes the defacto build system of the Linux community

Meson has been going from strength to strength this year, and a lot of projects which passed on earlier attempts to replace autotools have adopted it. I predict this trend will continue in 2018 and that by the end of the year everyone will agree that Meson has replaced autotools as the Linux community's build system of choice. That said, I am not convinced the Linux kernel itself will adopt Meson in 2018.

Prediction 2: Rust puts itself on a clear trajectory to replace C and C++ for low level programming

Another rising star of 2017 is the programming language Rust. While its pace of adoption will be slower than Meson's, I do believe that by the time 2018 comes to a close the general opinion will be that Rust is the future of low-level programming, replacing old favorites like C and C++. Major projects like GNOME and GStreamer are already adopting Rust at a rapid pace, and I believe even more projects will join them in 2018.

Prediction 3: Apple's decline as a PC vendor becomes obvious

Ever since Steve Jobs died it has become quite clear, in my opinion, that the emphasis on the traditional desktop is fading at Apple. The pace of hardware refreshes seems to be slowing and MacOS X seems to be going more and more stale. Some pundits have already started pointing this out, and I predict that in 2018 Apple will no longer be considered the cool kid on the block for people looking for laptops, especially among the tech-savvy crowd. Hopefully this is a good opportunity for Linux on the desktop to assert itself more.

Prediction 4: Traditional distro packaging for desktop applications will start fading away in favour of Flatpak

From where I am standing, I think 2018 will be the breakout year for Flatpak as a replacement for getting your desktop applications as RPMs or debs. I predict that by the end of 2018 more or less every Linux desktop user will be running at least one Flatpak on their system.

Prediction 5: Linux Graphics competitive across the board

I think 2018 will be a breakout year for Linux graphics support. I think our GPU drivers and APIs will be competitive with any other platform in both completeness and performance. So by the end of 2018 I predict that you will see Linux game ports by major porting houses like Aspyr and Feral that perform just as well as their Windows counterparts. What is more, I also predict that by the end of 2018 discrete graphics will be considered a solved problem on Linux.

Prediction 6: H265 will be considered a failure

I predict that by the end of 2018 H265 will be considered a failed codec effort, and the era of royalty-bearing media codecs will effectively start coming to an end. H264 will be considered the last successful royalty-bearing codec, and new codecs coming out will all be open source and royalty free.

15 Dec 2017 9:05pm GMT

Bastien Nocera: More Bluetooth (and gaming) features

In the midst of post-release bug fixing, we've also added a fair number of new features to our stack. As usual, new features span a number of different components, so integrators will have to be careful picking up all the components when, well, integrating.

PS3 clones joypads support

Do you have a PlayStation 3 joypad that feels just a little bit "off"? You can't find the Sony logo anywhere on it? The figures on the face buttons look like barbed wire? And if it were a YouTube video, it would say "No copyright intended"?


Bingo. When plugged in via USB, those devices advertise themselves as SHANWAN or Gasia, and implement the bare minimum to work when plugged into a PlayStation 3 console. But as a Linux computer would behave slightly differently, we need to fix a couple of things.

The first fix was simple, but necessary to be able to do any work: disable the rumble motor that starts as soon as you plug the pad through USB.

Once that's done, we could work around the fact that the device isn't Bluetooth compliant, and hard-code the HID service it's supposed to offer.

Bluetooth LE Battery reporting

Bluetooth Low Energy is the new-fangled (7-year old) protocol for low throughput devices, from a single coin-cell powered sensor, to input devices. What's great is that there's finally a standardised way for devices to export their battery statuses. I've added support for this in BlueZ, which UPower then picks up for desktop integration goodness.

There are a number of Bluetooth LE joypads available for pickup, including a few that should be firmware upgradeable. Look for "Bluetooth 4" as well as "Bluetooth LE" when doing your holiday shopping.

gnome-bluetooth work

Finally, this is the boring part. Benjamin and I reworked code that's internal to gnome-bluetooth, as used in the Settings panel as well as the Shell, to make it use modern facilities like GDBusObjectManager. The overall effect of this is less code that is less brittle and more reactive when Bluetooth adapters come and go, such as when using airplane mode.

Apart from the kernel patch mentioned above (you'll know if you need it :), those features have been integrated in UPower 0.99.7 and in the upcoming BlueZ 5.48. And they will of course be available in Fedora, both in rawhide and as updates to Fedora 27 as soon as the releases have been done and built.

GG!

15 Dec 2017 3:57pm GMT

14 Dec 2017


Sebastian Dröge: A GStreamer Plugin like the Rec Button on your Tape Recorder – A Multi-Threaded Plugin written in Rust

As Rust is known for "Fearless Concurrency", that is being able to write concurrent, multi-threaded code without fear, it seemed like a good fit for a GStreamer element that we had to write at Centricular.

Previous experience with Rust for writing (mostly) single-threaded GStreamer elements and applications (also multi-threaded) were all quite successful and promising already. And in the end, this new element was also a pleasure to write and probably faster than doing the equivalent in C. For the impatient, the code, tests and a GTK+ example application (written with the great Rust GTK bindings, but the GStreamer element is also usable from C or any other language) can be found here.

What does it do?

The main idea of the element is that it basically works like the rec button on your tape recorder. There is a single boolean property called "record", and whenever it is set to true it will pass-through data and whenever it is set to false it will drop all data. But different to the existing valve element, it

The multi-threading aspect here comes from the fact that in GStreamer each stream usually has its own thread, so in this case the video stream and the audio stream(s) would come from different threads but would have to be synchronized between each other.

The GTK+ example application for the plugin is playing a video with the current playback time and a beep every second, and allows to record this as an MP4 file in the current directory.

How did it go?

This new element was again based on the Rust GStreamer bindings and the infrastructure that I was writing over the last year or two for writing GStreamer plugins in Rust.

As written above, it generally went all fine and was quite a pleasure but there were a few things that seem noteworthy. But first of all, writing this in Rust was much more convenient and fun than writing it in C would've been, and I've written enough similar code in C before. It would've taken quite a bit longer, I would've had to debug more problems in the new code during development (there were actually surprisingly few things going wrong during development, I expected more!), and probably would've written less exhaustive tests because writing tests in C is just so inconvenient.

Rust does not prevent deadlocks

While this should be clear, and was also clear to myself before, this seems like it might need some reiteration. Safe Rust prevents data races, but not all possible bugs that multi-threaded programs can have. Rust is not magic, only a tool that helps you prevent some classes of potential bugs.

For example, you can't just stop thinking about lock order if multiple mutexes are involved, nor can you carelessly use condition variables without making sure that your conditions actually make sense and are accessed atomically. As a wise man once said, "the safest program is the one that does not run at all", and a deadlocking program is very close to that.

The part about condition variables might be something that can be improved in Rust. Without this, you can easily end up in situations where you wait forever or your conditions are actually inconsistent. Currently Rust's condition variables only require a mutex to be passed to the functions for waiting for the condition to be notified, but it would probably also make sense to require passing the same mutex to the constructor and notify functions to make it absolutely clear that you need to ensure that your conditions are always accessed/modified while this specific mutex is locked. Otherwise you might end up in debugging hell.

Fortunately during development of the plugin I only ran into a simple deadlock, caused by accidentally keeping a mutex locked for too long and then running into conflict with another one. Which is probably an easy trap if the most common way of unlocking a mutex is to let the mutex lock guard fall out of scope. This makes it impossible to forget to unlock the mutex, but also makes it less explicit when it is unlocked and sometimes explicit unlocking by manually dropping the mutex lock guard is still necessary.

So in summary, while a big group of potential problems with multi-threaded programs are prevented by Rust, you still have to be careful to not run into any of the many others. Especially if you use lower-level constructs like condition variables, and not just e.g. channels. Everything is however far more convenient than doing the same in C, and with more support by the compiler, so I definitely prefer writing such code in Rust over doing the same in C.

Missing API

As usual, for the first dozen projects using a new library or new bindings to an existing library, you'll notice some missing bits and pieces. That a relatively core part of GStreamer, the GstRegistry API, was missing was surprising nonetheless. True, you usually don't use it directly and I only need to use it here for loading the new plugin from a non-standard location, but it was still surprising. Let's hope this was the biggest oversight. If you look at the issues page on GitHub, you'll find a few other things that are still missing though. But nobody needed them yet, so it's probably fine for the time being.

Another part of missing APIs that I noticed during development was that many manual (i.e. not auto-generated) bindings didn't have the Debug trait implemented, or not in a too useful way. This is solved now, as otherwise I wouldn't have been able to properly log what is happening inside the element to allow easier debugging later if something goes wrong.

Apart from that there were also various other smaller things that were missing, or bugs (see below) that I found in the bindings while going through all these. But those seem not very noteworthy - check the commit logs if you're interested.

Bugs, bugs, bugs

I also found a couple of bugs in the bindings. They fall broadly into two categories

Generally I was quite happy with the lack of bugs though, the bindings are really ready for production at this point. And especially, all the bugs that I found are things that are unfortunately "normal" and common when writing code in C, while Rust is preventing exactly these classes of bugs. As such, they have to be solved only once at the bindings layer and then you're free of them and you don't have to spend any brain capacity on their existence anymore and can use your brain to solve the actual task at hand.

Inconvenient API

Similar to the missing API, whenever using some rather new API you will find things that are inconvenient and could ideally be done better. The biggest case here was the GstSegment API. A segment represents a (potentially open-ended) playback range and contains all the information to convert timestamps to the different time bases used in GStreamer. I'm not going to get into details here, best check the documentation for them.

A segment can be in different formats, e.g. in time or bytes. In the C API this is handled by storing the format inside the segment, and requiring you to pass the format together with the value to every function call, and internally there are some checks then that let the function fail if there is a format mismatch. In the previous version of the Rust segment API, this was done the same, and caused lots of unwrap() calls in this element.

But in Rust we can do better, and the new API for the segment now encodes the format in the type system (i.e. there is a Segment<Time>) and only values with the correct type (e.g. ClockTime) can be passed to the corresponding functions of the segment. In addition there is a type for a generic segment (which still has all the runtime checks) and functions to "cast" between the two.

Overall this gives more type-safety (the compiler already checks that you don't mix calculations between seconds and bytes) and makes the API usage more convenient as various error conditions just can't happen and thus don't have to be handled. Or like in C, are simply ignored and not handled, potentially leaving a trap that can cause hard to debug bugs at a later time.

That Rust requires all errors to be handled makes it very obvious how many potential error cases the average C code out there is not handling at all, and also shows that a more expressive language than C can easily prevent many of these error cases at compile-time already.

14 Dec 2017 10:54pm GMT

Christian Kellner: Introducing bolt: Thunderbolt 3 security levels for GNU/Linux

Thunderbolt icon by jimmac


Thunderbolt 3 security levels

Thunderbolt is an I/O technology that can be used to connect external peripherals to a computer - similar to USB and FireWire. It works by bridging PCIe between the controllers on each end of the connection, which in turn means that devices connected via Thunderbolt are ultimately connected via PCIe. Therefore thunderbolt can achieve very high connection speeds, fast enough to even drive external graphics cards. The downside is that it also makes certain attacks possible (e.g. Thunderstrike, DMA attack).
To mitigate these security problems, the latest version - known as Thunderbolt 3 - supports different security levels:

The active security level can normally be selected prior to boot via a BIOS option, but it is interesting to note that in the future the "none" option is likely to go away. This of course means connected Thunderbolt devices won't work at all unless they are authorized by the user from within the running operating system.

Intel has added support for the different security levels to the kernel, starting with Linux 4.13. The interface to interact with the devices is via files in sysfs. Since July we have been working on the userspace bits to make Thunderbolt 3 support "just work" 😉. The UX design was drafted by Jimmac. The solution that we came up with to implement it consists of two parts: a generic system daemon and, for GNOME, a (new) component in gnome-shell. The latter will use the daemon to automatically authorize new devices. This will happen if and only if the currently active user is an administrator and the session is not locked.
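
For the curious, here is a small Python sketch that reads that sysfs interface directly and prints each Thunderbolt device with its authorization state (the path and attribute names follow my reading of the kernel's Thunderbolt documentation for 4.13+, so treat them as assumptions):

#!/usr/bin/env python3
# Sketch: list Thunderbolt devices via the kernel's sysfs interface and show
# whether they are authorized. Path and attribute names (authorized,
# vendor_name, device_name) are assumptions based on the kernel docs.
import os

SYSFS = '/sys/bus/thunderbolt/devices'

def read_attr(dev, attr):
    try:
        with open(os.path.join(SYSFS, dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return None

for dev in sorted(os.listdir(SYSFS)):
    authorized = read_attr(dev, 'authorized')
    if authorized is None:
        continue  # e.g. domains/host controllers have no "authorized" attribute
    vendor = read_attr(dev, 'vendor_name') or '?'
    name = read_attr(dev, 'device_name') or '?'
    state = 'authorized' if authorized != '0' else 'not authorized'
    print('{}: {} {} ({})'.format(dev, vendor, name, state))

bolt's daemon does essentially this bookkeeping for you, plus the enrollment database and D-Bus API described below.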

bolt 0.1 Accidentally Working

Today I released the first version 0.1 (aka "Accidentally Working") of bolt, a system daemon that manages Thunderbolt 3 devices. It provides a D-Bus API to list devices, enroll them (authorize and store them in the local database) and forget them again (remove previously enrolled devices). It also emits signals if new devices are connected (or removed). During enrollment devices can be set to be automatically authorized as soon as they are connected. A command line tool, called boltctl, can be used to control the daemon and perform all the above mentioned tasks (see the man page of boltctl(1) for details).

I hope that other desktop environments will also find that daemon useful and use it. As this is not a stable release yet, we still have room for API changes, so feedback is welcome.

boltctl example output


New software needs testers, so everybody who has a computer with Thunderbolt 3 and feels courageous enough is welcome to give it a try. I created a copr with builds for Fedora 27 & rawhide and Jaroslav created a PKGBUILD file so Arch users can find it already in the AUR. As this is very fresh software it will contain bugs and those can be filed at the issue tracker of the github repo.

what's next: gnome-shell integration

I am locally running a proof-of-concept gnome-shell extension that implements the user session bits to complete the aforementioned design: it uses bolt's D-Bus interface, listens for new Thunderbolt devices and then enrolls them, if the user is logged in. Since it can take a while until all the devices that are attached via Thunderbolt are properly connected, the daemon has a Probing property that is used to display a little icon as a way to inform the user that something is happening on the Thunderbolt bus. All of this is already working quite well here on my test machine. In the next few days (weeks?) I will be working on integrating that code into gnome-shell. There are a few open UX questions that need to be addressed, but all in all things are looking good.


thunderbolt activity indicator (aka cable snake)


Special thanks to Alberto Ruiz, Benjamin Berg, Hans de Goede, Harald Hoyer, Javier Martinez Canillas, Jaroslav Lichtblau, Jakub Steiner, Richard Hughes who all helped and supported this project during the last few months! ❤️

14 Dec 2017 1:05pm GMT

13 Dec 2017


Federico Mena-Quintero: Librsvg moves to Gitlab

Librsvg now lives in GNOME's Gitlab instance. You can access it here.

Gitlab allows workflows similar to Github: you can create an account there, fork the librsvg repository, file bug reports, create merge requests... Hopefully this will make it nicer for contributors.

In the meantime, feel free to take a look!

This is a huge improvement for GNOME's development infrastructure. Thanks to Carlos Soriano, Andrea Veri, Philip Chimento, Alberto Ruiz, and all the people that made the move to Gitlab possible.

13 Dec 2017 8:09pm GMT

12 Dec 2017


Jehan Pagès: New format in GIMP: HGT

Lately a recurring contributor to the GIMP project (Massimo Valentini) contributed a patch to support HGT files. From this initial commit, since I found this data quite cool, I improved the support a bit (auto-detection of the variants and special-casing in particular, as well as adding an API for scripts).

So what is HGT? That's topography data basically just containing elevation in meters of various landscapes (HGT stands for "height"), gathered by the Shuttle Radar Topography Mission (SRTM) run by various space agencies (NASA, National Geospatial-Intelligence Agency, German and Italian space agencies…). To know more, you can read here and there.
HGT download source: https://dds.cr.usgs.gov/srtm/version2_1/
(go inside SRTM1/ and SRTM3/ directories for respectively 1 arc-second and 3 arc-seconds sampled data)
You probably won't find other links of interest since not everyone can produce such data (not everyone has satellites!).

Here is what it can look like after processing: left is an image obtained from NASA PDF, and right is the same data imported in GIMP followed by a gradient mapping.

So the support is not perfect yet because to get a nice looking result, you need to do it in several steps and that involves likely a bunch of tweaking. My output above is not that good (colors look a bit radioactive compared to the NASA one!) but that's mostly because I didn't take the time to tweak more.

And so that's why I am writing this blog post. Someone trying to import HGT files in GIMP may be a bit disconcerted at first (so I'm hoping you'll find this blog post to go further). At first you'd likely get a nearly uniform-looking grey image and may think that HGT import is broken. It is not.

What's happening? Why is the imported HGT all uniform grey?

GIMP by default will convert the HGT data into greyscale. That is not a problem by itself since we can have very well contrasted greys. But that doesn't happen for HGT import. Why?

HGT contains elevation data as signed 16-bit integers representing meters. In other words, it represents elevation from -32767 m to 32767 m (with an exception for -32768 which means "void", i.e. invalid data; since that's raw data with minimum processing, it can indeed contain errors). Therefore once mapped to [0; 1] range, color 0 (pure black) is invalid, ]0; 0.5] is anything under water level and [0.5; 1] is above water elevation.

Considering that on earth, the highest point is Mount Everest at 8848m, when mapped to our [0; 1] range, we see it has value 0.635. So you can see the problem: most things on earth will be represented with greys really close to 0.5 and that's why there is no contrast.

How to get nice colors and contrast?

There are several solutions, but the one proposed by the contributor was to use the "Gradient Map" plug-in. That's a good idea. Basically you remap your greys from 0 to 1 into color gradients.
Now you can try to create a gradient by setting random stops through the GUI, but that will most likely be quite a challenge. A better idea is to do it a bit more "scientifically" (i.e. to use numbers, which you can also do through the GUI by using the new blend tool, though not as accurately as I'd like with only 2 decimal places). This is what Massimo did here by creating a gradient file which would map "magenta for invalid data, blue below zero, green to 1000 m, yellow to 2000m, and gray to white above". From this base, I added a bit of random tweaking because I was trying to get an output similar to the NASA document (just for the sake of it), so you can have a look at what my own gradient file looks like. But if you are looking to, say, create a relief map with accurate elevation/color mapping, you'd prefer to stick to the number-only approach.

Then once you get your gradient "code", copy it in a file with the extension .ggr inside the gradients/ folder of your GIMP config, and just use it when running "Gradient Map" filter.

Just to explain the format a bit: for each line, you get the startpoint, midpoint and endpoint coordinates (in the [0; 1] range), followed by 4 values for RGBA (also in [0; 1] range) for the startpoint, then again 4 values for RGBA for the endpoint color. Then you get an integer for the blending mode (you likely want to keep it linear, i.e. 0, for a relief map), then the coloring value (leave it at 0 as well, which is RGB). Finally the last 2 integers are whether the startpoint and endpoint must be fixed colors, or correspond to foreground, background, etc. You will likely want to keep them as fixed colors (0).

So basically a line like this:

0.500000 0.507633 0.515267 0.000000 1.000000 0.000000 1.000000 0.000000 0.500000 0.000000 1.000000 0 0 0 0

means: gradient from 0 meter (0.5) to 1000 m ((0.515267 - 0.5) × 2^16 ≈ 1000) is a linear gradient from RGBA 0-1-0-1 (green) to RGBA 0-0.5-0-1. That is:

start mid end Rs Gs Bs As Re Ge Be Ae 0 0 0 0

where start is the start elevation and end the end elevation in [0; 1] range; and RsGsBsAs and ReGeBeAe are respectively the start and end gradient colors.

That's how you can easily map the elevation into colors! I hope that's clear! 🙂
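
If you'd rather not compute the stops by hand, a small Python helper along these lines (my own sketch, following the format just described; the colors are only examples) can convert elevations in meters to the [0; 1] range and print a ready-made .ggr line:

# Sketch: convert elevations in meters to gradient stop positions
# (0.5 = sea level) and emit one .ggr segment line as described above.
def stop(elevation_m):
    return 0.5 + elevation_m / 2.0 ** 16

def ggr_line(start_m, end_m, start_rgba, end_rgba):
    left, right = stop(start_m), stop(end_m)
    mid = (left + right) / 2.0
    fields = [left, mid, right] + list(start_rgba) + list(end_rgba)
    # trailing "0 0 0 0" = linear blending, RGB coloring, fixed start/end colors
    return ' '.join('%.6f' % f for f in fields) + ' 0 0 0 0'

# 0 m to 1000 m, green fading to dark green (example colors only)
print(ggr_line(0, 1000, (0, 1, 0, 1), (0, 0.5, 0, 1)))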

Can't we have nicer support with a GUI?

Yes, of course. It was fun and cool to review and then improve this feature, and we should not let quality patches rot in our bugtracker, but that's not my priority (as you know) so I stopped improving the feature (if I don't stop myself from all this funny stuff out there, when would I work on ZeMarmot?!).
I gladly accept new patches to improve the support and have filed myself 2 bug reports with ideas about how to improve the current tools:

In the meantime, I leave this blog post so that the format is at least understandable and HGT import usable to moderately technical people. 🙂

That's it! Hopefully this post will be useful to someone needing to process HGT files with GIMP and willing to understand how this works, until we get more intuitive support.

Reminder: my Free Software coding can be funded on
Liberapay, Patreon or Tipeee through ZeMarmot project.

12 Dec 2017 4:09am GMT

11 Dec 2017


Julita Inca: #PeruRumboGSoC2018 – Session 4

We celebrated yesterday another session of the local challenge 2017-2 "PeruRumboGSoC2018". It was held at the Centro Cultural Pedro Paulet of FIEE UNI. GTK on C was explained during the first two hours of the morning, based on the window* exercises from my repo, to handle some widgets such as windows, labels and buttons. Before the scheduled lunch, we were able to program a Language Selector using a grid with GTK on C. These are some of the students' git repos: Fiorella, Cris, Alex, Johan Diego & Giohanny. We shared a delicious Pollo a la Brasa and a tasty Inca Cola to drink during our lunch.

Martin Vuelta helped us to make our mini-application work by clicking a single language or multiple languages in our language selector to open useful links for learning those programming languages. We needed to install webkit4 packages on Fedora 27. Thank you so much, Martin, for supporting the group with your expertise and good sense of humor! We are going to have three more sessions this week to finish the program.


Filed under: Education, Events, FEDORA, GNOME, τεχνολογια :: Technology, Programming Tagged: #PeruRumboGSoC2018, apply GSoC, fedora, Fedora + GNOME community, Fedora Lima, gnome 3, GSoC 2018, GSoC Peru Preparation, Julita Inca, Julita Inca Chiroque, Lima, Peru Rumbo al GSoC 2018

11 Dec 2017 11:23pm GMT

Jehan Pagès: ZeMarmot project got a Liberapay account!

We were asked a few times about trying out Liberapay. It is different from other recurring funding platforms (such as Patreon and Tipeee) in that it is managed by a non-profit and has lower fees (from what I understand, there are payment processing fees, but they don't add their own fees) since they fund themselves on their own platform (also, the website itself is Free Software).

Though we liked the concept, until now we were a bit reluctant for a single reason: ZeMarmot is already on 2 platforms (we started before even knowing Liberapay), and more platforms mean more time spent to manage them all, time we prefer using to hack Free Software and draw/animate Libre Animation.

Nevertheless, Patreon recently made fee policy changes which seem to have angered the web (just search for it on your favorite web search engine; there are dozens of articles on the topic) and, like most Patreon projects, we lost many patrons (at the time of writing, 20 patrons representing $80.63 in pledges left within 4 days, more than 10% of the patronage received from this platform!).

So we decided to give Liberapay a try. If you like ZeMarmot project, and our contributions to GIMP, then feel free to fund us at:

» ZeMarmot Liberapay page «
https://liberapay.com/ZeMarmot/

Main difference with other platforms:

Now this is just another platform, we are not abandoning Patreon and Tipeee. Don't feel like you have to move your patronage to Liberapay if you don't want to. From now on, this will just be one more option.

Finally we remind that ZeMarmot project is fully managed by a non-profit registered in France, LILA. This means there are also other means to support the project, for instance with direct bank transfers (most European banks allow monthly bank transfers without any fees, so if you are in the EU, this may be the best solution), or Paypal (though for very small amounts, fees are quite expensive, they are quite ok for most donations), etc. To see the full list of ways to fund LILA, hence ZeMarmot and GIMP: https://libreart.info/en/donate

11 Dec 2017 5:54pm GMT

Richard Hughes: CSR devices now supported in fwupd

On Friday I added support for yet another variant of DFU. This variant is called "driverless DFU" and is used only by BlueCore chips from Cambridge Silicon Radio (now owned by Qualcomm). The "driverless" just means that it's DFU-like and routed over HID, but it's otherwise an unremarkable protocol. CSR is a huge ODM that makes most of the Bluetooth audio chips in vendor hardware. The hardware vendor can enable or disable features on the CSR microcontroller depending on licensing options (for instance echo cancellation), and there's even a little virtual machine to do simple vendor-specific things. All the CSR chips are updatable in-field, and most vendors issue updates to fix sound quality issues or to add support for new protocols or devices.

The BlueCore CSR chips are used everywhere. If you have a "wireless" speaker or headphones that uses Bluetooth there is a high probability that it's using a CSR chip inside. This makes the addition of CSR support into fwupd a big deal to access a lot of vendors. It's a lot easier to say "just upload firmware" rather than "you have to write code" so I think it's useful to have done this work.

The vendor working with me on this feature has been the awesome AIAIAI who make some very nice modular headphones. A few minutes ago we uploaded the H05 v1.5 firmware to the LVFS testing stream and v1.6 will be coming soon with even more bug fixes. To update the AIAIAI H05 firmware you just need to connect the USB cable and press and hold the top and bottom buttons on the headband until the LED goes out. You can then update the firmware using fwupdmgr update or just using GNOME Software. The big caveat is that you have to be running fwupd >= 1.0.3 which isn't scheduled to be released until after Christmas.

I've contacted some more vendors I suspect are using the CSR chips. These include:

If you know of any other "wireless speaker" companies that have issued at least one firmware update to users, please let me know in a comment here or in an email. I will follow up all suggestions and put the status on the Naughty&Nice vendorlist so please check that before suggesting a company. It would also be really useful to know the contact details (e.g. the web-form URL, or the email address) and also the model name of the device that might be updatable, although I'm happy to google myself if required. Thanks as always to Red Hat for allowing me to work on this stuff.

11 Dec 2017 12:51pm GMT

Christian Hergert: Grow your skills with GNOME

Another year of GNOME development is coming to a close so it's time to look back as we forge into 2018. This is going to be more verbose than I generally write. I hope you'll have a warm drink and take the time to read through because I think this is important.

Twenty years of GNOME is a monumental achievement. We know so much more about software development than we did when we started. We regularly identify major shortcomings and try to address them. That is a part of our shared culture I enjoy greatly.

GNOME contributors have a wide variety of interests and direction when it comes to a computer's role in our lives. That naturally creates an ever-expanding set of goals. As our goals expand we must become more organized if our quality is to maintain or improve.

Traditionally, we have a very loosely organized project. People spend their time on things that interest them, which does not put the focus on the end product. I intend to convince you that is now holding us back. We're successful not because of our engineering focus but despite it. This results in overworking our contributors and we can do better.


Those that have not worked in larger engineering companies may be less familiar with some of the types of roles there are in software development. So let's take a moment to describe these roles so everyone is on the same page.

Programmers are responsible for maintaining the code-base and implementing new features. All of us are familiar with this role in GNOME because it's what a large number of our contributors do.

Designers are responsible for thinking through the current and planned features to find improved ways for users to solve their problems.

Graphic Designers can often overlap with Design, but not necessarily. They're responsible for creating the artwork used in the given project.

Quality Assurance ensures that you don't ship a product that is broken. You don't wait until the freezes to do this, but do it as features are developed so that the code is fresh in the programmers minds while addressing the issues. The sooner you catch issues, the less likely a code or design failure reaches users.

User Support is your front-line defense to triage incoming issues by your users. Without users your project is meaningless. Finding good people for this role can have a huge impact on keeping your users happy and your developers less stressed. If your bug tracker is also your user support, you might want to ask yourself if you really have user support. When you have a separate support system and bug-tracker, user support is responsible for converting user issues into detailed bug reports.

Security Engineers look for trust, privacy, and other safety issues in products and infrastructure. They take responsibility to ensure issues are fixed in a timely manner and work with others when planning features to help prevent issues in the first place.

User and Developer Advocates are liaisons between your team and the people using (or developing third-party tools with) your product. They amplify the voices of those speaking important truths.

User Testing is responsible for putting your product in-front of users and seeing how well they can perform the given tasks. Designers use this information to refine and alter designs.

Tech writers are responsible for writing technical documentation and help guides. They also help refine programmer authored API documentation. This role often fulfills the editor role to ensure a unified voice to your project's written word.

Build engineers ensure that your product can be packaged, built reliably, and distributed to users.

Operations and "DevOps" ensure that your product is working day-to-day. They provide and facilitate the tooling that these roles need to do their jobs well.

Internationalization and localization ensure that your software is available to a group of users who might otherwise not be able to use your software. It enables your software to have global impact.

Release management is your final check-point for determining when and what you release based on the goals of the project. They don't necessarily determine road-maps, but they do keep you honest.

Product managers are responsible for taking information and feedback from all these roles and converting that into a coherent project direction and forward-looking vision. They analyze common issues, bug velocity, and features to ensure reasonable milestones that keep the product functional as it transforms into its more ideal state. Most importantly, this role is leadership.

There are other roles involved in the GNOME project. If I didn't include your role here, it is by no means of any lesser value.


For the past 3 years I've been working very hard because I fulfill a number of these roles for Builder. It's exhausting and unsustainable. It contributes to burnout and hostile communication by putting too much responsibility on too few people's shoulders.

I believe that communication breakdown is a symptom of a greater problem, not the problem itself.

To improve the situation, we need to encourage more people to join us in non-programming roles. That doesn't mean that you can't program, but simply a recognition that these other roles are critical to a functioning and comprehensive software project.


There are a few strategies used by companies with how to structure teams. But they can be generalized into three forms, as follows.

Teams based on product contain the aforementioned roles, but are assembled in a tight-knit group where people are focused on a single product. This model can excel at ensuring all of the members are focused on a single vision.

Teams based on role contain the aforementioned roles, but are assembled by the role. Members of the team work on different projects. This model can excel at cross-training newer team members.

A hybrid approach tries to balance the strengths of both team-based and role-based so that your team members get long-term mentorship but stick around in a project long enough to benefit from contextual knowledge.

To some degree we have teams based on role even though it's very informal. I think we could really gain from increasing our contributors in these roles and taking a hybrid approach. For the hybrid approach to work, there needs to be strong mentorship for each role.

My current opinion is that with a strong focus on individual products, we can improve our depth of quality and address many outstanding user issues.

Due to how loosely assembled our teams are I think it is very difficult for someone to join GNOME and provide one of these non-programming roles in an existing project. That is because they not only need to fulfill the role but also define what that role should be and how it would contribute to the project. Then they need to convince the existing team members it's needed.


With stronger inclusion of these roles into our software process we can begin to think about the long-term skill development of our contributors.

A good manager shepherds their team by ensuring they refine existing skills while expanding to new areas of interest.

I want people to know that by joining GNOME they can feel assured that they will be part of something greater than themselves. They will both refine and develop new skills. In many ways, we provide an accelerator for career development. We can provide an opportunity that might otherwise be unapproachable.


If contributing to GNOME in one or more of these roles sounds interesting to you, then please come join us. We need to learn to rely on each other in new ways. For that to happen, self-organization to fulfill these roles must become a priority.

https://www.gnome.org/get-involved/

11 Dec 2017 2:22am GMT

10 Dec 2017


Jim Hall: The art of the usability interview

During a usability test, it's important to understand what the tester is thinking. What were they looking for when they couldn't find a button or menu item? During the usability test, I recommend that you try to observe, take notes, capture as much data as you can about what the tester is doing. Only after the tester is finished with a scenario or set of scenarios should you ask questions.

But how do you ask questions in a way to gain the most insight? Asking the right questions can sometimes be an art form; it certainly requires practice. A colleague shared with me a few questions she uses in her usability interviews, and I am sharing them here for your usability interviews:

Before starting a scenario or set of scenarios:


After finishing a set of scenarios:



The goal is to avoid leading questions, or any questions that suggest a "right" or "wrong" answer.

10 Dec 2017 10:49pm GMT

09 Dec 2017


Sebastian Pölsterl: scikit-survival 0.5 released

Today, I released a new version of scikit-survival. This release adds support for the latest versions of scikit-learn (0.19) and pandas (0.21). In turn, support for Python 3.4, scikit-learn 0.18 and pandas 0.18 has been dropped.

Many people are confused about the meaning of predictions. Often, they assume that predictions of a survival model should always be non-negative since the input is the time to an event. However, this is not always the case. In general, predictions are risk scores of arbitrary scale. In particular, survival models usually do not predict the exact time of an event, but the relative order of events. If samples are ordered according to their predicted risk score (in ascending order), one obtains the sequence of events, as predicted by the model. A more detailed explanation is available in the Understanding Predictions in Survival Analysis section of the documentation.
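
As a tiny illustration of that point (my own made-up example, not part of the release), fitting a Cox model and sorting samples by predicted risk score could look like this:

#!/usr/bin/env python3
# Sketch: survival model predictions are risk scores on an arbitrary scale;
# sorting samples by their score gives the predicted order of events.
# The data below is made up purely for illustration.
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis

X = np.array([[62, 1.2], [50, 0.7], [71, 2.1], [45, 0.4],
              [66, 1.5], [58, 0.9], [80, 2.8], [52, 0.6]])
# Structured array: event indicator first, then observed time.
y = np.array([(True, 4.0), (False, 12.0), (True, 2.0), (False, 15.0),
              (True, 6.0), (True, 9.0), (True, 1.0), (False, 11.0)],
             dtype=[('event', bool), ('time', float)])

model = CoxPHSurvivalAnalysis().fit(X, y)
risk = model.predict(X)  # arbitrary scale, may well be negative
print('risk scores:', np.round(risk, 2))
print('samples ordered by predicted risk:', np.argsort(risk))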

Download

You can install the latest version via Anaconda (Linux, OSX and Windows):

conda install -c sebp scikit-survival

or via pip:

pip install -U scikit-survival

09 Dec 2017 11:37am GMT