19 Jan 2020

feedPlanet KDE

This week in KDE: Plasma 5.18 is the release you’ve been waiting for

A ton of features, bugfixes, and user interface improvements have landed for Plasma 5.18, and a beta release is now available for adventurous types to test out ahead of the final release next month.

I think this is the Plasma release that a lot of people have been waiting for. It's faster, more stable, more beautiful, and more feature-filled than any previous release, and it comes with a Long Term Support guarantee. Nothing is perfect, and we can always do better (and always strive to), but I think Plasma 5.18 is going to hit a sweet spot for a lot of people. Look for it in the next LTS releases of Kubuntu and openSUSE Leap, as well as all the rolling release distros of course.

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

Please test the Plasma 5.18 beta release! You can find out how here. If you find bugs, file them, especially if they are regressions from Plasma 5.17. We want to get as much testing, bug filing, and bugfixing as possible during the one-month beta period so that the official release is as smooth as possible.

More generally, have a look at https://community.kde.org/Get_Involved and find out more ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don't have to already be a programmer, either. I wasn't when I got started. Try it, you'll like it! We don't bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

19 Jan 2020 5:05am GMT

18 Jan 2020

feedPlanet KDE

Itinerary extraction in Nextcloud Hub

Nextcloud announced their latest release, and among the many new features is itinerary extraction from emails. That's using KDE's extraction engine, the same one that powers similar features in KMail as well.

Nextcloud Hub

Yesterday Nextcloud turned 10 years old, so that was a good date to announce a big new release, Nextcloud Hub (which I erroneously called Nextcloud 18 on Twitter). Nextcloud Hub now has email support built in, and with it support for extracting booking information from train, bus or flight tickets as well as hotel and event reservations. Besides an easy-to-read summary of the booking data on top of the mail, there's also the ability to add corresponding calendar entries.

Frank presenting itinerary extraction in Nextcloud Mail.

Those are very nice and useful features of course, but obviously I'm particularly happy that this uses the same technology we implemented over the past two years for KMail and KDE Itinerary, thanks to a collaboration started at FOSDEM 2019.

Integration

Getting a C++ extraction library and a PHP web application to work together isn't entirely straightforward though. We ended up doing this via a separate command-line extractor tool, similar to how the Plasma Browser Integration interfaces with the extractor as well.
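The pattern itself is easy to illustrate though. Here is a minimal Python sketch of it, not the actual Nextcloud PHP code, assuming a command-line extractor binary (called kitinerary-extractor below, which is an assumption about the exact name and invocation) that reads a raw email on stdin and prints the extracted reservations as JSON:

import json
import subprocess

def extract_itinerary(raw_email: bytes):
    # The web application shells out to the extractor instead of linking
    # against the C++ library directly; binary name and JSON-on-stdout
    # behaviour are assumptions here.
    proc = subprocess.run(["kitinerary-extractor"], input=raw_email,
                          stdout=subprocess.PIPE, check=True)
    return json.loads(proc.stdout)

with open("ticket-mail.eml", "rb") as f:      # hypothetical sample mail
    for reservation in extract_itinerary(f.read()):
        print(reservation.get("@type"), reservation.get("reservationNumber"))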

During the Nextcloud Hackweek last week we also extended the command-line tool to produce iCal output, to avoid some code duplication for the calendar integration and to ensure compatibility with the KDE Itinerary app. These changes didn't make it into the current release packages, but should become available with the next update.

That leaves the question of deployment. For PHP applications that's usually just a matter of unpacking an archive, but for native executables things are a bit more complicated. The installation packages therefore contain a full static build of the extractor. As a side effect of this, itinerary extraction is currently only supported on 64-bit x86 platforms.

Nextcloud Mail showing a Deutsche Bahn train booking (screenshot by Nextcloud).

Using the same technology everywhere of course also means improvements benefit everyone. So I'm very much looking forward to the increased user base resulting in more data sample donations and contributions in general :)

FOSDEM 2020

If you are visiting FOSDEM in two weeks, there will be plenty of opportunity to learn more about all this, for example by visiting Jos' Nextcloud talk, my KDE Itinerary talk, or by dropping by the KDE stand in building K and the Nextcloud stand in building H. See you in Brussels!

18 Jan 2020 10:45am GMT

feedPlanet GNOME

Julian Sparber: Digitizing an analog water meter

For a university project I spent some time working on digitally tracking the water consumption in my shared flat. Since nowadays everything is about data collection, I wanted to give this idea a shot. We have a simple analog water meter in the flat.

Sadly, my meter is really dirty under the glass and I couldn't manage to clean it. This will cause problems down the road.

The initial idea was easy: add a webcam on top of the meter and read the number on the upper half of it. But I soon realized that the project wouldn't be that simple. The number only counts whole units of 1 m^3 (1000 liters), which means it would change only every couple of days, which is useless and boring. So I had to read the analog gauges, which show the fractions in 0.0001, 0.001, 0.01 and 0.1 m^3. This discovery blocked me, and I was like "this is way too complicated".

I have no idea how I found or what reminded me of OpenCV, but that was the solution. OpenCV is an awesome tool for computer vision; it has many features like facial recognition, gesture recognition … and also shape recognition. What's an analog gauge? It's just a circle with a triangular arrow indicating the value.

Let's jump in to the project

I'm using a Raspberry Pi 1, a Logitech webcam, a juice bottle and some LEDs out of a bicycle light.

You need to find a juice bottle which fits nicely over the water meter. Cut off the top and bottom of the bottle and replace one side with cardboard or wood with a hole in the middle. Attach the webcam centered over the hole and place an LED on each side of the webcam to illuminate the water meter (you may need to cover them with paper to reduce reflections on the plastic of the meter).

The first step is to set up the Raspberry Pi 1 (it doesn't have to be an RPi, any computer running Linux should work fine). You have to install a Linux distro on the device; I used Arch Linux. You can find a guide to install it on a Raspberry Pi 1 here.

After the initial setup you need to install git, python3 and opencv:
sudo pacman -S python3 git opencv
Clone the needed code to a known location:
git clone https://github.com/jsparber/water-meter-code.git
You need to create a new git repository to store the data and clone it to /home/alarm/water-meter-data/. If you want to use a different name or location, you need to modify the name in measure.sh.

On the RPi I have a cronjob which runs a script every minute. The script turns on the LEDs, takes a picture, and then turns the LEDs off again to save some energy.
With crontab -e you can modify the cron jobs; add * * * * * /home/alarm/code/take_photo.sh to run take_photo.sh every minute (you may need to adjust the path depending on where you cloned the git repo).
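The actual take_photo.sh in the repo is a shell script; the following is only a rough Python sketch of the same flow, assuming the LEDs are powered through the GPIO-controlled USB switch described further down, that GPIO pin 18 is used in BCM numbering, and that the webcam shows up as camera index 0:

import time
from datetime import datetime

import cv2
import RPi.GPIO as GPIO

LED_PIN = 18                                  # BCM numbering assumed
OUTPUT_DIR = "/home/alarm/water-meter-data"   # assumed data checkout

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

GPIO.output(LED_PIN, GPIO.HIGH)               # power the LEDs via the USB switch
time.sleep(1)                                 # let the webcam adjust its exposure

cam = cv2.VideoCapture(0)                     # camera index assumed
ok, frame = cam.read()
cam.release()

GPIO.output(LED_PIN, GPIO.LOW)                # LEDs off again to save energy
GPIO.cleanup()

if not ok:
    raise SystemExit(1)                       # caller can retry or reboot the RPi

name = datetime.now().strftime("%Y%m%d-%H%M%S") + ".jpg"
cv2.imwrite(OUTPUT_DIR + "/" + name, frame)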

After the picture is taken, it calls a second script which uses OpenCV to read the gauges and appends the found values to a file, which is then pushed to the git repo. I had an issue with the webcam: after some time my script couldn't access it anymore. I solved that by rebooting the RPi when it wasn't possible to take a picture. (I did a quick search on the internet; most people solved this issue by changing the cam.)

A nice optional feature is the home-made switch connected to the RPi in the picture above. The schematic is really simple: it's only a 1 kOhm resistor, a transistor and a USB extension cable. The transistor is switched on via GPIO pin 18 of the Raspberry Pi and gives power to the connected USB device. In this case I used it to connect the LEDs.

Inside the USB extension cable there should be four differently colored wires. We need to cut only the red one and connect it the same way as the schematic above shows, where red_in goes to the male connector and red_out to the female side of the cable. GND needs to be connected to a ground pin of the Raspberry Pi; if you need to power something which requires more than 500 mA, you should connect the ground directly to the power source, the same way as you did with the +5V red wire. You need to use the same power source for the switch and the RPi, or it may not work.

And now the OpenCv part

First my code finds the circles of the right size in the image and uses the two leftmost ones as the gauges for 0.1 m^3 and 0.01 m^3 (sadly, since my meter is so dirty, I can't reliably read the other two values).

The input image. The circles of the right size that were found.
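To give an idea of what this step looks like, here is a minimal sketch of the circle detection with cv2.HoughCircles; the parameter values and radius limits below are placeholders that depend on the camera distance and resolution, not the tuned values from the actual code:

import cv2
import numpy as np

img = cv2.imread("meter.jpg")                     # one frame from the webcam
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Look for circles roughly the size of the gauges; all numbers here are
# placeholders and need tuning for the real setup.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 40,
                           param1=100, param2=40, minRadius=15, maxRadius=40)

if circles is not None:
    circles = np.round(circles[0]).astype(int)    # each row: x, y, radius
    # The two leftmost circles are the 0.1 m^3 and 0.01 m^3 gauges.
    gauges = sorted(circles.tolist(), key=lambda c: c[0])[:2]
    print(gauges)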

As the second step I create a mask which filters out everything that's not red (remember, the arrows are red). I take the contour of the mask which encloses the center of the circle I want to read. Then I find the farthest point of that contour from the center of the circle, which is the tip of the arrow. The software then creates a virtual line between the center and the tip, which is used to calculate the angle, which is basically the value shown on the gauge. The same thing is repeated for the other gauge.

The mask with only the red areas showing. The arrows found on the source image; these lines are used to calculate the angle.
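A sketch of this mask-and-angle step could look like the following. The HSV thresholds are assumptions (red sits at both ends of the hue range, so two masks are combined), and the clockwise mapping from angle to value is simplified compared to a real meter, where some dials turn the other way:

import math

import cv2

def read_gauge(img, cx, cy):
    """Return the value of one gauge as a fraction of a full turn (0..1)."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so two ranges are combined;
    # the threshold values are placeholders and need tuning.
    mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))

    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        # Only the red arrow whose contour encloses the gauge center matters.
        if cv2.pointPolygonTest(cnt, (float(cx), float(cy)), False) < 0:
            continue
        # The farthest contour point from the center is the tip of the arrow.
        tip = max(cnt[:, 0, :], key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        # Angle of the center->tip line, measured clockwise from 12 o'clock,
        # mapped to 0..1 (image y grows downwards, hence cy - tip_y).
        angle = math.atan2(tip[0] - cx, cy - tip[1])
        return (angle % (2 * math.pi)) / (2 * math.pi)
    return None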

This system sounds extremely simple, but making everything work well together isn't that easy. OpenCV requires a lot of tuning, e.g. selecting the right red color range so that it is detected reliably but keeps working even when the lighting changes.

Conclusions

I learned a lot during this project, especially about OpenCV, which I had never used before. Sadly my water meter was really dirty, so I couldn't read all the values and I also get some wrong readings. So far I haven't decided what I want to use the collected data for, so I didn't spend much time on finding a solution for read errors and for the problems that occur when a gauge makes a full turn. An easy solution would be to just keep an internal count of the water use, and when we are unsure about a value, fall back to the memorized value.
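Such an internal count could look roughly like the sketch below (just an illustration of the idea, not code from my repo): only accept a reading if it moved forward, handle the wrap back past zero, and otherwise keep the memorized value.

class MeterCounter:
    """Keep a monotonically increasing total from one gauge that reads
    0..1 per revolution and wraps around (illustration only)."""

    def __init__(self, initial_reading=0.0):
        self.last = initial_reading
        self.total = 0.0

    def update(self, reading):
        if reading is None:          # read error: keep the memorized value
            return self.total
        delta = reading - self.last
        if delta < -0.5:             # the needle wrapped past zero
            delta += 1.0
        if 0.0 <= delta <= 0.5:      # ignore implausible backwards jumps
            self.total += delta
            self.last = reading
        return self.total

Feeding it, say, the 0.1 m^3 gauge reading after every capture would give a total that never jumps backwards because of a single misread frame.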

The final plot can be found here. All values are saved directly without filtering; this causes the plot to have quite some noise, but it allows changing the filter function later and adapting it to future needs.

My code is published on github:

Some sources which helped me a lot, many thanks to them:

18 Jan 2020 10:22am GMT

17 Jan 2020

feedPlanet GNOME

Tobias Bernard: Doing Things That Scale

There was a point in my life when I ran Arch, had an elaborate personalized terminal prompt, and my own custom icon theme. I stopped doing all these things at various points for different reasons, but underlying them all is a general feeling that it's taken me some time to figure out how to articulate: I no longer want to invest time in things that don't scale.

What I mean by that in particular is things that

  1. Only fix a problem for myself (and maybe a small group of others)
  2. Have to be maintained in perpetuity (by me)

Not only is it highly wasteful for me to come up with a custom solution to every problem, but in most cases those solutions would be worse than ones developed in collaboration with others. It also means nobody will help maintain these solutions in the long run, so I'll be stuck with extra work, forever.

Conversely, things that scale

  1. Fix the problem in a way that will just work™ for most people, most of the time
  2. Are developed, used, and maintained by a wider community

A few examples:

I used to have an Arch GNU/Linux setup with tons of tweaks and customizations. These days I just run vanilla Fedora. It's not perfect, but for actually getting things done it's way better than what I had before. I'm also much happier knowing that if something goes seriously wrong I can reinstall and get to a usable system in half an hour, as opposed to several hours of tedious work for setting up Arch. Plus, this is a setup I can actually install for friends and relatives, because it does a decent job at getting people to update when I'm not around.

Until relatively recently I always set a custom monospace font in my editor and terminal when setting up a new machine. At some point I realized that I wouldn't have to do that if the default was nicer, so I just opened an issue. A discussion ensued, a better default was agreed upon, and voilà - my problem was solved. One less thing to do after every install. And of course, everyone else now gets a nicer default font too!

I also used to use ZSH with a configuration framework and various plugins to get autocompletion, git status, a fancy prompt etc. A few years ago I switched to fish. It gives me most of what I used to get from my custom ZSH thing, but it does so out of the box, no configuration needed. Of course ideally we'd have all of these things in the default shell so everyone gets these features for free, but that's hard to do unfortunately (if you're interested in making it happen I'd love to talk!).

Years ago I used to maintain my own extension set to the Faenza icon theme, because Faenza didn't cover every app I was using. Eventually I realized that trying to draw a consistent icon for every single third party app was impossible. The more icons I added, the more those few apps that didn't have custom icons stuck out. Nowadays when I see an app with a poor icon I file an issue asking if the developer would like help with a nicer one. This has worked out great in most cases, and now I probably have more consistent app icons on my system than back when I used a custom theme. And of course, everyone gets to enjoy the nicer icons, not only me.

Some other things that don't scale (in no particular order):

The free software community tends to celebrate custom, hacky solutions to problems as something positive ("It's so flexible!"), even when these hacks are only necessary because things are broken by default. It's nice that people with a lot of time and technical skills can fix their own problems, but the benefits from that don't automatically trickle down to everybody else.

If we want ethical technology to become accessible to more people, we need to invest our (very limited) time and energy in solutions that scale. This means good defaults instead of endless customization, apps instead of scripts, "it just works" instead of "read the fucking manual". The extra effort to make proper solutions that work for everyone, rather than hacks just for ourselves, can seem daunting, but is always worth it in the long run. Just as with accessibility and commenting your code, the person most likely to benefit from it is you, in the future.

17 Jan 2020 9:45pm GMT

feedPlanet KDE

Learning about our users

In a product like Plasma, knowing what our existing users care about and use sheds light on what needs polishing or improving. At the moment, the input we have comes either from the loudest, most involved people or from outright bug reports, which leads to a confirmation bias.

What do our users like about Plasma? On which hardware do people use Plasma? Are we testing Plasma on the same kind of hardware Plasma is being used on?

Some time ago, Volker Krause started the KUserFeedback framework with two main features: first, allowing applications to send information about their usage, depending on the user's preferences; and second, including mechanisms to explicitly ask users for feedback. This has been deployed in several products already, like GammaRay and Qt Creator, but we had never adopted it in KDE software.

The first step has been to allow our users to tune how much information Plasma products should be telling KDE about the systems they run on.

This mechanism is only integrated into Plasma and Discover right now, but I'd like to extend this to others like System Settings and KWin in the future too.

Privacy

We understand very well how this relates to privacy. As you can see, we have been careful to only request information that is important for improving the software, and we are doing so while making sure this information is as unidentifiable and anonymous as possible.

In the end, I'd say we all want to see Free Software which is respectful of its users and that responds to people rather than the few of us working from a dark (or bright!) office.

In case you have any doubts, you can see KDE's Applications Privacy Policy and specifically the Telemetry Policy.

Plasma 5.18

This will be coming really soon, in the next Plasma release in early February 2020. It is all opt-in: you will have to enable it. And please do so, let it be another way for you to contribute to Free Software. 🙂

If you can't find the module, please tell your distribution. The feature is very new, and if the KUserFeedback framework isn't present, the module won't be built.

17 Jan 2020 6:06pm GMT

feedplanet.freedesktop.org

Hans de Goede: Plug and play support for (Gaming) keyboards with a builtin LCD panel

A while ago as a spin-off of my project to improve support for Logitech wireless keyboards and mice I have also done some work on improving support for (Gaming) keyboards with a builtin LCD panel.

Specifically, if you have a Logitech MX5000, G15, G15 v2 or G510 and you want the LCD panel to show something somewhat useful, then on Fedora 31 you can now install the lcdproc package and it will automatically recognize the keyboard and show "top"-like information on it. No need to manually write an LCDd.conf or anything; this works fully plug and play:

sudo dnf install lcdproc
sudo udevadm trigger


If you have an MX5000 and you do not want the LCD panel to show "top"-like info, you may still want to install the mx5000tools package; this will automatically send the system time to the keyboard, after which it will display the time.

Once the 5.5 kernel becomes available as an update for Fedora, you will also be able to use the keys surrounding the LCD panel to control the lcdproc menus on the LCD panel. The 5.5 kernel will also export key backlight brightness control through the standardized /sys/class/leds API, so that you can control it from e.g. GNOME control-center's power settings, and you get a nice OSD when toggling the brightness level using the key on the keyboard.
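Once that LED class device exists, the backlight can also be set directly from a script through the same sysfs files; a small sketch is below, where the entry name under /sys/class/leds is a placeholder and has to be replaced with whatever the driver actually creates:

from pathlib import Path

# The LED class device name is a placeholder; list /sys/class/leds to find
# the entry the driver actually creates for your keyboard.
led = Path("/sys/class/leds/g15::kbd_backlight")

max_brightness = int((led / "max_brightness").read_text())
# Writing to "brightness" needs root permissions (or a suitable udev rule).
(led / "brightness").write_text(str(max_brightness // 2))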

The 5.5 kernel will also make the "G" keys send standard input events (evdev events); once userspace support for the new key codes they send has landed, this will allow e.g. binding them to actions in GNOME control-center's keyboard settings. This only works under Wayland though, as the new keycodes are > 255 and X11 does not support this.

17 Jan 2020 1:39pm GMT

feedplanet.freedesktop.org

Iago Toral: Raspberry Pi 4 V3D driver gets OpenGL ES 3.1 conformance

So continuing with the news, here is a fairly recent one: as the title states, I am happy to announce that the Raspberry Pi 4 is now an OpenGL ES 3.1 conformant product! This means that the Mesa V3D driver has successfully passed a whole lot of tests designed to validate the OpenGL ES 3.1 feature set, which should be a good sign of driver quality and correctness.

It should be noted that the Raspberry Pi 4 shipped with a V3D driver exposing OpenGL ES 3.0, so this also means that on top of all the bugfixes we implemented for conformance, the driver has also gained new functionality! In particular, we merged Eric's previous work to enable Compute Shaders.

All this work has been in Mesa master since December (I believe there is only one fix missing waiting for us to address review feedback), and will hopefully make it to Raspberry Pi 4 users soon.

17 Jan 2020 10:02am GMT

Iago Toral: Raspberry Pi 4 V3D driver gets Geometry Shaders

I actually landed this in Mesa back in December but never got to announce it anywhere. The implementation passes all the tests available in the Khronos Conformance Tests Suite (CTS). If you give this a try and find any bugs, please report them here with the V3D tag.

This is also the first large feature I land in V3D! Hopefully there will be more coming in the future.

17 Jan 2020 9:45am GMT