18 Jan 2020

Planet GNOME

Julian Sparber: Digitizing an analog water meter

For a university project I spent some time working on digitally tracking the water consumption in my shared flat. Since nowadays everything is about data collection, I wanted to give this idea a shot. The flat has a simple analog water meter.

Sadly, my meter is really dirty under the glass and I couldn't manage to clean it. This will cause problems down the road.

The initial idea was simple: add a webcam on top of the meter and read the number on its upper half. But I soon realized that the project wouldn't be that easy. The number only counts whole units of 1 m^3 (1000 liters), which means it would change only every couple of days, which is useless and boring. So I had to read the analog gauges, which show the fractions of 0.0001, 0.001, 0.01 and 0.1 m^3. This discovery blocked me for a while, and I was like "this is way too complicated".

I have no idea how I found or what reminded me of OpenCV, but that was the solution. OpenCV is an awesome tool for computer vision; it has many features like facial recognition, gesture recognition … and also shape recognition. What's an analog gauge? It's just a circle with a triangular arrow indicating the value.

Let's jump into the project

I'm using a Raspberry Pi 1, a Logitech webcam, a juice bottle and some LEDs out of a bicycle light.

You need to find a juice bottle which fits nicely over the water meter. Cut off the top and bottom of the bottle and replace one side with cardboard or wood with a hole in the middle. Attach the webcam centered over the hole and place an LED on each side of the webcam to illuminate the water meter (you may need to cover them with paper to reduce reflections on the plastic of the meter).

The first step is to set up the Raspberry Pi 1 (it doesn't have to be an RPI, any computer running Linux should work fine). You have to install a Linux distro on the device; I used Arch Linux. You can find a guide to install it on a Raspberry Pi 1 here.

After the initial setup you need to install git, python3 and opencv:
sudo pacman -S python3 git opencv
Clone the needed code to a known location:
git clone https://github.com/jsparber/water-meter-code.git
You need to create a new git repository to store the data and clone it to /home/alarm/water-meter-data/. If you want to use a different name or location, you need to modify the name in measure.sh.

On the RPI I have a cron job which runs a script every minute. The script turns on the LEDs, takes a picture, and then turns the LEDs off again to save some energy.
With crontab -e you can modify the cron jobs; add * * * * * /home/alarm/code/take_photo.sh to run take_photo.sh every minute. You may need to adjust the path depending on where you cloned the git repo.

After the picture is taken, it calls a second script which uses OpenCV to read the gauges and appends the found values to a file, which is then pushed to the git repo. I had an issue with the webcam: after some time my script couldn't access it anymore. I solved it by rebooting the RPI whenever it wasn't possible to take a picture. (I did a quick search on the internet; most people solved this issue by changing the cam.)

A nice optional feature is the homemade switch connected to the RPI in the above picture. The schematic is really simple: just a 1 kOhm resistor, a transistor and a USB extension cable. The transistor is switched on via GPIO pin 18 of the Raspberry Pi and supplies power to the connected USB device. In this case I used it to connect the LEDs.

Inside the USB extension cable there should be 4 differently colored wires. We only need to cut the red one and connect it the way the schematic above shows, where red_in goes to the male connector and red_out to the female side of the cable. GND needs to be connected to a ground pin of the Raspberry Pi. If you need to power something which requires more than 500 mA, you should connect the ground directly to the power source, the same way as you did with the +5V red cable. You need to use the same power source for the switch and the RPI, or it may not work.
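
The project's scripts drive this switch from the shell; purely as an illustration, here is a minimal Python sketch of toggling that transistor via GPIO pin 18 with the RPi.GPIO library. The BCM pin numbering and the two-second delay are assumptions, not taken from the project:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)        # assumption: "pin 18" means GPIO18 in BCM numbering
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, GPIO.HIGH)    # transistor conducts, the USB LEDs get +5V
time.sleep(2)                 # hypothetical: long enough for the webcam shot
GPIO.output(18, GPIO.LOW)     # cut the power again to save energy
GPIO.cleanup()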

And now the OpenCV part

First my code finds the circles of the right size in the image and uses the two leftmost ones as the gauges for 0.1 m^3 and 0.01 m^3 (sadly, since my meter is so dirty, I can't reliably read the other two values).

The input image. The circles of the right size that were found in it.
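
As an illustration of this step (this is not the author's actual code), circles like these can be found with OpenCV's Hough transform. The file name, radii and detection parameters below are made-up values that would need tuning for a real camera setup:

import cv2
import numpy as np

img = cv2.imread("photo.jpg")                      # hypothetical input image
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
# Look for circles roughly the size of the gauges; the radii are in pixels
# and depend on the distance between webcam and meter.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=40, minRadius=20, maxRadius=60)
if circles is not None:
    circles = np.round(circles[0]).astype(int)     # one (x, y, r) row per circle
    circles = sorted(circles, key=lambda c: c[0])  # sort left to right
    gauges = circles[:2]                           # two leftmost: 0.1 and 0.01 m^3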

As the second step I create a mask which filters out everything that's not red (remember, the arrows are red). I take the contour of the mask which encloses the center of the circle I want to read. Then I find the point of that contour farthest from the center of the circle, which is the tip of the arrow. The software then creates a virtual line between the center and the tip, which is used to calculate the angle, and the angle is basically the value shown on the gauge. The same thing is repeated for the other gauges.

The mask with only red areas showing. The arrows found in the source image; these lines are used to calculate the angle.
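
A minimal sketch of that idea follows (again, not the project's code); the HSV thresholds, the helper name read_gauge, and the assumption that a gauge's zero sits at twelve o'clock and increases clockwise are all mine and may not match a real meter:

import math
import cv2

def read_gauge(img, cx, cy):
    """Return the fraction of a full turn (0..1) shown by the gauge at (cx, cy)."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so two ranges are combined.
    mask = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    # findContours returns 2 or 3 values depending on the OpenCV version.
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    # The red blob that encloses the gauge center is the arrow.
    arrow = next((c for c in cnts
                  if cv2.pointPolygonTest(c, (float(cx), float(cy)), False) >= 0), None)
    if arrow is None:
        return None                 # nothing red over this gauge (dirt, bad lighting)
    # The arrow tip is the contour point farthest from the center.
    pts = arrow.reshape(-1, 2)
    tip = max(pts, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    # Angle of the center->tip line, measured clockwise from twelve o'clock
    # (image y grows downwards, hence the sign flip).
    angle = math.degrees(math.atan2(tip[0] - cx, cy - tip[1])) % 360
    return angle / 360.0

Multiplying the returned fraction by each gauge's decade (0.1 m^3, 0.01 m^3, …) and summing the results would then give the fractional part of the meter reading.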

This approach sounds extremely simple, but making everything work well together isn't that easy. OpenCV requires a lot of tuning, e.g. selecting the right red color range so that the arrows are detected reliably even when the lighting changes.

Conclusions

I learned a lot during this project, especially about OpenCV, which I had never used before. Sadly my water meter was really dirty, so I couldn't read all the values and I also got some wrong readings. So far I haven't decided what I want to use the collected data for, therefore I didn't spend much time on finding a solution for read errors and for the problems that appear when a gauge makes a full turn. An easy solution would be to just keep an internal count of the water, and whenever we are unsure about a value we can fall back to the memorized value.
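
As a rough sketch of that fallback idea (nothing like this exists in the project), each new reading could be checked against the last trusted one and discarded when it moves backwards or jumps implausibly; the threshold is an arbitrary assumption:

MAX_STEP_LITERS = 50   # arbitrary: more than this per minute is suspicious

last_good = None

def filter_reading(liters):
    """Return a trusted value, falling back to the memorized one when unsure."""
    global last_good
    if last_good is None or 0 <= liters - last_good <= MAX_STEP_LITERS:
        last_good = liters
    return last_good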

The final plot can be found here. All values are saved directly without filtering; this causes the plot to have quite some noise, but it allows changing the filter function later and adapting it to future needs.

My code is published on GitHub:

Some sources which helped me a lot, many thanks to them:

18 Jan 2020 10:22am GMT

17 Jan 2020

Planet GNOME

Tobias Bernard: Doing Things That Scale

There was a point in my life when I ran Arch, had an elaborate personalized terminal prompt, and my own custom icon theme. I stopped doing all these things at various points for different reasons, but underlying them all is a general feeling that it's taken me some time to figure out how to articulate: I no longer want to invest time in things that don't scale.

What I mean by that in particular is things that

  1. Only fix a problem for myself (and maybe a small group of others)
  2. Have to be maintained in perpetuity (by me)

Not only is it highly wasteful for me to come up with a custom solution to every problem, but in most cases those solutions would be worse than ones developed in collaboration with others. It also means nobody will help maintain these solutions in the long run, so I'll be stuck with extra work, forever.

Conversely, things that scale

  1. Fix the problem in way that will just work™ for most people, most of the time
  2. Are developed, used, and maintained by a wider community

A few examples:

I used to have an Arch GNU/Linux setup with tons of tweaks and customizations. These days I just run vanilla Fedora. It's not perfect, but for actually getting things done it's way better than what I had before. I'm also much happier knowing that if something goes seriously wrong I can reinstall and get to a usable system in half an hour, as opposed to several hours of tedious work for setting up Arch. Plus, this is a setup I can actually install for friends and relatives, because it does a decent job at getting people to update when I'm not around.

Until relatively recently I always set a custom monospace font in my editor and terminal when setting up a new machine. At some point I realized that I wouldn't have to do that if the default was nicer, so I just opened an issue. A discussion ensued, a better default was agreed upon, and voilà - my problem was solved. One less thing to do after every install. And of course, everyone else now gets a nicer default font too!

I also used to use ZSH with a configuration framework and various plugins to get autocompletion, git status, a fancy prompt etc. A few years ago I switched to fish. It gives me most of what I used to get from my custom ZSH thing, but it does so out of the box, no configuration needed. Of course ideally we'd have all of these things in the default shell so everyone gets these features for free, but that's hard to do unfortunately (if you're interested in making it happen I'd love to talk!).

Years ago I used to maintain my own extension set to the Faenza icon theme, because Faenza didn't cover every app I was using. Eventually I realized that trying to draw a consistent icon for every single third party app was impossible. The more icons I added, the more those few apps that didn't have custom icons stuck out. Nowadays when I see an app with a poor icon I file an issue asking if the developer would like help with a nicer one. This has worked out great in most cases, and now I probably have more consistent app icons on my system than back when I used a custom theme. And of course, everyone gets to enjoy the nicer icons, not only me.

Some other things that don't scale (in no particular order):

The free software community tends to celebrate custom, hacky solutions to problems as something positive ("It's so flexible!"), even when these hacks are only necessary because things are broken by default. It's nice that people with a lot of time and technical skills can fix their own problems, but the benefits from that don't automatically trickle down to everybody else.

If we want ethical technology to become accessible to more people, we need to invest our (very limited) time and energy in solutions that scale. This means good defaults instead of endless customization, apps instead of scripts, "it just works" instead of "read the fucking manual". The extra effort to make proper solutions that work for everyone, rather than hacks just for ourselves, can seem daunting, but it is always worth it in the long run. Just as with accessibility and commenting your code, the person most likely to benefit from it is you, in the future.

17 Jan 2020 9:45pm GMT

planet.freedesktop.org

Hans de Goede: Plug and play support for (Gaming) keyboards with a builtin LCD panel

A while ago, as a spin-off of my project to improve support for Logitech wireless keyboards and mice, I have also done some work on improving support for (Gaming) keyboards with a built-in LCD panel.

Specifically, if you have a Logitech MX5000, G15, G15 v2 or G510 and you want the LCD panel to show something somewhat useful, then on Fedora 31 you can now install the lcdproc package and it will automatically recognize the keyboard and show "top"-like information on it. No need to manually write an LCDd.conf or anything, this works fully plug and play:

sudo dnf install lcdproc
sudo udevadm trigger


If you have an MX5000 and you do not want the LCD panel to show "top"-like info, you may still want to install the mx5000tools package; it will automatically send the system time to the keyboard, after which it will display the time.

Once the 5.5 kernel becomes available as an update for Fedora, you will also be able to use the keys surrounding the LCD panel to control the lcdproc menus on the LCD panel. The 5.5 kernel will also export key backlight brightness control through the standardized /sys/class/leds API, so that you can control it from e.g. GNOME control-center's power settings, and you get a nice OSD when toggling the brightness level using the key on the keyboard.

The 5.5 kernel will also make the "G" keys send standard input events (evdev events). Once userspace support for the new key codes they send has landed, this will allow e.g. binding them to actions in GNOME control-center's keyboard settings. But only under Wayland, as the new keycodes are > 255 and X11 does not support this.

17 Jan 2020 1:39pm GMT

planet.freedesktop.org

Iago Toral: Raspberry Pi 4 V3D driver gets OpenGL ES 3.1 conformance

So, continuing with the news, here is a fairly recent one: as the title states, I am happy to announce that the Raspberry Pi 4 is now an OpenGL ES 3.1 conformant product! This means that the Mesa V3D driver has successfully passed a whole lot of tests designed to validate the OpenGL ES 3.1 feature set, which should be a good sign of driver quality and correctness.

It should be noted that the Raspberry Pi 4 shipped with a V3D driver exposing OpenGL ES 3.0, so this also means that, on top of all the bugfixes we implemented for conformance, the driver has gained new functionality! In particular, we merged Eric's previous work to enable Compute Shaders.

All this work has been in Mesa master since December (I believe there is only one fix missing, waiting for us to address review feedback), and it will hopefully make it to Raspberry Pi 4 users soon.

17 Jan 2020 10:02am GMT

Iago Toral: Raspberry Pi 4 V3D driver gets Geometry Shaders

I actually landed this in Mesa back in December but never got around to announcing it anywhere. The implementation passes all the tests available in the Khronos Conformance Test Suite (CTS). If you give this a try and find any bugs, please report them here with the V3D tag.

This is also the first large feature I have landed in V3D! Hopefully there will be more coming in the future.

17 Jan 2020 9:45am GMT