28 May 2020


Christian Schaller: Into the world of Robo vacuums and Robo mops

So this is a blog post not related to Fedora or Red Hat, but rather about my personal experience with getting a robo vacuum and a robo mop into the house.

So about two months ago my wife and I decided to get a robo vacuum while shopping at Costco (a US wholesaler outfit). We brought home the iRobot Roomba 980. Over the next week we ended up also getting the newer iRobot Roomba i7+ and the iRobot Braava m6 mopping robot. Our dream was that we would never have to vacuum or mop again, instead leaving that to our new robots. With two little kids, being able to cut that work from our todo list seemed like a dream come true.

I feel that whenever you get into a new technology it takes some time with your first product in that category to understand what questions to ask and what considerations to make. For instance, I feel a lot more informed and confident in my knowledge about electric cars having owned a Nissan Leaf for a few years now (enough to wish I had a Tesla instead, for instance :). I guess our experience with robot vacuums here is similar.

Anyway, if you are considering buying a robot vacuum or mop, I think the first lesson we learned is that it is definitely not a magic solution. You have to prepare your house quite a bit before each run, from obvious things like tidying up anything on the floor, like the kids' Legos, to discovering that certain furniture, like the IKEA Poang chairs, are mortal enemies of your robo vacuum. We had to put our chair on top of the sofa as the Roomba would get stuck on it every time we tried to vacuum the floor. Also, the door mat in front of our entrance door kept having its corners sucked into the vacuum, getting it stuck. So our lesson learned is that vacuuming (or mopping) is not something we can do on an impulse or easily on a schedule, as it takes quite a bit of preparation. If you don't have small kids leaving random stuff all over the house all the time you might be able to just set the vacuum on a schedule, but for us that has turned out to be a big no :). So in practice we only vacuum at night now, when my wife and I have had time to prep the house after the kids have gone to bed.

It is worth noting that we only have one vacuum now. We got the i7+ after the 980 upon realizing that the 980 lacked features like the smart map, which lets you, for instance, vacuum specific rooms. The i7+ also had other niceties like self-emptying, and it was supposed to be quieter (which is nice when you run it at night). However, in our experience it also had weaker suction, so we felt it left more crap on the floor than the older 980 model. So in the end we returned the i7+ in favour of the 980, simply because we felt it did a better job at vacuuming. It is quite loud though, so we can hear it loud and clear up on the second floor while trying to fall asleep. If you need a quiet house to sleep, this setup is not for you.

Another lesson we learned is that the vacuums and mops do not work great in darkness, so we now have to leave the light on downstairs at night when we want to vacuum or mop the floor. We should be able to automate that using Google Home, so that it turns on the lights, starts the vacuum and then, once done, turns the lights off again. We haven't actually gotten around to testing that yet though.

As for the mop, I would say that it is not a replacement for mopping yourself, but it can reduce the frequency of manual mopping and thus help maintain a nice clean floor for longer after you have done a full manual mop. Also, the m6 is super sensitive to edges, which I assume is to keep it from trying to mop your rugs and mats, but it also means that it cannot traverse even small thresholds. So for us, with small thresholds between our kitchen area and the rest of the house, we have to carry the mop over the threshold and mop the rest of the first floor as a separate action, which is a bit of an annoyance now that we are running these things at night. That said, the kitchen is the one room which needs mopping most regularly, so in some sense the current setup, where the Roomba vacuums the whole first floor and the Braava mops just the kitchen, is a workable solution for us. One nice feature here is that they can be set up to run in order, so the mop will only start once the vacuum is done (that feature is the main reason we haven't tested other brands of mops, which might handle the threshold situation better).

So to conclude, would I recommend robot vacuums and robot mops to other parents with young kids? I would say yes; they have definitely helped us keep the house cleaner and nicer and let us spend less time cleaning. But they are not a miracle cure in any way, shape or form; it still takes time and effort to prepare and set up the house, and sometimes you still need to do the mopping in particular yourself to get things really clean. As for the question of iRobot versus other brands, I have no input as I haven't really tested any other brands. iRobot is a local company, so their vacuums are available in a lot of stores around me and I drive by their HQ on a regular basis, which is the more or less random reason I ended up with their products as opposed to competing ones.

28 May 2020 4:37pm GMT

25 May 2020


Roman Gilg: Wrapland redone

The KWinFT project with its two major open source offerings KWinFT and Wrapland was announced one month ago. This made quite a few headlines back then, but I decided to keep quiet afterwards and push the project forward silently on the technical side.

Now I am pleased to announce the release of a beta version for the next stable release 5.19, due in two weeks. The highlights of this release are a complete redesign of Wrapland's server library and two more projects joining KWinFT.

Wrapland Server library redesign

One of the goals of KWinFT is to facilitate large, disruptive changes to the internal structure and technological base of its open source offerings. As mentioned one month ago in the project announcement, these changes include pushing back the usage of Qt in lower-level libraries and instead making use of modern C++ to its full potential.

We achieved the first milestone on this route in an impressively short timeframe: the redesign of Wrapland's server library, which improves the encapsulation of external libwayland types and provides template-enhanced meta-classes that make it easy to extend the library with new functionality in the future.

This redesign work was organized on a separate branch and merged this weekend into master. In the end it comprised over 200 commits and 40,000 changed lines. Here I have to thank in particular Adrien Faveraux, who joined KWinFT shortly after its announcement and contributed several class refactors. Our combined work enabled us to deliver this major redesign as early as the upcoming release.

Aside from the redesign, I used this opportunity to add clang-based tools for static code analysis: clang-format and clang-tidy. Together with our autotests, which run with and without sanitizers, Wrapland's CI pipelines now provide efficient means for handling contributions via GitLab merge requests and for checking back on the result after a merge. You can see a full pipeline with linters, static code analysis, project build and autotests passing directly in the project.
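If you want to run the same checks locally before opening a merge request, the usual invocations look roughly like this (a sketch; the directory and file paths are placeholders, and the actual check sets come from the project's .clang-format and .clang-tidy files):

   # Reformat all C++ sources in place according to the project's .clang-format.
   find src \( -name '*.cpp' -o -name '*.h' \) | xargs clang-format -i

   # Generate a compilation database so clang-tidy knows the compile flags.
   cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

   # Run the configured static analysis checks on a single file.
   clang-tidy -p build src/server/display.cpp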

New in KWinFT: Disman and KDisplay

With this release, Disman and KDisplay join the KWinFT project. Disman is a fork of libkscreen and KDisplay a fork of KScreen. KScreen is the main UI in a KDE Plasma workspace for display configuration, and I was its main contributor and maintainer over the last years.

Disman can be installed in parallel with libkscreen. For KDisplay, on the other hand, it is recommended to remove KScreen when KDisplay is installed, so that the two daemons do not try to meddle with the display configuration at the same time. KDisplay can make use of plugins for KWinFT, KWin and wlroots, so you could also use KDisplay as a general replacement.

Forking libkscreen and KScreen to Disman and KDisplay was an unwanted move from my side, because I would have liked to keep maintaining them inside KDE. But my efforts to integrate KWinFT with them were not welcomed by some members of the Plasma team. Form your own opinion by reading the discussion in the patch review.

I am not happy about this development, but I decided to make the best out of a bad situation, and after forking and rebranding I directly created CI pipelines for both projects, which now also run linters, project builds and autotests on all merge requests and branch pushes. And I defined some more ambitious goals for the two projects now that I have more control.

One would think that after years of being the only person putting real work into KScreen I would have had this kind of freedom inside KDE as well, but that is not how KDE operates.

Does it need to be this way? What are arguments for and against it? That is a discussion for another article in the future.

Very next steps

There is an overall technical vision I am following with KWinFT: building a modern C++ framework for Wayland compositor creation, a framework built up from small, independent yet well-interacting libraries.

Take a look at this task for an overview. The first of these libraries that we have put work into is Wrapland. I plan for the next one to be the backend library that provides interfacing capabilities with the kernel or a host window system the compositor runs on, which in most cases means talking to the Direct Rendering Manager.

The work on Wrapland is not finished though. Now that the basic representation of Wayland objects has been improved, we can push further by layering the server library as this task describes. The end goal here is to get rid of the Qt dependency and make it an optional facade only.

How to try out or contribute to KWinFT

You can try out KWinFT on Manjaro. At the moment you can install KWinFT and its dependencies on Manjaro's unstable images, but it is planned to make this possible on the stable images as well with the upcoming 5.19 stable release.

I explicitly recommend the Manjaro distribution nowadays to any user, from Linux newcomers to experts. I have Manjaro running on several devices and I am very pleased with Manjaro's technology, its development pace and its community.

If you are an advanced user you can also use Arch Linux directly and install a KWinFT AUR package that builds KWinFT and its dependencies directly from Git. I hope a package of KWinFT's stable release will soon be available from Arch's official repositories as well.
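For reference, the usual AUR flow looks roughly like this (a sketch; I am assuming the package name kwinft-git here, so check the AUR for the actual name):

   # Clone the AUR repository for the package and enter it.
   git clone https://aur.archlinux.org/kwinft-git.git
   cd kwinft-git

   # Build from Git and install, pulling in the required dependencies.
   makepkg -si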

If you want to contribute to one of the KWinFT projects, take a look at the open tasks and come join us in our Gitter channel. I am very happy that several people have already joined the project to provide QA feedback and patches. There are also opportunities to work on DevOps, documentation and translations.

I am hoping KWinFT will be a welcoming place for everyone interested in high-quality open source graphics technology. A place with low entry barriers and many opportunities to learn and grow as an engineer.

25 May 2020 10:00am GMT

20 May 2020


Melissa Wen: Everyone makes a script

Meanwhile, in GSoC:

I took the second week of Community Bonding to make some improvements to my development environment. As I have reported before, I use a QEMU VM to develop kernel contributions. I was initially using an Arch VM for development; however, at the beginning of the year I reconfigured it to use a Debian VM, since my host is a Debian installation - fewer context changes. In that move some ends were left loose and I had made a few workarounds, so it was better to round things off properly.

I also use kworkflow (kw) to ease most of the non-coding tasks included in day-to-day Linux kernel work. Kw automates repetitive steps of a developer's life, such as compiling and installing my kernel modifications, finding the information needed to format and send patches correctly, and mounting or remotely accessing a VM. During the time that preceded the GSoC project submission, I noticed that the feature for installing a kernel inside the VM was incomplete. At that time, I started to use the "remote" option as a stopgap. Therefore, I spent the last days learning more features and hacking kworkflow to improve my development environment (and sending the changes back to the kw project).
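To give a flavor of the workflow, a typical session looks roughly like this (a sketch based on kw's subcommands as I understand them; exact names and flags may differ between kw versions, and the search string is a placeholder):

   # Compile the kernel with the options from the kworkflow.config file.
   kw build

   # Install the freshly built kernel and modules into the target VM.
   kw deploy

   # Mount the VM's disk image on the host to inspect or copy files.
   kw mount

   # Search the source tree for a string (the feature extended below).
   kw explore "some_function"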

I started by sending a minor fix to an alert message:

kw: small issue on u/mount alert message

Then I expanded the "explore" feature - which looks for a string in directory contents - by adding the GNU grep utility in addition to the already used git grep. I gathered many helpful suggestions for this patch and applied them, together with the reviews received, in a new version:

src: add grep utility to explore feature
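For context, this is the difference the patch is about (a sketch; the pattern and path are placeholders):

   # git grep only searches files tracked by git.
   git grep -n "some_string" -- drivers/gpu/drm/

   # GNU grep also covers untracked files and directories outside a git repository.
   grep -rn "some_string" drivers/gpu/drm/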

Finally, after many hours of searching, reading and learning a little about guestfish, GRUB, initramfs-tools and bash, I could create the first proposal of code changes that enable kw to automate building and installing a kernel in the VM:

add support for deployment in a debian-VM

The main barrier to this feature was figuring out how to update GRUB on the VM without running the update-grub command via SSH access. First, I thought about adding a custom file with a new boot entry. Thinking and researching a little more, I realized that guestfish could solve the problem, and following this logic I found a blog post describing how to run "update-grub" with guestfish. From that, I made some adaptations that led to the solution.
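The core of the trick looks roughly like this (a sketch under the assumption that the VM disk is a local qcow2 image at a placeholder path; guestfish's "command" runs a binary from the guest's own filesystem inside the libguestfs appliance, and the actual kw code needs some adaptations beyond this one-liner):

   # Open the disk image read-write, auto-mount the guest filesystems (-i),
   # and run the guest's own update-grub to regenerate the boot entries.
   guestfish --rw -a ~/vms/debian.qcow2 -i command "update-grub"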

However, in addition to updating GRUB, the feature still lacked some steps to properly install the kernel on the VM. I figured out the missing code by visiting an old FLUSP tutorial that describes step by step how to compile and install the Linux kernel inside a VM. I also used the implementation of the "remote" mode of "kw deploy" to wrap things up.

Now I use kw to automatically compile and install a custom kernel on my development VM. So, time to sing: "Ooh, that's why I'm easy; I'm easy as Sunday morning!"

Maybe not now. It's time to learn more about IGT tests!

20 May 2020 6:00pm GMT

09 Nov 2011


Cool new stuff in CMake 2.8.6 (2): pkg-config compatible mode added for use e.g. with autotools

After introducing the automoc feature in my last blog, here comes the next part of this series. More will follow.

The new --find-package mode of CMake

Typically, in projects which are built using autotools or handwritten Makefiles, the tool pkg-config is used to find out whether and where a library used by the software is installed on the current system; it prints the respective command line options for the compiler to stdout.

Since CMake 2.8.6, CMake too can be used, in addition to or instead of pkg-config, to find installed libraries in such projects.

With version 2.8.6, CMake features the new command line flag --find-package. When called in this mode, CMake produces results compatible with pkg-config and can thus be used in a similar way.

E.g. to get the compiler command line arguments for compiling an object file, it can be called like this:

   $ cmake --find-package -DNAME=LibXml2 -DLANGUAGE=C -DCOMPILER_ID=GNU -DMODE=COMPILE
   -I/usr/include/libxml2
   $

To get the flags needed for linking, do

   $ cmake --find-package -DNAME=LibXml2 -DLANGUAGE=C -DCOMPILER_ID=GNU -DMODE=LINK
   -rdynamic -lxml2
   $

As you can see, the resulting flags are printed to stdout.

The required parameters, as used in the examples above, are NAME (the package name as it would be given to find_package()), LANGUAGE (e.g. C or CXX), COMPILER_ID (e.g. GNU) and MODE (COMPILE or LINK).

So, you can insert calls like the above in your hand-written Makefiles.
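For instance, a hand-written Makefile or shell script could capture the flags like this (a sketch reusing the LibXml2 example from above; foo.c is a placeholder):

   # Query CMake for the compile and link flags of LibXml2.
   CFLAGS="$(cmake --find-package -DNAME=LibXml2 -DLANGUAGE=C -DCOMPILER_ID=GNU -DMODE=COMPILE)"
   LIBS="$(cmake --find-package -DNAME=LibXml2 -DLANGUAGE=C -DCOMPILER_ID=GNU -DMODE=LINK)"

   # Use them to compile and link as usual.
   gcc $CFLAGS -c foo.c -o foo.o
   gcc foo.o $LIBS -o foo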
For using CMake in autotools-based projects, you can use cmake.m4, which is now also installed by CMake.
This is used similarly to the pkg-config m4 macro, except that it uses CMake internally instead of pkg-config. So your configure.in could look something like this:

   ...
   PKG_CHECK_MODULES(XFT, xft >= 2.1.0, have_xft=true, have_xft=false)
   if test $have_xft = "true"; then
      AC_MSG_RESULT(Result: CFLAGS: $XFT_CFLAGS LIBS: $XFT_LIBS)
   fi

   CMAKE_FIND_PACKAGE(LibXml2, C, GNU)
   AC_MSG_RESULT(Result: CFLAGS: $LibXml2_CFLAGS LIBS: $LibXml2_LIBS)
   ...

This will define the variables LibXml2_CFLAGS and LibXml2_LIBS, which can then be used in the Makefile.in/Makefiles.

What does that mean for developers of CMake-based libraries?

You don't have to install pkg-config .pc files anymore; just install a Config.cmake file for CMake, and CMake-based as well as autotools-based or any other projects can make use of your library without problems.
Documentation on how this is done can be found here:

What does that mean for developers working on e.g. autotools-based projects and using a project built with CMake?

Take a look at the cmake_find_package() m4 macro, installed since CMake 2.8.6 in share/aclocal/cmake.m4; it contains documentation and will help you use that library.
Thanks go to Matthias Kretz of Phonon fame, now working on HPC stuff, who wrote cmake.m4 from scratch (which was necessary since it had to be BSD-licensed in order to be included in CMake).

Internals

Internally, CMake basically executes a find_package() with the given name, turns the results into command line options for the compiler and prints them to stdout.
This means it works for essentially all packages for which a FindFoo.cmake file exists or which install a FooConfig.cmake file.
There is one issue though: FindFoo.cmake files which execute try_compile() or try_run() commands internally are not supported, since that would require setting up and testing the complete compiler toolchain.
It works best for libraries which install a FooConfig.cmake file, since in these cases nothing has to be detected; all the information is already there.

All this is still very new and has not yet seen wide real-world testing.
So if you use it and find issues, or have suggestions on how to improve it, please let me know.

Alex

09 Nov 2011 9:15pm GMT

Debugging nepomuk/virtuoso’s CPU usage

There has been a lot of bug fixing regarding Nepomuk and its indexing. However, you might still see high CPU usage. Reporting this is of little use unless you can at least give some info about what is happening.

So what you can do is query Virtuoso's status. On openSUSE it works like this: first find the .ini file currently in use to get the port Virtuoso is listening on, then connect to Virtuoso and finally query it for its status and running statements. The latter are unfortunately truncated, so I would appreciate a hint on how to get around that.

ps aux | grep virtuoso

finds /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_T18122.ini +wait

grep Port /tmp/virtuoso_T18122.ini

finds ServerPort=1111

isql-vt localhost:1111 dba dba

which connects to virtuoso

status();

which shows you some info and which queries are keeping the process busy. isql-vt is part of the virtuoso-server package, but it may be that only recent packages ship it and older packages lack the tools.
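Put together, the whole dance can be scripted roughly like this (a sketch assuming the openSUSE setup and the default dba/dba credentials from above):

   #!/bin/sh
   # Find the config file from the running virtuoso-t command line
   # (the [v] keeps grep from matching its own process).
   INI=$(ps aux | grep '[v]irtuoso-t' | grep -o '/tmp/virtuoso[^ ]*\.ini')

   # Extract the SQL server port; take the first match, since the
   # HTTP server section may define a ServerPort of its own.
   PORT=$(grep '^ServerPort' "$INI" | head -n1 | cut -d= -f2)

   # Connect and ask for the server status and running statements.
   echo 'status();' | isql-vt localhost:"$PORT" dba dba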

Further you can do the following:

If you have any other hints regarding this piece of software, feel free to mention them and I will add them to the post.


09 Nov 2011 3:10pm GMT

Help KDE e.V. secure funding for a sprint with just a few clicks

Some weeks ago Lydia blogged about a German bank giving away 1000 euros each to the 1000 associations that gather the most votes. Well, until four days ago we were at position 320; now we are at 735 and falling. Please read Lydia's post about how to vote and help KDE e.V.; it is just a few clicks.

It surprises me that KDE e.V. has had only 3652 votes so far. If each person voted three times, which is allowed by the rules, that would give ~1218 people. Are there only 1218 people using KDE in the world?

Only the poll page is in German; the poll itself is not limited to German citizens. Anybody can vote, and according to the rules you can vote three times with your e-mail address.

09 Nov 2011 12:22pm GMT