03 Jul 2015

feedPlanet KDE

GSoC ’15 Post #3: Install-ed!

After familiarising myself with PackageKit-Qt last week, I started working on a small application that uses it this week. The aim was simple: to create an application that uses PackageKit to install packages. Thanks to the detailed guides here (the PackageKit reference) and here (the PackageKit-Qt API docs), both pointed out by ximion, I was able to build a KF5 application that takes user input, asks for a password, and installs the application the user typed in. The application can be found here (git).

The Application

(screenshot)

The application has a simple interface - a lineEdit and two pushButtons.
Once the user input has been stored into a QString variable (this is the package name), the next step is to resolve the name to a package ID. The package ID is basically the package name with some more data (related to the system on which the package is being installed). For example, the package ID for the package geany turns out to be:

geany;1.24.1+dfsg-1build1;amd64;vivid
geany;1.24.1+dfsg-1build1;i386;vivid

To resolve the package names to package IDs, PackageKit provides a function named resolve. Resolve emits package IDs; feed those to the install function (packageInstall) and the packages get installed on the system. That's it. All you need to know to build your application is the functions and what they emit.
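
To make the flow concrete, here is a minimal sketch (not the code of the app linked above) of resolving a name and then installing the resulting package IDs with PackageKit-Qt. The include paths, filter value and exact signal signatures are assumptions and may differ slightly between PackageKit-Qt versions.

// Minimal sketch of the resolve-then-install flow with PackageKit-Qt.
// Assumptions: PackageKit-Qt (Qt5) convenience headers and the package()/
// finished() signal signatures shown here; check the API docs for your version.
#include <PackageKit/Daemon>
#include <PackageKit/Transaction>
#include <QObject>
#include <QSharedPointer>
#include <QStringList>

using namespace PackageKit;

void installByName(const QString &packageName)
{
    // Step 1: resolve the human-readable name to package IDs.
    Transaction *resolve = Daemon::resolve(packageName, Transaction::FilterNewest);
    auto ids = QSharedPointer<QStringList>::create();

    QObject::connect(resolve, &Transaction::package,
                     [ids](Transaction::Info, const QString &packageID, const QString &) {
        ids->append(packageID); // e.g. "geany;1.24.1+dfsg-1build1;amd64;vivid"
    });

    // Step 2: once resolving has finished, feed the IDs to the install call.
    // PolicyKit asks for the password at this point.
    QObject::connect(resolve, &Transaction::finished,
                     [ids](Transaction::Exit status, uint /*runtime*/) {
        if (status == Transaction::ExitSuccess && !ids->isEmpty()) {
            Daemon::installPackages(*ids);
        }
    });
}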

Next Up

Now, I have all the tools ready to start working on the applications.

Next, I'm working on Dolphin to integrate PackageKit into it to install Samba. I have run into some building issues, but hopefully they'll be solved soon and once that's done, I'll just have to replicate what I did in the above application there.

03 Jul 2015 7:07am GMT

02 Jul 2015

feedPlanet KDE

Pointing devices KCM: update #2

For general information about the project, look at this post

Originally I planned to work on the KCM UI at this time. But as I am unsure how it should look, I started a discussion on the VDG forum and decided to switch to other tasks.

Currently the KCM looks like this:

KCM screenshot

The KDED module is, I think, almost complete. It can apply settings from the configuration file, and has a method exported to D-Bus to reload the configuration for all devices or for a specific device. Of course, it also applies settings immediately when a device is plugged in. The only thing missing is auto-disabling of some devices (like disabling the touchpad when there's an external mouse).

As usual, here is a link to the repository.

Also, I started working on a D-Bus API for KWin. The API will expose most of libinput's configuration settings. Currently it lists all available devices and some of their most important read-only properties (like name, hardware IDs, capabilities), and allows enabling/disabling tap-to-click as an example of a writable property. As I already said, the KCM isn't ready yet, but I was able to enable tap-to-click on my touchpad using qdbusviewer.
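
For illustration, here is a hypothetical sketch of doing the same tap-to-click toggle from code instead of qdbusviewer, by setting a D-Bus property on a device object. The service, object path, interface and property names are placeholders made up for the example; the actual KWin API is still being worked on.

// Hypothetical sketch: set a writable libinput device property over D-Bus.
// All org.kde.KWin names below are placeholders, not the final API.
#include <QDBusConnection>
#include <QDBusMessage>
#include <QDBusVariant>
#include <QString>
#include <QVariant>

bool setTapToClick(const QString &deviceSysName, bool enabled)
{
    QDBusMessage msg = QDBusMessage::createMethodCall(
        QStringLiteral("org.kde.KWin"),                               // placeholder service
        QStringLiteral("/org/kde/KWin/InputDevice/") + deviceSysName, // placeholder path
        QStringLiteral("org.freedesktop.DBus.Properties"),
        QStringLiteral("Set"));
    msg << QStringLiteral("org.kde.KWin.InputDevice")                 // placeholder interface
        << QStringLiteral("tapToClick")                               // placeholder property
        << QVariant::fromValue(QDBusVariant(enabled));

    const QDBusMessage reply = QDBusConnection::sessionBus().call(msg);
    return reply.type() != QDBusMessage::ErrorMessage;
}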

My kwin repo clone is here, branch libinput-dbusconfig

02 Jul 2015 11:00pm GMT

Fiber UI Experiments – Conclusion?

It's been one heckuva road, but I think the dust is starting to settle on the UI design for Fiber, a new web browser which I'm developing for KDE. After some back-and-forth from previous revisions, there are some exciting new ideas in this iteration! Please note that this post is about design experiments; the development status of the browser is still very low-level and won't reach the UI stage for some time. These experiments are being done now so I can better understand the structure of the browser as I program around a heavily extension-based UI, so that when I do solidify the APIs, we have a rock-solid foundation.

Just as an aside before I get started: just about any time I mention "QML", there is a chance that whatever is being driven could alternatively use HTML. I'm looking into this, but make no guarantees.

As a recap of the previous experiments, one of the biggest things that became very clear from feedback was that the address bar isn't going away and I'm not going to hide it. I was a sad panda, but there are important things the address bar provides which I just couldn't work around. Luckily, I found some ways to improve upon the existing address bar ideology via aggressive use of extensions, and slightly different usage compared to how contemporary browsers embed extensions into the input field - so let's take a look at the current designs:

tabsOnSide tabsOnBottom
By default, Fiber will have either "Tabs on Side" or "Tabs on Bottom"; this hasn't been decided yet, but there will also be a "Tabs on Top" option (which I have decided will not be default for a few reasons). Gone is the search box as it was in previous attempts - replaced with a proper address bar which I'm calling "Multitool" - and here's why I'm a little excited about it:

Multitool

Fiber is going to be an extensions-based browser. Almost everything will be an extension, from basic navigational elements (back, forward) to the bookmarks system - and all will be either disable-able or replaceable. This means every button, every option, every utility will be configurable. I've studied how other browsers embed extensions in the address bar, and none of them really integrate with it in a meaningful and clearly defined way. Multitool is instead getting a well-defined interface for extensions which make use of the bar:

Extensions which have searchable or traversable content will be candidates for extending into the Multitool, which includes URL entry, search, history, bookmarks, downloads, and other things. Since these are extensions with a well-defined API, you will be able to ruthlessly configure what you want or don't want to show up, and only the URL entry will be set in stone. Multitool extensions will have 3 modes which you can pick from: background, button, and separate.

Background extensions will simply provide additional results when typing into the address bar. By default, this will be the behaviour of things like current tabs, history, and shortcut-enabled search. Button extensions in Multitool will provide a button which, when clicked, takes over the Multitool, offering a focused text input and an optional QML-based "home popout". Lastly, "separate" extensions will split the text input, offering something similar to a separate text widget - only integrated into the address bar.

The modes and buttons will be easily configurable, so you can choose to have extensions simply be active in the background, turn on the buttons, or disable them entirely. Think of this as applying KRunner logic to a browser address bar, only with the additional ability to perform "focused searches".

bookmarkshome

Shown on the right side of the Multitool are the two extensions with dedicated buttons: bookmarks and search, which will be the default rollout. When you click on those embedded buttons they will take over the address bar and you may begin your search. These tools will also be able to specify an optional QML file for their "home" popout. For example, the Bookmarks home popout could be a speed-dial UI, and History could be a time-machine-esque scroll-through. Seen above is a speed dial popout. With Bookmarks and Search being in button mode by default, just about everything else that performs local searches will be in "background mode", except keyword-based searches, which will be enabled but will require configuration. Generally, the address portion of Multitool will NOT out-of-the-box beam what you type to a 3rd party, but the search extension will. I have not selected search providers.

We also get a two-for-one deal for fast filtering, since the user is already aware they have clicked on a text entry. Once you pick a selection from a focused search or cancel, the bar will snap back into address mode. If you hit "enter" while doing a focused search, it will simply open a tab with the results of that search.

Aside from buttons, all the protocol and security information relevant to the page (the highlighted areas on the left) will also be extension-driven. Ideally, this will let you highly customise what warnings you get, and will also let extensions tie any content-altering behaviour into proper warnings. For example, the ad-blocker may broadcast the number of zapped ads. When clicked, the extensions will use QML-driven popouts.

Finally, the address itself (and any focused extension searches) will have extension-driven syntax highlighting. Right now I'm thinking of using a monospace font so we can drive things like bold fonts without offsetting text.

Tabs

Tab placement was a big deal to people; some loved the single-row approach, others wanted a more traditional layout. The solution to the commotion was the fact that there isn't a single solution. Tabs will have previews and simple information (as seen in the previous round of designs), so by default tabs will be on the bottom or side so the previews don't obstruct unnecessary amounts of UI.

tabsontop

Fiber will have 3 tabbing options: tabs on top, tabs on bottom, and tabs on side. When tabs are "on side", it will reduce the UI to one toolbar and place the tabs on the same row as the Multitool, which should also trigger a "compressed" layout for the Multitool.

There will be the usual "app tab" support of pinning tabs, but not shown here are tab-extensions. Tab extensions will behave like either app tabs or traditional tabs, and will be QML-powered pages from extensions. These special tabs will also serve as home-screen or new-tab options, and that is, largely, their purpose; but clever developers may find a use in having extension-based pages.

Tabs can also embed simple toggle-buttons, as usual, powered by extensions. Main candidates for these will be mute buttons or reader-mode buttons. There won't be much to these buttons, but they will be content-sensitive and extensions will be required to provide the logic for when these buttons should be shown. For example, "reader mode" won't be shown on pages without articles, and "mute" won't be shown on pages without sound.

Current Progress

The current focus in Fiber is Profiles, Manifest files, and startup. Profiles will be the same as Firefox profiles, where you can have separate profiles with separate configurations. When in an activities-enabled environment, Fiber Profiles will attempt to keep in sync with the current activity - otherwise they will fall back to having users open a profile tool.

The manifest files are a big deal, since they define how extensions will interact with the browser. Fiber manifest files were originally based on a slimmed-down Chrome manifest with more "Qt-ish" syntax (like CamelCase); but with the more extensive extension plans and placement options there's more going on with interaction points. There's a decent manifest class, and it provides a reliable interface to read from, including things like providing missing defaults and offering some debugging info which will be used in Fiber's extension development tools.

I'm using DBus for Fiber to check a few things on startup; Fiber will be a "kind of" single-instance application, but individual profiles will be separate processes. DBus is being used to speak with running instances to figure out what it should do. The idea behind this setup is to keep instances on separate activities from spiking each other, but to still allow easier communication between windows of a single instance - this should help things like tab dragging between windows immensely. It also gives the benefit that you could run "unstable" extensions in a separate instance, which will be good for development purposes.

I wish I could say development is going quickly, but right now my time is a bit crunched; either way things are going smoothly, and I'd rather be slow and steady than fast and sloppy.

Development builds will be released in the future (still a long way away) which I'll be calling "Copper" builds. Copper builds will mostly be a rough and dirty way for me to test UI, and will not be stable or robust browsers. Mostly, it'll be for the purpose of identifying annoying UI patterns and nipping them before they get written into extensions.


02 Jul 2015 9:13pm GMT

Joining the press – Which topic would you like to read about?

When I saw an ad on Linux Veda that they are looking for new contributors to their site, I thought "Hey, why shouldn't I write for them?". Linux Veda (formerly muktware) is a site that offers Free Software news, how-tos, opinions, reviews and interviews. Since its founder Swapnil Bhartiya is personally a big KDE fan, the site has a track record of covering our software extensively in its news and reviews, and has already worked with us in the past to make sure their articles about our software or community were factually correct (and yes, it was only fact-checking, we never redacted their articles or anything).

Therefore, I thought that a closer collaboration with Linux Veda could be mutually beneficial: Getting exclusive insights directly from a core KDE contributor could give their popularity an additional boost, while my articles could get an extended audience including people who are currently interested in Linux and FOSS, but not necessarily too much interested in KDE yet.

I asked Swapnil if I could write for him. He said it would be an honor to work with me, which I must admit made me feel a little flattered. So I joined Linux Veda as a freelance contributor.

My first article actually isn't about anything KDE-related, but a how-to for getting 1080p videos to work in YouTube's HTML5 player in Firefox on Linux, mainly because I had just explained it to someone and felt it might benefit others as well if I wrote it up.

In the future you will mainly read articles about KDE-related topics from me there. Since I'm not sure which topics people would be most interested in, I thought I'd ask you, my dear readers. You can choose which of the three topics that came to my mind I should write about, or add your own ideas. I'm excited to see which topic will win!

Take Our Poll

Of course this doesn't mean I won't write anything in my blog here anymore. I'll decide on a case-by-case basis if an article would make more sense here or over at Linux Veda. I hope you'll find my articles there interesting and also read some of the other things they have on offer, you'll find many well-written and interesting articles there!


Filed under: KDE

02 Jul 2015 8:26pm GMT

KDEPIM report (week 26)

My focus was KAddressBook last week.

KAddressBook:

It's not a complicated application, so I didn't find a lot of bugs. And indeed, since I maintain it, I had already fixed the critical bugs.

Some fixes:

Other work in KDEPIM:

Future:

Next week I will focus on KNotes and Kleopatra.

Other info:

Dan merged his work on replacing the text protocol with a binary protocol for Akonadi. I can confirm it's faster.

He worked a lot on improving speed. Now KMail is very fast :)

I hope that he will add more speed patches :)

Sergio continued to add some optimizations in kdepimlibs/kdepim/akonadi!

02 Jul 2015 7:41pm GMT

KStars Observers Management patched

This update is a little break from my current GSoC project, so I won't talk about my progress just yet. Instead, I will talk about the observer management dialog currently in KStars. Basically, an observation session requires observer information like first name, last name and contact. Currently, an observer can be added only from the settings menu, so I thought it would be more intuitive if this functionality were placed in a more appropriate spot and a proper GUI were implemented for a better user experience.

This is how the new observer management dialog looks:

observermanagement

Now, the user has a heads-up display of how many observers are currently in the database and can manage that information.

Regarding GSoC, I am now working on the main Scheduler logic. I will come back with an update as soon as possible. Stay tuned :D


02 Jul 2015 12:29pm GMT

01 Jul 2015

feedPlanet KDE

Convergence through Divergence

It's that time of the year again, it seems: I'm working on KPluginMetaData improvements.

In this article, I am describing a new feature that allows developers to filter applications and plugins depending on the target device they are used on. The article targets developers and device integrators and is of a very technical nature.

Different apps per device

This time around, I'm adding a mechanism that allows us to list plugins, applications (and the general "service") specific for a given form factor. In normal-people-language, that means that I want to make it possible to specify whether an application or plugin should be shown in the user interface of a given device. Let's look at an example: KMail. KMail has two user interfaces, the desktop version, a traditional fat client offering all the features that an email client could possibly have, and a touch-friendly version that works well on devices such as smart phones and tablets. If both are installed, which should be shown in the user interface, for example the launcher? The answer is, unfortunately: we can't really tell as there currently is no scheme to derive this information from in a reliable way. With the current functionality that is offered by KDE Frameworks and Plasma, we'd simply list both applications, they're both installed and there is no metadata that could possibly tell us the difference.

Now the same problem applies to not only applications, but also, for example to settings modules. A settings module (in Frameworks terms "KCM") can be useful on the desktop, but ignored for a media center. There may also be modules which provide similar functionality, but for a different use case. We don't want to create a mess of overlapping modules, however, so again, we need some kind of filtering.

Metadata to the rescue

Enter KPluginMetaData. KPluginMetaData gives information about an application, a plugin or something like this. It lists name, icon, author, license and a whole bunch of other things, and it lies at the base of things such as the Kickoff application launcher, KWin's desktop effects listing, and basically everything that's extensible or uses plugins.

I have just merged a change to KPluginMetaData that allows all these things to specify what form factors they are relevant and useful for. This means that you can install for example KDevelop on a system that can be either a laptop or a mediacenter, and an application listing can be adapted to only show KDevelop when in desktop mode, and to skip it in media center mode. This is of great value when you want to unclutter the UI by filtering out irrelevant "stuff". As this mechanism is implemented at the base level, KPluginMetaData, it's available everywhere, using the exact same mechanism. When listing or loading "something", you simply check if your current formfactor is among the suggested useful ones for an app or plugin, and based on that you make a decision whether to list it or skip it.

With increasing convergence between user interfaces, this mechanism allows us to adapt the user interface and its functionality in a fully dynamic way, and reduces clutter.

Getting down and dirty

So, how does this look exactly? Let's take KMail as example, and assume for the sake of this example that we have two executables, kmail and kmail-touch. Two desktop files are installed, which I'll list here in short form.

For the desktop fat client:

[Desktop]
Name=Email
Comment=Fat-client for your email
Exec=kmail
FormFactors=desktop

For the touch-friendly version:

[Desktop]
Name=Email
Comment=Touch-friendly email client
Exec=kmail-touch
FormFactors=handset,tablet

Note that the "FormFactors" key does not just take one fixed value, but allows specifying a list of values - an application may support more than one form-factor. This is reflected throughout the API with the plural form being used. Now the only thing the application launcher has to do is to check if the current form-factor is among the supplied ones, for example like this:

foreach (const KPluginMetaData &app, allApps) {
    if (app.formFactors().count() == 0 || app.formFactors().contains("desktop")) {
        shownAppsList.append(app);
    }
}

In this example, we check if the plugin metadata does specify the form-factor by counting the elements, and if it does, we check whether "desktop" is among them. For the above mentioned example files, it would mean that the fat client will be added to the list, and the touch-friendly one won't. I'll leave it as an exercise to the reader how one could filter only applications that are specifically suitable for example for a tablet device.

What devices are supported?

KPluginMetaData does not itself check if any of the values make sense. This is done by design because we want to allow for a wide range of form-factors, and we simply don't know yet which devices this mechanism will be used on in the future. As such, the values are free-form and part of the contract between the "reader" (for example a launcher or a plugin listing) and the plugins themselves. There are a few commonly used values already (desktop, mediacenter, tablet, handset), but in principle, adding new form-factors (such as smartwatches, toasters, spaceships or frobulators) is possible, and part of its design.

For application developers

Application developers are encouraged to add this metadata to their .desktop files. Simply adding a line like the FormFactors one in the above examples will help to offer the application on different devices. If your application is desktop-only, this is not really urgent, as in the case of the desktop launchers (Kickoff, Kicker, KRunner and friends), we'll likely use a mechanism like the above: no form-factors specified means: list it. For devices where most of the applications to be found will likely not work, marking your app with a specific FormFactor will increase the chances of it being found. As applications are adapted to respect the form-factor metadata, its usefulness will increase. So if you know your app will work well with a remote control, add "mediacenter", if you know it works well on touch devices with a reasonably sized display, add "tablet", and so on.

Moreover…

We now have basic API, but nobody uses it (a chicken-and-egg situation, really). I expect that one of the first users of this will be Plasma Mediacenter. Bhushan is currently working on the integration of Plasma widgets into its user interface, and he has already expressed interest in using this exact mechanism. As KDE software moves onto a wider range of devices, this functionality will be one of the cornerstones of the device-adaptable user interface. If we want to use device UIs to their full potential, we do not just need converging code, we also need to add divergence features to allow benefiting from the difference of devices.

01 Jul 2015 10:53pm GMT

Hello Red Hat

As I mentioned in my last post, I left my previous employer after quite some years; since July 1st I work for Red Hat.

In my new position I will be a Solutions Architect - so basically a sales engineer, thus the one talking to the customers on a more technical level, providing details or proof of concepts where they need it.

Since it's my first day, I don't really know how it will be, but I'm very much looking forward to it; it's an amazing opportunity! =)


Filed under: Business, Fedora, Linux, Politics, Technology, Thoughts

01 Jul 2015 10:35pm GMT

The Kubuntu Podcast Team is on a roll


Building on their UOS Hangout, the Kubuntu Podcast Team has created their second Hangout, featuring Ovidiu-Florin Bogdan, Aaron Honeycutt, and Rick Timmis, discussing What is Kubuntu?

01 Jul 2015 8:39pm GMT

The Earth, on Android

Over the previous month I worked on compiling the Marble widget for Android. It was a long and hard road, but it is here:



(I took this screenshot on my phone)

The globe can be rotated, and the user can zoom with the usual zooming gesture. Here is a short video example:
https://www.youtube.com/watch?v=h0i75ryWdgY


The hardest part was figuring out how to compile everything with CMake instead of qmake and Qt Creator. There are some very basic things that can sabotage your successfully packaged and deployed app. For example, if you did not set a version number in CMake for your library...

As you may know, Marble also uses some elements of QtWebKit, but QtWebKit is not supported on Android. So I introduced some dummy classes to substitute for these (of course, not matching their functionality) just to be able to compile Marble for Android.
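
To give an idea of what such a dummy class looks like, here is an illustrative sketch (not Marble's actual code; the class and method names are made up): a stand-in that accepts the same calls as the QtWebKit-based widget normally would, but does nothing, so the rest of the code still compiles and links on Android.

// Illustrative stub: same interface shape, no functionality.
#include <QByteArray>
#include <QString>
#include <QUrl>
#include <QWidget>

class NullWebPopupWidget : public QWidget
{
public:
    explicit NullWebPopupWidget(QWidget *parent = nullptr) : QWidget(parent) {}

    // Calls the real widget would accept, silently ignored here.
    void setUrl(const QUrl &) {}
    void setHtml(const QString &) {}
    void setContent(const QByteArray &) {}
};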

You can find step-by-step instructions here on how to compile Marble Maps for Android:
https://techbase.kde.org/Projects/Marble/AndroidCompiling

The next steps:
We have decided to separate Marble's functionality into two separate apps. Let me introduce Marble Maps and Marble Globe. As their names suggest, Marble Maps will be essentially a map application with navigation, and Marble Globe will be an app where you can switch to other planets, view historical maps, etc., which can also be used for teaching purposes.

The main goal for the summer is to bring Marble Maps to life. But if everything goes fine, Marble Globe can be expected too.

To close this article, here are some screenshots:







01 Jul 2015 6:36pm GMT

Road so far

As GSoC's mid-term is closing in, I thought I'd share what's been done so far! In case you haven't seen my earlier posts, here's a quick reminder of what I'm working on: implementing an OpenStreetMap (OSM) editor for Marble that allows the user to import ".osm" files, edit them with OSM-specific tools, and finally export them into ready-for-upload files. All that inside Marble's existing Annotate Plugin (an editor for ".kml" maps).


What's been done so far?

As one would imagine, OSM (http://wiki.openstreetmap.org/wiki/OSM_XML) has noticeable differences from KML (https://developers.google.com/kml/documentation/kmlreference), the schema upon which Marble is built. These differences, from an OSM perspective, consist mainly of server-generated data such as id, changeset, timestamp, etc., but also of core data elements, such as the <relation> and <tag> tags.

Up until now, I've developed a way to store this server-generated data, mainly by saving it as KML's ExtendedData. Exporting to ".osm" files is now possible as well, so that pretty much makes Marble a KML-to-OSM (and in reverse) translator at the moment (it has some drawbacks, of course).

What was the main challenge?
Not everything can be translated perfectly from OSM to KML and vice-versa, so while translating, I had to ensure as little data as possible is lost.

Since data parsing isn't a really picture-worthy topic, here is an example of a map's journey through Marble's editor:

The OSM version of a highway: "sample highway"

<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="Marble 0.21.23 (0.22 development version)">
<node lat="-23.7082750358" lon="-4.4577696853" id="-1" action="modify" visible="false"/>
<node lat="-21.0946495732" lon="-11.9900406335" id="-2" action="modify" visible="false"/>
<node lat="-16.6010784801" lon="-6.7785258299" id="-3" action="modify" visible="true">
<tag k="name" v="sample placemark"/>
</node>

<way id="-75891" action="modify" visible="true">
<tag k="name" v="sample highway"/>
<tag k="highway" v="residential"/>
<nd ref="-1"/>
<nd ref="-2"/>
</way>
</osm>


The KML version of it after going through Marble's editor. The OSM data (which is irrelevant from a KML perspective) is stored within an ExtendedData block:

<Placemark>
<name>sample highway</name>
<ExtendedData xmlns:osm_data="Marble/temporary/namespace">
<osm_data:OsmDataSnippet id="-75891" visible="true">
<osm_data:tags>
<osm_data:tag k="highway" v="residential"/>
</osm_data:tags>
<osm_data:nds>
<osm_data:nd count="0">
<osm_data:OsmDataSnippet id="-1" visible="false" action="modify"/>
</osm_data:nd>
<osm_data:nd count="1">
<osm_data:OsmDataSnippet id="-2" visible="false" action="modify"/>
</osm_data:nd>
</osm_data:nds>
</osm_data:OsmDataSnippet>
</ExtendedData>
<Style>
<IconStyle>
<Icon>
<href>share/marble/data/bitmaps/default_location.png</href>
</Icon>
</IconStyle>
</Style>
<LineString>
<coordinates>-4.457769,-23.708275 -11.990040,-21.094649</coordinates>
</LineString>
</Placemark>

01 Jul 2015 4:41pm GMT

Reproducible testing with docker

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we're unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a "clean" state after every test is key to reproducibility.

Fortunately we have a lightweight virtualization technology available with linux containers by now, and docker makes them fairly trivial to use.

Docker

Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content, and a process running in it. Let that process be bash and you have pretty much a fully functional linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I'm doing in the container is not affected by what happens on the host system. So, for instance, upgrading the host system does not affect the container.

Also, starting a container is a matter of a second.

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don't have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose to use kdesrc-build, so building all the necessary repositories is the least amount of effort.

Because I'm still editing the code from outside of the docker container (where my editor runs), I'm simply mounting the source code directory into the container. That way I don't have to work inside the container, but my builds are still isolated.

Further I'm also mounting the install and build directories, meaning my containers don't have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host's X11 socket, it's possible to run graphical applications inside a properly set up container.

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).

..with a server

Because I'm typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have, for instance, a John Doe account set up, for which the account and credentials are already configured in the client container), and the server is completely fresh on every start.

Wrapping it all up

Because a bunch of commands is involved, it's worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: "devenv srcbuild install kdepim"
* get a shell in the kdepim dir: "devenv srcbuild shell kdepim"
* start the test environment: "devenv start set1 john"

When starting the environment the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for invitation handling testing and such.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I'm still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I'm using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don't even notice that I'm no longer building on the host system. Sweet.

While I'm using Vim, there's no reason why that shouldn't work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I'm very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup. At least not without a bunch of dedicated machines just for that. I'm likely to invest more in that setup, and I'm currently contemplating dockerizing my development setup as well.

In any case, sources can be found here:
https://github.com/cmollekopf/docker.git


01 Jul 2015 3:22pm GMT

Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (WOFF for short; here using the Aladin font) is several years old. Still, it took some time to get to a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf). TTF fonts are widely known and used on the Windows platform. These feature-rich fonts are used for high-quality font display in the system and in local office and design documents. WOFF aims at closing the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking fonts on paper and in web presentations in almost the same way. In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, since at least version 0.91.1, uses HarfBuzz too for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
For use inside Inkscape one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run:

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as-is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed fonts. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. For that, open the SVG file in a text editor and place a CSS font-face reference right after the opening <svg> element, like:

<svg ...>
<style type="text/css">
@font-face {
font-family: "Aladin";
src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape's file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF-ready, try the woff2otf python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.

01 Jul 2015 2:55pm GMT

Qt3D Technology Preview Released with Qt 5.5.0

KDAB are pleased to announce that the Qt 5.5.0 release includes a Technology Preview of the Qt3D module. Qt3D provides a high-level framework to allow developers to easily add 3D content to Qt applications using either QML or C++ APIs. The Qt3D module is released with the Technology Preview status. This means that Qt3D will continue to see improvements across the API design, supported features and performance before release. It is provided to start collecting feedback from users and to give a taste of what is coming with Qt3D in the future. Please grab a copy of the Qt 5.5.0 release and give Qt3D a test drive and report bugs and feature requests.

Qt3D provides a lot of functionality needed for modern 3D rendering backed by the performance of OpenGL across the platforms supported by Qt with the exception of iOS. There is work under way to support Qt3D on iOS and we expect this to be available very shortly. Qt3D allows developers to not only show 3D content easily but also to totally customise the appearance of objects by using the built in materials or by providing custom GLSL shaders. Moreover, Qt3D allows control over how the scene is rendered in a data-driven manner. This allows rapid prototyping of new or custom rendering algorithms. Integration of Qt3D and Qt Quick 2 content is enabled by the Scene3D Qt Quick item. Features currently supported by the Qt3D Technology Preview are:

Beyond rendering, Qt3D also provides a framework for adding additional functionality in the future for areas such as:

To learn more about the architecture and features of Qt3D, please read KDAB's series of blogs and the Qt3D documentation.

KDAB and The Qt Company will continue to improve Qt3D over the coming months to improve support for more platforms, input handling and picking, import of additional 3D formats, instanced rendering, more materials and better integration points to the rest of Qt. If you wish to contribute either with code, examples, documentation or time then please contact us on the #qt-3d channel on freenode IRC or via the mailing lists.

The post Qt3D Technology Preview Released with Qt 5.5.0 appeared first on KDAB.

01 Jul 2015 1:02pm GMT

30 Jun 2015

feedPlanet KDE

Good bye credativ

As you might know 7 years ago I joined a company called credativ. credativ was and is a German IT company specialized in Open Source support around Debian solutions.

And it was a great opportunity for me: having no business/enterprise experience whatsoever, there was much for me to learn. Dealing with various enterprise and public customers, learning and executing project management, supporting sales as a technician/pre-sales, and so on. Without credativ I wouldn't be who I am today. So thanks, credativ, for 7 wonderful years!

However, everything must come to an end: recently I realized that it's time for me to try something different, to see what else I am capable of, to explore new and different opportunities, and to dive into more aspects of the ever-growing open source ecosystem.

And thus I decided to look for a new job. My future is still with Linux, and it might not be that surprising for some readers - but more about that in another post.

Today, I'd just like to say thanks to credativ. Good bye, and all the best for the future! =)


Filed under: Business, Debian, Linux, Politics, ProjectManagement, Technology, Thoughts

30 Jun 2015 10:37pm GMT

KStars GSoC 2015 Project


This year marks my first year as a Google Summer of Code (GSoC) mentor, and it has been an exciting experience thus far. I have been a KStars developer for the last 12 years and it is amazing what KStars has accomplished in all those years.

Since KStars caters to both casual and experienced astronomy enthusiasts, the KStars 2015 GSoC projects reflect this direction. For the casual, educational, fun side, I proposed the inclusion of constellation artwork to be superimposed on the sky map. KStars currently draws constellation lines, names, and boundaries, but constellation art is missing. We required that the data structure must support multiple sky cultures (e.g. Western, Chinese, etc.) and that the artwork itself must be available under a permissive license. New constellation artwork should be available for download using the KNewStuff framework.

Here is a very early look at the constellation art in KStars. The student still needs to work on scaling and rotation, among other things, but it looks promising! By the end of the project, all 88 Western constellations should be viewable within KStars, in addition to another cultural group.



For the more advanced users who utilize KStars to perform astrophotography, I proposed a simple Ekos Scheduler tool.

Ekos is an advanced astrophotography tool for Linux. It utilizes INDI for device control. With Ekos, the user can use the telescope, CCD, and other equipment to perform astrophotography tasks. However, the user has to be present to configure the options and command the actions for all the astrophotography-related tasks, and hence a scheduler is required to automate observations, constrained within certain limitations such as a required minimum angular separation from the moon, weather conditions, etc. Furthermore, the observations should be triggered when certain conditions are met, such as observation time, the object's altitude, etc.

The Ekos scheduler is still at a very early stage, but the workhorse algorithm responsible for dispatching observation jobs is in the works and should be completed soon. Even though the scheduler is currently an Ekos module, it operates entirely through the Ekos D-Bus interface.



Fortunately for KStars, both projects were accepted in GSoC 2015 and I am glad to be working with two very talented and highly motivated students:
The students have made good progress on the objectives of the projects and have been great when it comes to communication. Being introduced to a new framework and a new paradigm of thinking is a shock to newcomers, who need time to adjust and get the wheels rolling.

I certainly hope the projects stay on track and get completed on time!

30 Jun 2015 6:36pm GMT