01 Aug 2014

feedPlanet KDE

Utopic Alpha 2 Released

Alpha 2 of Utopic is out now for testing. Download it or upgrade to it to test what will become 14.10 in October.

01 Aug 2014 9:21am GMT

Plasma Media Center – DVB Settings Interface

Hello folks \o/

It's been a while since my last update, but the DVB implementation for Plasma Media Center (PMC) is fully functional, so from now on I am polishing the user interfaces. Up until now, I've been working on the settings panel (I uploaded some early snapshots in a previous post), and after a lot of playing, I think that the UI is quite mature now. Below you'll see how the settings panel looks. LibVLC automates most of the settings, which gives me the freedom not to bloat the UI with too many features for the time being. I would really appreciate any feedback and thoughts about the UI. Next week I'll upload some images (maybe a video too) of the final UI of the main DVB-T player in action!

Last but not least, I would really like to thank you all for lending a helping hand with my "call for help". I got a lot of e-mails from you, and they helped me a lot. Kudos.


01 Aug 2014 8:12am GMT

31 Jul 2014

feedPlanet KDE

fonts in the current era

While our CPU clock speeds may not be increasing as they once did and transistor density is slowing, other elements of our computing experience are undergoing impressive evolution. One such area is the display: screen pixel density is jumping and GPUs have become remarkable systems. We are currently in a transition between "low density" and "high density" displays, and the various screens in a house are likely to have a widely varying mix of resolutions, pixel densities and graphics compute power.


In the house here we have a TV with 110 dpi, a laptop with 125 dpi, another laptop with 277 dpi and phones that top out at over 400 dpi!

Now let's consider a very simple question: what font size should one use? Obviously, that depends on the screen. Now, if applications are welded to a single screen, this wouldn't be too much of a problem: pick a font size and stick with it. Unfortunately for us reality is more complex than that. I routinely plug the laptops into the TV for family viewing. If I get my way, in the near future our applications will also migrate between systems so even in cases where the screen is welded to the device the application will be mobile.

The answer is still not difficult, however: pick a nice base font size and then scale it to the DPI. Both of the major proprietary desktop operating systems have built this concept into their current offerings and it works pretty well. In fact, you can scale the whole UI to the DPI of the system.

This still leaves us with how to get a "reasonable font size". Right now we ship a reasonable default and allow the user to tweak this. How hard can it be, right?

Well, currently KDE software is doing two things very, very wrong when it comes to handling fonts in this modern era. Both of these can be fixed in the Frameworks 5 era if the application developers band together and take sensible action. Let's look at these two issues.

Application specific font settings

What do Kate, Konqueror, Akregator, Dolphin, KMail, Rekonq, KCalc, Amarok, KOrganizer, Konsole and, I presume, many other KDE applications have in common? They all allow the user to set custom fonts. Some of these applications default to using the system fonts but still allow the user to modify them. Others always use their own fonts. Neither is good, and the latter is just plain evil.

Kontact, being made up of several of the above applications, is a real pit of font sadness since each of its components manages its own font settings.

This wasn't such a big deal in the "old days" when everyone's screen was equally good (or equally crappy, depending on how you wish to look at it ;). In that world, the user could wade through these settings once and then never touch them again.

Today with screens of such radically different pixel densities and resolutions, the need to quickly change fonts depending on the configuration of the system is a must-have. When you have to change those settings in N different applications, it quickly becomes a blocker.

Moreover, if every application is left to its own devices with fonts, at least some will get it wrong when it comes to properly scaling between different screens. When applications start moving between devices this will become even more so the case than it is now.

The solution is simple: applications must drop all custom font settings.

Before anyone starts hollering, I'm not suggesting that there should be no difference between the carefully selected font in your konsole window and the lovingly chosen font in your email application. (I would suggest that, but I have a sense of the limits of possibilities ... ;) What I am suggesting is that every font use case should be handled by the central font settings system in KDE Frameworks. Yes, there should be a "terminal" font and perhaps one for calendars and email, too. A categorization system that doesn't result in dozens of settings but which serves the reasonable needs of all desktop applications could be arrived at with a bit of discipline.

With that in place, applications could rely on the one central system getting the font scaling right so that when the user changes screens (either connected to the device or the screen the application is running on) the application's fonts will "magically" adjust.

Scaling user interface to font size

The idea to scale the user interface to the font size is one that the current Plasma team has recently decided to adopt. I cannot state strongly enough just how broken this is. Watching the results when plugging that laptop with the 3300x1800 @277dpi screen into the 110 dpi television is enough to make baby kittens cry tears of sadness. The reason is simple: the font sizes need to scale to the screen. When they aren't scaled, the UI becomes comically large (or tragically small, depending on the direction you are going).

.. and that's ignoring accessibility use cases where people may need bigger fonts but really don't need bigger window shadows to go with it, thank you very much.

The answer here is also very simple, so simple that both Apple and Microsoft are doing it: scale the fonts and the UI to the DPI of the screen. Auto-adjust when you can, let the user adjust the scaling to their preference.

The reason Plasma 5 is not doing this is because Qt doesn't report useful DPI information. Neither does xdpyinfo. So now one might ask where I got all those DPI figures at the start of this blog entry. Did I look them up online? Nope. I used the EDID information from each of those screens as reported by xrandr or similar tools. With the help of the monitor-parse-edid tool, which is 640 lines of Perl, I was able to accurately determine the DPI of every screen in this house. Not exactly rocket science.

With DPI in hand, one can start to think about scaling font sizes and the UI as well, independently. It doesn't even require waiting for Qt to get this right, either. All the information needed is right there in the EDID block.
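The arithmetic really is that simple. Here is a minimal C++ sketch (the panel dimensions below are illustrative assumptions, not measurements of any particular screen) that derives the DPI from the EDID-reported physical size and the mode's pixel resolution, then turns it into a scale factor relative to the traditional 96 dpi desktop:

#include <cmath>
#include <cstdio>

// Minimal sketch: derive DPI from the physical size (millimetres, as
// reported in the EDID and printed by xrandr) and the pixel resolution,
// then turn it into a UI scale factor. The 96 dpi baseline and the
// quarter-step rounding are illustrative choices, not a fixed rule.
double dpiFor(int pixels, double millimetres)
{
    return pixels / (millimetres / 25.4); // 25.4 mm per inch
}

int main()
{
    // Assumed example panel: 3200x1800 pixels on roughly 294x165 mm.
    const double dpi = (dpiFor(3200, 294.0) + dpiFor(1800, 165.0)) / 2.0;
    const double scale = std::round((dpi / 96.0) * 4.0) / 4.0;
    std::printf("%.0f dpi -> scale factor %.2f\n", dpi, scale);
    return 0;
}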

There is a caveat: some KVMs strip the EDID information (bad KVM, bad!), older monitors may not have useful EDID info (the last version of the EDID standard was published eight years ago, so this isn't new technology) and occasionally a vendor goofs and gets the EDID block wrong in a monitor. These are edge cases, however, and should not be the reason to hold back everyone else from living life la vida loca, at least when it comes to font and UI proportions.

In those edge cases, allowing the user to manually tweak the scaling factor is an easy answer. In fact, this alone would be an improvement over the current situation! Instead of tweaking N fonts in N places, you could just tweak the scaling factor and Be Done With It(tm). There is even a natural place for this to happen: kscreen. It already responds to screen hotplug events, allows user tweaking and restores screen-based configurations automagically; it could simply add scaling to the things it tracks.

If people really wanted to get super fancy, creating a web service that accepts monitors, EDID blocks and the correct scaling factors according to the user and spits out recommendations by tallying up the input would take an afternoon to write. This would allow the creation of a "known bad" database with "known good" settings to match over time. That's probably overkill, however.

The other edge case is when you have two screens with different DPIs connected to the system at the same time. This, too, is manageable. One option is to simply recognize it as a poorly-supported edge case and keep to one global scaling value. This is the "user broke it, user gets to keep both halves" type of approach. It's also the easiest. The more complex option, and certainly the one that would give better results, is to have per-screen scaling. To make this work, scaling needs to change on a per-window basis based on which screen the window is on. This would be manageable in the desktop shell and the window manager, though a bigger ask to support in every single application. It would mean applications would need to not only drop their in-application font settings (which they ought to anyway) but also make fonts and UI layout factors a per-window thing.

If you are running right now to your keyboard to ask about windows that are on more than one screen at a time, don't bother: that's a true edge case that really doesn't need to be supported at all. Pick a screen that the window is "on" and scale it appropriately. Multi-screen windows can take over the crying for the kittens, who will now be bouncing around with happy delight on the other 99.999% of systems in use.

Build a future so bright, we gotta wear shades

These two issues are really not that big. They are the result of some unfortunate decision making in the past, but they can be easily rectified. As it stands, the way fonts are handled completely and without question ruins the user experience so hopefully as applications begin (or complete) the process of porting to Frameworks 5 they can pay attention to this.

.. and just so nobody feels too badly about it all: all the Linux desktop software I've tested so far for these issues has failed to make the grade. So the good news is that everything sucks just as much .. and the great news is that KDE is perfectly poised to rise above the rest and really stand out by getting it right in the Frameworks 5 era.

31 Jul 2014 6:39pm GMT

Of vectors and scalable things.

Moving away from my original plan, today we will be talking about Vectors.

To start this series of posts I had a main motivator, SVG. It is a great file format, it's the file type I use day in day out and the format I use the most to create all of my images…
But every so often the question about scalable UIs and vectors pops up, and someone will say something like "we should just use vectors to scale things". To that, I will usually reply with something like "Scalable Vector Graphics are scalable but your screen is not", hoping it will close the conversation right there, and it usually does.

However the above statement is only partly correct and is not the definitive reason why we should avoid off-the-shelf vectors as the source image format for our UI assets.

The definition of "scalable" is a bit like being "impassioned".

The way we define "scalable" UIs, as we have seen in past posts, is very peculiar and we tend to use it the way it suits us best, ignoring the practical differences between the different meanings of the concept. Ergo, like being impassioned, the target of our focus is more what we want it to be rather than what it really is.
This tends to produce the confusion and misunderstandings that are so common in this area, precisely because the Scalable part in SVG refers to one particular kind of scalability, and most of the time not the kind of scalable we need in a UI.

So what does scalable mean for a Scalable Vector Graphic?

An SVG, or any other major vector format, is formed (mostly) of mathematical information about paths and their control points (Bézier curves). Its visual size is only relevant in regard to the render size of the canvas it's on, and as a result you can zoom an image almost infinitely and will never see pixels (the pixels are just rendered for a given view area and change according to the section of the vectors in that area).
This makes it a great format for scaling images to really huge sizes. A rendered 40000×40000 px image that is scaled down to 1000×1000 will look exactly like the image originally rendered at 1000×1000.
Now, as we have seen so far, this is often not the type of scalable we want.

SVG's in QML.

You can use SVG in QML right now as a source format, but be aware that it won't be re-rendered unless you tell it to do so; the result is that if you scale it, or even change its width or height, you will end up seeing pixels. You can create bigger renders that provide higher definition and are more zoomable, but at the cost of taking more time to render and using a lot of memory for the cached image.
Also, SVGs can be very complex. I have created SVGs that take several hours to render. Many of my past wallpapers for KDE were done in outline mode and I would only occasionally look at them with filters and colors on, and I do have a powerful desktop to render those; trying similar things on a mobile is not a great idea.
SVG support in Qt is limited: many things won't work, and the filters are mostly not working, so the look will dramatically change. If you expect blur-based drop shadows to work, you will not see those, and the same goes for multiply filters, opacity masks, etc, etc…
So, in a nutshell, don't use SVG as a base source image unless you know its limitations and how it works. It's a wonderful format if you understand its limitations and strengths, and it is sometimes useful in QML.

Vector sunset wallpaper crop - several hours to render on my old Linux PC.

What about other vector formats? Like fonts?

There is a special vector format that we use all the time and that is also scalable, and it's a format that dealt with these problems many years ago: fonts…

Fonts are special types of monochromatic vector paths that can carry special hints to cater to pixel-scaling issues. They do that via two methods: hinting, which snaps the glyph outlines to the pixel grid, and (sub-pixel) anti-aliasing.


All of this magic is done by your local font rendering engine, and the extent to which these operations are applied depends on your local font rendering settings…

Now, since the introduction of QML 2, the way fonts are rendered has changed and, by default, font hinting is ignored. As a result, if you zoom into a font by animating the pixel size you get a nice smooth zoom effect, but slightly blurrier fonts all around, since they are not hinted.
QML does allow you to have native-looking fonts by doing

Text {
    text: "Native here!"
    renderType: Text.NativeRendering
}

However, if you try to do a zoom effect here, by animating the pixel size, you will get a "jumpy" text feeling because, as the size increases, the hinting instructions of the font will keep trying to adjust to an ever-changing pixel grid.
Also, the default method does a vector-like scaling via the distance field method when changing the scale property, whereas when using native rendering you see the pixels of the 1:1 scale ratio being scaled.
Another side effect of the distance field method used by default is that if the scale/font size is very large you start to see the inaccuracies of the method, and this is valid for any font size you pick: the method generates the distance field glyph from a small image based on the font, and it is not updated as you scale it.

Also, if the font is not well formed (glyph bounding box smaller than the glyph itself), it might clip the font in some border areas, with weird visual results.

So, my advice here is: if you need readable text that you don't want to pinch-to-zoom or animate the font size of in any other way, use the native rendering. If you want something dynamic, then go for the default mode. Or you can even try a compromise solution where, while animating, you use the default mode and at the end turn native rendering on. It's mostly a matter of what you consider most important and more polished at the end of the day.
Also noteworthy is that on higher-DPI screens the hinting instructions lose a lot of their importance, since the pixel size and the respective sub-pixel anti-aliasing 'grays' become much smaller and relatively less important in relation to the font body. The same is true for non-square pixels like many (but not all) AMOLED screens have.

Next!

The next post will return to the subject of making scalable X/Y-independent elements that work well with DPI metrics…
By the way, we will be discussing these subjects at the training days of Qt Developer Days 2014 Berlin; if you are interested in them, registration is here.

So see you soon, here on my next post or at DevDays.

The post Of vectors and scalable things. appeared first on KDAB.

31 Jul 2014 3:18pm GMT

Text Splitting and Indexing

Over the last week, we have been working on improving the file searching experience in Plasma. We were mostly doing a decent job, but we were lacking in terms of proper Unicode support and making it simpler to search in non-English languages. This blog post is a simplified explanation of what now goes on internally.

For the purpose of this discussion, I'm going to treat all files as blobs of text.

How does indexing work?

When we're indexing a file we typically have to take all the text and split it into words. This process is called Text Segmentation or Tokenization.

The most trivial implementation is just splitting on any white space. However, in practice it gets far more complex, as punctuation needs to be taken into account. Fortunately, there is an existing standard for this.
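The standard in question is presumably the Unicode text segmentation rules (UAX #29), which Qt exposes through QTextBoundaryFinder. A minimal sketch of splitting a blob of text into words along those boundaries (illustrative only, not the actual Baloo code) could look like this:

#include <QString>
#include <QStringList>
#include <QTextBoundaryFinder>

// Sketch: walk the Unicode word boundaries of a text blob and keep every
// token that contains at least one letter or digit, dropping the runs of
// whitespace and punctuation that lie between words.
QStringList splitIntoWords(const QString &text)
{
    QStringList words;
    QTextBoundaryFinder finder(QTextBoundaryFinder::Word, text);

    int start = 0;
    while (finder.toNextBoundary() != -1) {
        const int end = finder.position();
        const QString token = text.mid(start, end - start);
        start = end;

        for (const QChar &c : token) {
            if (c.isLetterOrNumber()) {
                words << token;
                break;
            }
        }
    }
    return words;
}

Boundaries fall on both sides of every word, so consecutive boundaries bracket either a word or the separators between words; the letter/digit check throws the latter away.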

After obtaining the words, one needs to simplify them. Since Plasma is not tied to one language, we need to do this in a language-independent manner.

Currently we do the following - convert each word to lowercase and strip diacritic marks, so that, for example, árk and Ark both end up stored as ark.

Finally, we're ready to store the words. We generally store them in a big table where every word corresponds to the files it was found in -

ark -> 1, 3, 8
zombie -> 6, 8
…

Here each file is represented by a number in order to save space.

We additionally also store where in the file every word was found. This comes at a cost: with Xapian, storing positional information doubles your database size. This means slower indexing and more IO consumption.
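Put together, what gets stored is essentially a positional inverted index. Here is a rough sketch of the idea in Qt types (the names are illustrative; this is not the real Xapian-backed storage):

#include <QHash>
#include <QString>
#include <QVector>

// Sketch of a positional inverted index: every word maps to the documents
// (numeric ids) it appears in, and to the word positions inside each one.
struct Posting {
    int docId;
    QVector<int> positions; // word offsets within the document
};

class InvertedIndex
{
public:
    // Called once per word occurrence while indexing a document.
    void addOccurrence(const QString &word, int docId, int position)
    {
        QVector<Posting> &postings = m_index[word];
        if (postings.isEmpty() || postings.last().docId != docId)
            postings.append(Posting{docId, {}});
        postings.last().positions.append(position);
    }

    QVector<Posting> lookup(const QString &word) const
    {
        return m_index.value(word);
    }

private:
    QHash<QString, QVector<Posting>> m_index;
};

An AND search then reduces to intersecting the document id lists of the words involved; the positions only come into play for the phrase queries described further down.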

How does searching work?

The initial part of the search process is quite similar to the indexing process. When we get a string to search for, we split it up into words and then simplify each word in exactly the same fashion as we did when indexing.

After this we simply lookup each of the words in the table and return the set of files which matched every word.

For example, if we were searching for the words árk Zombie in the above table, it would look as follows.

ark AND zombie -> (1, 3, 8) AND (6, 8) -> 8.

Phrases

The explanation above works for simple words, but the moment you bring in more complex words, stuff starts to get a little messy.

Imagine searching for an email address vhanda@kde.org. This would be split into 3 words: vhanda, kde and org. We could just search for these 3 words, but that's not exactly what the user expected. They expect these words to appear in that exact order. This is where the positional information that we stored during indexing is used. We now search for those 3 words, but we make sure they appear consecutively.

This does give some minor false positives such as a document containing the text "vhanda kde org". But in general, it gives us what we want. It also allows users to explicitly search for words appearing consecutively.
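The consecutive check itself is cheap, assuming the position lists are kept sorted. A small sketch (again just an illustration of the idea, not the actual implementation):

#include <QVector>
#include <algorithm>

// Sketch: given the sorted positions of two adjacent query words inside one
// document, check whether the second word ever directly follows the first.
// A full phrase match simply chains this test across all the query words.
bool followsDirectly(const QVector<int> &firstPositions,
                     const QVector<int> &secondPositions)
{
    for (int pos : firstPositions) {
        if (std::binary_search(secondPositions.begin(),
                               secondPositions.end(), pos + 1))
            return true;
    }
    return false;
}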

Filtering vs Searching

Searching on the desktop is quite different from searching on the web. Not only are we expected to be much faster, but the wealth of information available is also much smaller. This results in users expecting searches to work as a filter.

When searching on the web, one generally types the full word. On the desktop, however, depending on the feedback, one will often only type a part of the word.

Example: Say we're searching for a file with the name Dominion - The Flood. One can expect the user to start typing Dom, see many other results pop up, and then type flood in order to get the desired file. They might never actually type the full word dominion.

Searching by typing only parts of the word gets more complex from an implementation point of view. We only have a mapping from (word) -> (file). So in order to search for a part of the word, we need to iterate over the table and look for every word which starts with that prefix. This makes the query quite long.

Example: Searching for Fi rol might expand to (fi OR fight OR fill OR finger OR fire) AND (rol OR role OR roller OR rollex)

This whole method of expanding the prefix to every word breaks down when the word is extremely small. Depending on your index, expanding one word could result in over 10000 words. In practice the expansion often grows much, much larger than that, which makes the query slower and consumes a crazy amount of memory to represent. In these cases we typically try to guess which words occur more frequently than others and only expand the word to the most frequently occurring ones.
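A sketch of that expansion, assuming the terms are kept in sorted order so that all words sharing a prefix form one contiguous range (the frequency cap is the illustrative part, not the exact heuristic used):

#include <QString>
#include <QVector>
#include <algorithm>
#include <map>

// Sketch: expand a prefix such as "fi" into the indexed words that start
// with it, keeping only the most frequent ones so the resulting OR query
// stays a manageable size. The std::map stands in for the real term list.
QVector<QString> expandPrefix(const std::map<QString, int> &termFrequency,
                              const QString &prefix, int maxTerms)
{
    QVector<QString> candidates;
    for (auto it = termFrequency.lower_bound(prefix);
         it != termFrequency.end() && it->first.startsWith(prefix); ++it)
        candidates.append(it->first);

    // Keep only the maxTerms most frequent expansions.
    std::sort(candidates.begin(), candidates.end(),
              [&](const QString &a, const QString &b) {
                  return termFrequency.at(a) > termFrequency.at(b);
              });
    if (candidates.size() > maxTerms)
        candidates.resize(maxTerms);
    return candidates;
}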

So, what's changed?

With Plasma 5.1, we've moved away from using Xapian's internal Query Parser and word segmentation engine. We're using our own custom implementation in Qt.

This gives us more control over the entire process, it makes it more testable as we have unit tests for every condition, and lets us modify it in custom ways such as splitting on _, removing diacritic marks and expanding every word when searching for queries.

31 Jul 2014 1:28pm GMT

30 Jul 2014

feedPlanet KDE

GSoC Status Report: Code Completion features

Context: I'm currently working on getting Clang integration in KDevelop into usable shape as part of this year's Google Summer of Code. See the initial blog post for my road map.

While we had basic support for code completion provided by Clang from the beginning (thanks to David Stevens for the initial work), it still didn't really feel that useful in most cases. In the last two weeks I've spent my time streamlining the code completion features provided by Clang.

This blog post is going to be full of screenshots showing the various features we've been working on lately.

Task recap: Code completion

Achievements

Virtual override completion

Simple case

When in a class context, we can now show completion items for methods that are declared virtual in the base class.

KDevelop screenshot: KDevelop showing the "virtual override helper". By pressing Ctrl+Space inside the derived class, KDevelop will propose overriding virtual functions from the base class.

By pressing Enter now, KDevelop automatically inserts the following code at the current cursor position:

virtual void foo()

Oh, no! Templates!

We've spent a bit of work to make this feature work with templated base classes, too. Have a look at this:

KDevelop screenshot: KDevelop showing the "virtual override helper". KDevelop knows the specialized version of the virtual method in the base class and proposes to reimplement it.

Nice, right?

Implement function helper

When encountering undefined methods which are reachable from within the current context, KDevelop offers to implement those via a tooltip.

KDevelop screenshot: KDevelop showing the "implement function helper". By pressing Ctrl+Space in an appropriate place, KDevelop offers to implement undefined functions (this also works for free functions, of course).

By pressing Enter now, KDevelop automatically inserts the following code at the current cursor position:

void Foo::foo()
{
}

This works for all types of functions, be it class member functions, free functions or functions in namespaces. Since this is mostly the same code path as the "virtual override helper" feature, this plays nicely with templated functions, too.

"Switch to Definition/Declaration" feature

Sorry, no pictures here, but be assured: It works!

Pressing Ctrl+, ("Jump to Definition") while having the cursor on some declaration will bring you to the definition. Conversely, pressing Ctrl+. ("Jump to Declaration") on some definition will bring you to the declaration of that definition.

Show viable expressions for current context

Best matches

KDevelop screenshot: KDevelop showing completion items when calling a function. KDevelop offers all declarations that are reachable from and useful for the current context. In addition to that, best matching results are put to the front. As you can see, variable str gets a higher "match" than variable i.

These are some of the features we actually get for free when using Clang. We get the completion results by invoking clang_codeCompleteAt(...) on the current translation unit and iterating through the results libclang offers us. Clang gives highly useful completion results; the LLVM team did an amazing job here.

Another example: Enum-case completion

KDevelop screenshot: KDevelop showing completion items when in a switch-context and after a 'case' token. KDevelop is just offering declarations that match the current context. Only enumerators from SomeEnum are shown here.

You can play around with Clang's code completion ability from the command-line. Consider the following code in some file test.cpp:

enum SomeEnum { aaa, bbb };

int main()
{
    SomeEnum e;
    switch (e) {
    case 
    //   ^- cursor here
    }
}

Now do clang++ -cc1 -x c++ -fsyntax-only -code-completion-at -:7:9 - < test.cpp and you'll get:

COMPLETION: aaa : [#SomeEnum#]aaa  
COMPLETION: bbb : [#SomeEnum#]bbb  

Awesome, right?

Issues: Too many code completion results from Clang

One thing I've found a bit annoying about the results we're getting is that Clang also proposes to explicitly call the constructors/destructors or assignment operators in some cases. Or in other words: it proposes too many items.

Consider the following code snippet:

struct S
{
    void foo();
};

int main()
{
    S s;
    s. 
    //^- cursor here
}

Now doing clang++ -cc1 -x c++ -fsyntax-only -code-completion-at -:8:7 - < test.cc results in:

COMPLETION: foo : [#void#]foo()  
COMPLETION: operator= : [#S &#]operator=(<#const S &#>)  
COMPLETION: S : S::  
COMPLETION: ~S : [#void#]~S()  

Using one of the last three completion results would insert code such as s.S, s.~S or s.operator=. While these constructs point to valid symbols, this is likely undesired.
Solution: We filter out everything that looks like a constructor, destructor or operator declaration by hand.
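A sketch of what such a by-hand filter can look like on top of libclang's completion results (illustrative only, not the exact kdev-clang code): constructors, destructors and conversion functions are identified by their cursor kind, and operators by their name.

#include <clang-c/Index.h>
#include <string>

// Sketch: decide whether a single result from clang_codeCompleteAt() should
// be hidden from the completion popup.
static std::string typedTextOf(CXCompletionString completion)
{
    const unsigned chunks = clang_getNumCompletionChunks(completion);
    for (unsigned i = 0; i < chunks; ++i) {
        if (clang_getCompletionChunkKind(completion, i) == CXCompletionChunk_TypedText) {
            CXString text = clang_getCompletionChunkText(completion, i);
            std::string name = clang_getCString(text);
            clang_disposeString(text);
            return name;
        }
    }
    return std::string();
}

static bool shouldHide(const CXCompletionResult &result)
{
    if (result.CursorKind == CXCursor_Constructor
        || result.CursorKind == CXCursor_Destructor
        || result.CursorKind == CXCursor_ConversionFunction) {
        return true;
    }
    // operator=, operator() and friends show up as ordinary methods whose
    // typed text starts with "operator".
    return typedTextOf(result.CompletionString).compare(0, 8, "operator") == 0;
}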

So, in fact, what we end up showing the user inside KDevelop is:

KDevelop screenshot: KDevelop showing completion items after a dot member access on the variable s. KDevelop is just offering useful declarations, hiding all undesired results from Clang.

Just what you'd expect.

Wrap-Up

Code completion features are mostly done (at least from the point-of-view of what Clang can give us here).

Still, there are other interesting completion helpers that could^Wshould be ported over from oldcpp to kdev-clang, such as Olivier's lookahead-completion feature (which I find quite handy). This is not yet done.

I'm writing up yet another blog post which is going to highlight some of the other bits and pieces I've been busy with during the last weeks.

Thanks!

30 Jul 2014 5:54pm GMT

29 Jul 2014

feedPlanet KDE

Logging in into Picasa 3.9 under Linux

A few years ago I showed my father Picasa under Linux. He liked it and started to use it to upload his photos, and has been using it for almost 6 years, even though Google discontinued Picasa for Linux at version 3.0 (Picasa is at 3.9 now).

Unfortunately, a few weeks ago it seems Google decided to kill support for old APIs on the server side, and Picasa 3.0 for Linux started giving back an error when trying to upload an image ("Could not find POST url" or similar). I suggested waiting to see if they would bring it back, but it seems they haven't, so I've had to fix it for him.

Since he's heavily invested in Picasa, I've had to install Picasa for Windows under Wine to make it work. It has not been trivial to get working, so I'll share it here for others that committed the error of trusting proprietary software and services.

The story is this: installing Picasa 3.9 for Windows under Wine is pretty easy (next, next, next). The problem is, once you are running it, being able to log in. The first problem is that the webview used for login doesn't even show. Most of the interwebs suggest installing ie8 using winetricks to solve that, and it indeed solves the problem of the webview not showing, but I still can't log in (interestingly, the webview will tell you if you wrote the password wrong).

At this point I was stuck for a few hours; I even found some dude that claimed he had installed Google Chrome Frame for Internet Explorer and that had fixed it for him. But not for me.

After a few hours, I stopped trusting the internet and started to think. I have a Windows installation lying around, and I can log in from there, and once logged in Picasa does not ask for the password again, so it must be storing something, no?

So I made a copy of the Program Files folder and compared it after logging in; the folders were exactly the same. So it was not stored there, which makes sense since login is per user, not per machine. Next I tried that weird Personal Folder (the Windows $HOME) but could not find any change either. The last chance was the registry. I used http://www.nirsoft.net/utils/reg_file_from_application.html and saw that when logging in, Picasa writes a few entries under HKEY_CURRENT_USER\Software\Google\Picasa\Picasa2\Preferences, namely GoogleOAuth, GoogleOAuthEmail, GoogleOAuthServices and GoogleOAuthVersion, so I copied these over to the Wine installation (with "wine regedit") and now my father can run Picasa just fine again.

Lessons learned:
* Non-free software will eventually come back and hit you; if possible, don't use it for stuff that is critical to you
* Think about your problem; sometimes it's easier than just googling random instructions from the internet.

29 Jul 2014 9:50pm GMT

ownCloud 7 Release Party August 8, Berlin

In a little over a week, on the 8th of August, you're all invited to join Danimo, Blizz and myself at a release party to celebrate the awesomeness that is ownCloud 7 in Berlin!



When and where

We will gather at 7pm at the Wikimedia office in Berlin:
Tempelhofer Ufer 23/24
10963 Berlin
Germany
It is awesome that we can use their office, a big thank you to our fellow data lovers!!

So we start to gather at 7 and around 7:30 we'll have a demo of/talk about ownCloud 7. We will order some pizza to eat. After that: party time!






29 Jul 2014 6:09pm GMT

Rohan on ubuntuonair.com

Kubuntu Ninja Rohan was on today's ubuntuonair talking about Plasma 5 and what is happening in Kubuntu. Watch it now to hear the news.

29 Jul 2014 4:07pm GMT

YouID Identity Claim

di:sha1;eCt+TB1Pj/vgY05nqB48sd1seqo=?http=trueg.selfhost.eu%3A8899


29 Jul 2014 1:54pm GMT

[GSoC'14]: Chronicle of a hitchhiker’s journey so far

nuqneH [Klingon | in English- "Hello"], I am Avik [:avikpal] and this summer I got the opportunity to work with Andreas Cord-Landwehr [:CoLa] to contribute to the KDE-Edu project Artikulate. My task is to implement a way so as to tell a learner how well his/her pronunciation is compared to a native speaker.

Let me warn you about a couple of things beforehand; firstly, the post below is going to be a bit lengthy to read, but I have tried to keep things interesting; secondly, I have a habit of addressing people by their IRC nicks, though I have tried to put their real names as well ;)

So let me dive right into what I have been doing for the last couple of months. The first thing I had to do was to port Artikulate to QtGStreamer 1.0. The API changes in QtGStreamer mainly follow the changes performed in GStreamer 1.0. The biggest change is that Gst::PropertyProbe - or in our case QGst::PropertyProbePtr - is gone, which results in a compilation error. So the related code had to be adapted, i.e. worked around, to do the same. I got some great insights and tips from George Kiagiadakis [:gkiagia] and Diane Trout [:detrout] at #qtgstreamer and finally resolved this.

But still I was getting a runtime error, because Artikulate was linking to both libgstreamer-1.0.so.0 and libgstreamer-0.10.so.0. It is a very common problem, as GStreamer does not use symbol versioning and in some cases programs end up linking to both of them through indirect shared library dependencies. I used pax-utils and lddtree (thanks to CoLa for telling me about these two great tools) to find the cause of the linking error. It turned out that libqtwebkit.so.4 links the GStreamer 0.10 shared library as its dependency. CoLa got libqtwebkit built against GStreamer 1.0 and did some code changes and refactoring.

Also, we decided against keeping the Phonon multimedia backend, and Artikulate now supports only the GStreamer backend. To be precise, Artikulate is now at QtGStreamer 1.2, and for the last few days the CI system has had it as well. This is just a heads-up - I will let CoLa share the details of this work himself, so stay tuned.

For pronunciation comparison I had initially decided to generate a fingerprint of the audio file and then compare the two fingerprints (i.e. learner pronunciation and native pronunciation). Most of the phrases available with the trainer have one or two syllables and are around 4-5 seconds in duration. The present Chromaprint APIs don't generate distinguishable fingerprints for audio of such short duration. I talked to Lukas Lalinsky from Acoustid about how the Chromaprint library could be tweaked so as to get distinguishable fingerprints for short audio files. Chromaprint does an STFT analysis (FFT over a sliding window), and the window size and overlap determine how much data the algorithm generates. I went on trying to improve the results by tweaking the library, but it was only giving me erratic data.

This was the time when I decided that it would be prudent to start working on writing a very basic audio fingerprint generator to cater my purpose. The concept is well discussed and illustrated in numerous papers and blogs so it wasn't hard to break it up into modules.

The first job was to generate a spectrogram of the audio clip. I used the sox API to generate a spectrogram - the following image illustrates such a spectrogram.

Spectrogram of 'European Union' pronounced by me in Bengali

Next I wrote code to find the peaks in amplitude, where a peak is a (time, frequency) pair corresponding to an amplitude value which is the greatest in a local neighborhood around it. Other pairs around it are lower in amplitude, and thus are less likely to survive noise.
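A sketch of that local-maximum search, treating the spectrogram as a plain 2D array of amplitudes (the neighborhood radius is an arbitrary illustrative choice, not what the final code will use):

#include <vector>

struct Peak {
    int time;      // column in the spectrogram
    int frequency; // row in the spectrogram
};

// Sketch: a (time, frequency) cell is a peak if no cell within the square
// neighborhood around it has a higher amplitude.
std::vector<Peak> findPeaks(const std::vector<std::vector<double>> &amplitude,
                            int radius)
{
    std::vector<Peak> peaks;
    const int rows = static_cast<int>(amplitude.size());
    const int cols = rows > 0 ? static_cast<int>(amplitude[0].size()) : 0;

    for (int f = 0; f < rows; ++f) {
        for (int t = 0; t < cols; ++t) {
            bool isPeak = true;
            for (int df = -radius; df <= radius && isPeak; ++df) {
                for (int dt = -radius; dt <= radius && isPeak; ++dt) {
                    const int nf = f + df;
                    const int nt = t + dt;
                    if (nf < 0 || nt < 0 || nf >= rows || nt >= cols)
                        continue;
                    if (amplitude[nf][nt] > amplitude[f][t])
                        isPeak = false;
                }
            }
            if (isPeak)
                peaks.push_back({t, f});
        }
    }
    return peaks;
}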

My next job is to group these neighborhood peaks into collections/bins and then use a hash function to get the final fingerprint. I am currently working on implementing this part.

Now, to get the peaks out of the spectrogram I first found the histogram of the image, and there came an idea to see how different the histograms of the spectrograms of two pronunciations are. There are several statistical ways to compare histograms, and so far the results that I have found are quite promising. I shall try to demonstrate using an example.

I asked CoLa for audio recordings of the word "weltmeisterschaft" [World Championship in English] and he sent me several recordings - let me take a couple of those.

And its spectrogram looks like this-

CoLa's pronunciation (sample 1)

And this is another sample from CoLa

And its spectrogram looks like-

CoLa's pronunciation (sample 2)

It may be noted that in above two spectrograms there is only a linear shift by a small amount which is expected and desired.

Before giving examples of my pronunciations, let me clarify how I have compared the two histograms. To compare two histograms (H1 and H2), we first have to choose a metric (d(H1,H2)) to express how well both histograms match. I have computed four different metrics: Correlation, Chi-Square, Intersection and Bhattacharyya distance.
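These four are the classic histogram comparison metrics (the same set that, for example, OpenCV's compareHist offers). A plain sketch of them, assuming both histograms have the same number of bins and are normalised to sum to 1 where the formula needs it - an illustration of the math, not necessarily the exact code used here:

#include <algorithm>
#include <cmath>
#include <vector>

// Correlation: 1.0 is a perfect match, lower is worse.
double correlation(const std::vector<double> &h1, const std::vector<double> &h2)
{
    const size_t n = h1.size();
    double m1 = 0, m2 = 0;
    for (size_t i = 0; i < n; ++i) { m1 += h1[i]; m2 += h2[i]; }
    m1 /= n; m2 /= n;

    double num = 0, d1 = 0, d2 = 0;
    for (size_t i = 0; i < n; ++i) {
        num += (h1[i] - m1) * (h2[i] - m2);
        d1  += (h1[i] - m1) * (h1[i] - m1);
        d2  += (h2[i] - m2) * (h2[i] - m2);
    }
    return num / std::sqrt(d1 * d2);
}

// Chi-square: 0 is a perfect match, higher is worse.
double chiSquare(const std::vector<double> &h1, const std::vector<double> &h2)
{
    double sum = 0;
    for (size_t i = 0; i < h1.size(); ++i)
        if (h1[i] > 0)
            sum += (h1[i] - h2[i]) * (h1[i] - h2[i]) / h1[i];
    return sum;
}

// Intersection: higher means a better match.
double intersection(const std::vector<double> &h1, const std::vector<double> &h2)
{
    double sum = 0;
    for (size_t i = 0; i < h1.size(); ++i)
        sum += std::min(h1[i], h2[i]);
    return sum;
}

// Bhattacharyya distance: 0 is a perfect match, higher is worse.
// Assumes both histograms are normalised so that their bins sum to 1.
double bhattacharyya(const std::vector<double> &h1, const std::vector<double> &h2)
{
    double sum = 0;
    for (size_t i = 0; i < h1.size(); ++i)
        sum += std::sqrt(h1[i] * h2[i]);
    return std::sqrt(1.0 - std::min(1.0, sum));
}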

Next I present to you my first attempt at pronouncing "weltmeisterschaft"

Yeah, I admit, even though it was my first attempt I obviously could have done better, and it sounds kind of like "wetmasterschaft". And here is the report card (err… spectrogram) of my poor performance.

My first attempt

But I was not ready to give up yet…. I made some disgusting though necessary gurgling sounds and tried to set my vocals into tune and this is what I came up with.

and its spectrogram looks like

weltmeisterschaft- by me after a few attempts

Now I shall show you how the comparison metrics stack up - for the correlation and intersection methods, the higher the metric the more accurate the match, and for the other two, the lower the metric the better the match.

*This is actually a comparison between CoLa's two pronunciations of the same word, against which the rest are compared - just to give a sense of the accuracy achievable.

The next job is to converge on a single metric which will take into account all four metrics that I currently have. Meanwhile I will also work on the fingerprinting part, as it would also make it possible to point out the specific parts of a pronunciation in which further improvement is needed. I am working on removing noise from the spectrograms, as it is needed for finding the intensity peaks (part of the fingerprinting work) - I have finished writing code to find an intensity threshold for the noise from the histogram.

Below is a histogram of the spectrogram of my somewhat better pronunciation of "weltmeisterschaft"-

Histogram - different colours depict different channels

I hope to combine all these modules into a standalone application and share it with community members for their testing; meanwhile you may use the code at https://github.com/avikpal/noise-removal-and-sound-visualization and test it yourself.

Now it's time to fire up the warp machine, but even in a parallel universe I will be eagerly listening to #kde-artikulate with my identifier being "avikpal" for any kind of suggestions and/or queries. You may also mail me at avikpal[dot]me[at]gmail[dot]com.

Qapla'![Klingon | in English- "Good-bye"] until next time.


29 Jul 2014 11:43am GMT

28 Jul 2014

feedPlanet KDE

Layout Guidelines: A quick example

For planet readers: This post is written by Andrew Lake

We recently released new layout guidelines to aid with designing applications and plasmoids. So I wanted to provide a quick example of how to use the guidelines to design the layout for an imaginary calendar application.

A quick design for an imaginary calendar app


To choose an application layout, the new guidelines encourage awareness of the functions and commands provided by your application as well as the structure of the content provided by your application.

So let's start with commands. Suppose we want the primary function of our calendar application to be providing a daily, weekly or monthly schedule of the user's upcoming work or personal events. The user, Sue, would also like to be able to add events to her schedule. There are many other functions for a calendar application that I'm sure we're all aware of. We're not designing in a vacuum - there are many calendar applications from within KDE and elsewhere from which to draw inspiration. For the sake of this example though, let's start with the described functions and commands.

The guidelines suggest layout patterns for simple, complex and very complex command structures. So where does our calendar app fit? Well, I wasn't quite sure either. And that's ok! Some things are tough to know until you start delving into the design work. The guidelines suggest starting with a pattern for a simple command structure when you're not sure. So that's what I did. As I started putting together a design and thinking about how Sue would use it for the purposes described, it became clear that not only were there several other desirable functions (like switching calendars, setting up calendar accounts, setting calendar colors, and more) but there are also certain commands Sue might use quite often (like switching between a day, week and month view of her schedule, adding an event and quickly getting back to today after browsing forward or back in time). So I settled on the suggested Toolbar + Menu Button command pattern for a complex command structure.


The mockup toolkit provides an example:


The content structure for a calendar is pretty much flat: just a collection of days (with or without events). I wanted to show a single day view, a week view (7 days) or month view (28 - 31 days) as well as properties related to the current view or selection, like the date, the agenda for the current day or view and the active calendars. So I settled on a Selection-Properties navigation pattern from the recommended patterns for a flat content structure.


The mockup toolkit provides an example of a Selection-Properties navigation pattern combined with a Toolbar+Menu Button.


Now I have a basic layout I can use for the rest of the design work. I put what I think will be the most frequently used commands on the toolbar - Today, Day, Week and Month views as well as a command to add an event.


Many of the other commands, like setting up calendar accounts and the like, are exposed through the menu button. I design a week view using the recommended design color set, occasionally checking the typography guidelines. For the properties panel, I draw some inspiration from Sebastian Kügler's great design work on the new clock plasmoid panel popup for the current date, to achieve some additional visual consistency with the desktop. I also decided to add a mini month view for convenience and a legend for the active calendars (possibly directly editable?).


Put it all together and we have a quick design for an imaginary calendar application.


It's not a complete design in any sense of the word - icons, day and month views aren't shown, nor are calendar settings and the like. But it's probably enough to, for example, start a review on the VDG forums to get feedback from our fellow intrepid designers, the usability team and/or potential developers.

Just to be clear, you still have to design. Design is a creative activity. While guidelines can provide a sandbox, it still requires creativity within that sandbox. Often that means that the best way is to just start and figure it out as you go. For me, that's usually a bumpy trial and error process which I simply accept rather than agonize about. And no, you're not allowed to declare that you're not creative! :-)

Always feel free to ask for help or feedback on the VDG forums - it's a great place for us to learn together. This design was done using the mockup toolkit, but just use whatever tool you're comfortable with, including just sketching on paper and taking a picture of it. Don't wait. Don't hesitate. Just do. The long term hope is that these new layout guidelines will provide enough flexibility to create layouts suitable across the full spectrum of KDE applications while also helping to achieve layout consistency where it actually makes sense. Like all guidelines, it is a living document which we'll update collaboratively over time. We'll also do more examples like this in the future.

Hope this helps!




28 Jul 2014 9:35pm GMT

Monday Report: Creating Documentation

Yesterday we reached a small but important goal, we completed the first of our self-set tasks for this cycle, namely finishing our application layout guideline. The guideline has been presented to the community to gather feedback and was moved to the HIG wiki yesterday.
If you think something blatantly obvious has been overlooked, don't be afraid to speak up. The guidelines can always be improved. Keep in mind though that we intend to make more visual examples available in the guidelines. As of now we were mainly concerned with the textual content of the guidelines.
The guidelines are intended to standardise which layout elements KDE applications use and to help point out which layout is best suited for a specific type of application.
In addition to the release of the application layout guidelines, Andrew Lake has updated the mockup toolkit with some examples of how one can use the layout guidelines in combination with the toolkit to prototype application designs.



Another project that is starting to take shape is the new network system settings module. This module is intended to be one of the first with an updated design for the new system settings.
This work is in its early planning phase so if you have something important to say, chime in!

On the window decoration front, Martin Gräßlin has blogged about the advances he's made. If you haven't seen the blog post yet, give it a read; it provides very interesting insights into the internals of KWin's new decoration API. Now it's time for us to work on the details like padding, etc. We're very pleased with the amount of progress being made, which shows again how absolutely awesome the KDE developers are.

Community

We are very happy to see the increased community activity in the forums. Last week a whole bunch of new users arrived in the forums and started to give all kinds of useful feedback and ideas.
It's very impressive what a difference a few motivated individuals can make. We hope to integrate all of you into our workflow as well as possible and try not to miss any of your ideas. If that happens anyway, don't be afraid to voice your ideas again.


28 Jul 2014 5:09pm GMT

meta-kf5 usable

Finally I've had the time to work over the final issues in meta-kf5. Right now, I build most tier 1 and tier 2 components. I've packaged most functional modules and integration modules from these tiers.

When it comes to integration modules, there might be missing dependencies that need to be added - but that should not be too hard to add.

To be able to create usable CMake files, I had to employ a small hack that modifies the CMake files from KF5 before installing and packaging them. This seems to work (i.e. tier 2 builds), but there might be other sed expressions that are needed.

Also, the autotests are not built as long as Qt5Test is left out of the build. If you were to add Qt5Test, I believe the unit tests would be included in the same package as the libs. I'll address this as I integrate the autotests into ptest.

Summing up all of this, I'd say that the meta-kf5 layer is now usable!

That is all for now. As always, contributions are welcome! If you find a use for this, I'd be happy to add your project as a reference to the layer!

28 Jul 2014 3:21pm GMT

New Kubuntu Plasma 5 Flavour in Testing

Kubuntu Plasma 5 ISOs have started being built. These are early development builds of what should be a Tech Preview with our 14.10 release in October. Plasma 5 should be the default desktop in a future release.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

28 Jul 2014 10:33am GMT

Kubuntu Plasma 5 ISOs Rolling

KDE Project:

Your friendly Kubuntu team is hard at work packaging up Plasma 5 and making sure it's ready to take over your desktop sometime in the future. Scarlett has spent many hours packaging it and now Rohan has spent more hours putting it onto some ISO images which you can download to try as a live session or install.

This is the first build of a flavour we hope to call a technical preview at 14.10. Plasma 4 will remain the default in 14.10 proper. As I said earlier, it will eat your babies. It has obvious bugs, like the kdelibs4 theme not working and mouse themes only sometimes working. But also be excited, and if you want to make it beautiful, we're sitting in #kubuntu-devel having a party for you to join.

I recommend downloading by torrent or, failing that, zsync; the server it's on has small pipes.

Default login is blank password, just press return to login.

28 Jul 2014 10:20am GMT