30 Jul 2016

feedPlanet KDE

Normal People that Uses Linux IV

Were you guys missing those small posts in which I describe my non-geeky friends who, for one reason or another, started to use Linux?

It's been quite a long time since I last wrote anything about this, but then four of my non-programmer, non-geeky friends asked me to help them get started with Linux and explain the main differences from other operating systems. So this time I introduce you to Loli, a.k.a. Pacifica.

Pacifism and Sunset


Pacifica is a dear colleague in the massotherapy field and an awesome teacher at the "Curso Baiano de Massoterapia"; as usual, when I visit Bahia we get together to exchange techniques. This time, however, she complained to me about her computer: "It takes around five minutes to open a single document, it's terrible to work with." The usual questions got the usual answers:

"But what is your computer?"
"A not really good AMD dual core with 2 GB of RAM; it wasn't bad when I bought it, but now it's terrible."
"And your Windows version?"
"Windows 10, but I set its preferences to favor speed instead of beauty."

So I took a look at her computer; I didn't need more than a few seconds. "Look, it's full of viruses, I cannot access any of the preferences and you are the admin user. Don't you want the other option?" So we spent the rest of the evening talking about how Linux differs from other operating systems and what a desktop environment is (for people used to the *nix way, a desktop environment is something really common, almost like breathing, but for those coming from Windows it can be a really daunting new concept), and since her computer was old and short on RAM we discussed the lightweight ones: MATE and LXQt.

Then I downloaded Linux Mint to install on her computer, and we hit the first bummer: the latest Mint failed to recognize her partition scheme, and she didn't want to lose the documents on her second partition. A bit bitter about that, I downloaded Arch instead; since it is my Linux box of choice, I know how to deal with almost everything that hits it from time to time (also, the wiki is awesome).

Arch installed with SDDM and LXQt, libreoffice-next, google-chrome (for Netflix), VLC, and all that jazz. She wanted to try the document that took almost five minutes to open on Windows: it opened in around 30 seconds on her shiny new Linux box. And yes, I know that 30 seconds is still quite a lot of time to open a single document, but it was way, way faster than her old setup.

This is her first week dealing with Linux, so I'm pretty sure she will hit something along the way, but for that there's always the wiki, and me to answer her questions.

30 Jul 2016 1:45pm GMT

Convert iPhone contacts to vCard

On a recent troubleshooting attempt, I lost all the contacts on my Android phone. It had also received a recent update which took away the option to import contacts from another phone via Bluetooth.
I still had the contacts on my old iPhone, but with mass transfer via Bluetooth gone, it was a question of manually sending each contact in vCard format to the Android phone. That meant I should probably find a less dreadful way to get the contacts back.

Here is one way to extract contacts en masse from an iPhone into the popular vCard format. The contact and address details in iOS are stored by the AddressBook application in a file named 'AddressBook.sqlitedb', which is an SQLite database. The idea is to open this database using sqlite, extract the details from a couple of tables and convert the entries into vCard format.

Disclaimer: the iPhone is an old 3GS running iOS 6 and it is jailbroken. If you attempt this, your mileage may vary. The required tools are usbmuxd (especially libusbmuxd-utils) and sqlite, with the prerequisite that an OpenSSH server is running on the jailbroken iPhone.

  1. Connect the iPhone via USB cable to the Linux machine. Run iproxy 2222 22 to connect to the OpenSSH server running on the jailbroken phone. iproxy comes with the libusbmuxd-utils package.
  2. Copy the AddressBook SQLite database from the phone:
     scp -P 2222 mobile@localhost:/var/mobile/Library/AddressBook/AddressBook.sqlitedb .

Instead of steps 1 and 2 above, it might be possible to copy this file using the Nautilus (gvfs-afc) or Dolphin (kio_afc) file manager, although I'm not sure if the file is accessible that way.

  3. Extract the contact and address details from the SQLite db (based on this forum post):
     sqlite3 -cmd ".out contacts.txt" AddressBook.sqlitedb "select ABPerson.prefix, ABPerson.first, ABPerson.last, ABPerson.organization, c.value as MobilePhone, h.value as HomePhone, he.value as HomeEmail, w.value as WorkPhone, we.value as WorkEmail, ABPerson.note from ABPerson left outer join ABMultiValue c on c.record_id = ABPerson.ROWID and c.label = 1 and c.property = 3 left outer join ABMultiValue h on h.record_id = ABPerson.ROWID and h.label = 2 and h.property = 3 left outer join ABMultiValue he on he.record_id = ABPerson.ROWID and he.label = 2 and he.property = 4 left outer join ABMultiValue w on w.record_id = ABPerson.ROWID and w.label = 4 and w.property = 3 left outer join ABMultiValue we on we.record_id = ABPerson.ROWID and we.label = 4 and we.property = 4;"
  4. Convert the extracted contact details to vCard format (the fields are pipe-separated; note that field 9 is the work email and field 10 the note):
     cat contacts.txt | awk -F\| '{print "BEGIN:VCARD\nVERSION:3.0\nN:"$3";"$2";;;\nFN:"$2" "$3"\nORG:"$4"\nEMAIL;type=INTERNET;type=WORK;type=pref:"$9"\nTEL;type=CELL;type=pref:"$5"\nTEL;TYPE=HOME:"$6"\nTEL;TYPE=WORK:"$8"\nNOTE:"$10"\nEND:VCARD\n"}' > Contacts.vcf
  5. Remove the empty content lines if some contacts do not have all the different fields:
     sed -i '/.*:$/d' Contacts.vcf
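If the awk one-liner feels opaque, the same conversion can be sketched in Python. This is a hypothetical alternative of my own, assuming the pipe-separated contacts.txt produced in step 3 (the sample row and names are made up):

```python
# Convert one pipe-separated contact row (as produced by the sqlite3
# query above) into a vCard 3.0 entry. Field layout:
# prefix|first|last|org|mobile|home|home_email|work|work_email|note
def row_to_vcard(line):
    fields = line.rstrip("\n").split("|")
    prefix, first, last, org, mobile, home, home_email, work, work_email, note = fields
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        "N:%s;%s;;;" % (last, first),
        "FN:%s %s" % (first, last),
        "ORG:" + org,
        "EMAIL;type=INTERNET;type=WORK;type=pref:" + work_email,
        "TEL;type=CELL;type=pref:" + mobile,
        "TEL;TYPE=HOME:" + home,
        "TEL;TYPE=WORK:" + work,
        "NOTE:" + note,
        "END:VCARD",
    ]
    # Drop properties with empty values, like the sed step does.
    return "\n".join(l for l in lines if not l.endswith(":")) + "\n"

# Example with a made-up row (empty prefix, home and work numbers):
sample = "|John|Doe|ACME|555-0100||||j@acme.test|met at FOSDEM"
print(row_to_vcard(sample))
```

To process the whole file you would loop over contacts.txt and write each result into Contacts.vcf, which is exactly what the awk pipeline does in one pass.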

Now simply transfer the Contacts.vcf file containing all the contact details to the Android phone's storage and import the contacts from there.

Tagged: hacking, linux, mac

30 Jul 2016 9:16am GMT

29 Jul 2016


The Vision Quest


If you happened to read my last blog post you saw that I fixed all the assistants to work with an OpenGL 3.2 Core Profile. Let's take a step back and see why this fix was necessary and what was wrong with the code.


In Krita 3.0 we introduced 'Instant preview'. This is a mechanism for speeding up big brush strokes on large canvases and uses OpenGL3. Before this mechanism Krita exclusively used OpenGL2 and below.

A side-effect of OpenGL3 is that it deprecated some functions from older versions. Now, normally this isn't a problem as Windows and Linux support a thing called Compatibility Profile, which allows the user to use new and deprecated functions together.

However, on Mac OS this compatibility profile is not supported, which leaves us with two choices: either don't use any OpenGL 3 functionality at all, or remove all deprecated functions from our code.

Our solution

By now you might have guessed that we chose the latter option and set out to remove all deprecated functionality. The problem here is that not all of this legacy code is in Krita; some of it is actually in Qt (the library we use for the graphical user interface). More specifically, we use some Qt functions that contain legacy code to draw our canvas decorations (brush outline, assistants, etc.).

Since we don't have direct control over the Qt code, we decided to copy their legacy code into Krita and use this copied code to implement our fix. This means that drawing the decorations would now make use of our copied code instead of the Qt code. Ultimately however, we don't want to keep using this copied code as it would be a nightmare to keep up-to-date with the current Qt version. Therefore, the plan was to implement our fix and send our patch back to the guys over at Qt for them to merge it into their library.

So what did this fix involve exactly?



Making a custom Qt installation

As you saw the fix worked well using our copied Qt code, but now the next step was to move these fixes to the current Qt 5.7 code. Of course, Qt 5.7 contained some changes that weren't in our old copied code, so I had to merge my changes manually into the new files. Luckily this all went well and my first custom Qt installation was born.

And then, le moment suprême, we ran Krita with this custom Qt version...

Well.. unfortunately it didn't run...

On start-up Krita complained that there wasn't a valid context bound or that the OpenGL implementation does not support buffers. This happened in a piece of code that is completely unrelated to my fixes, but one that I luckily recently had a look at.

The unnerving thing is that my fix contains nothing that meddles with the OpenGL context and doesn't touch the file that gave the error. What's even worse, when debug printing the current context in that file it looked perfectly intact. So what could possibly be causing this?

Well it turned out that there is no such error when I run Krita without my fix, so it had to be something I had done. Alas, there was nothing left to do but to very slowly remove parts of my fix until the error stopped appearing, while at the same time keeping the code runnable.

Finally, I found the troublesome piece of code. It was already present in Qt and I had commented it out as it is chock-full of deprecated functions. The act of commenting out this piece of code apparently has severe consequences on unrelated files. I have no idea why...

Uncommenting this piece of code no longer caused any issues and fixed the error, soooo... ¯\_(ツ)_/¯

Sending the patch to Qt

Last Wednesday I cleaned up the fixes and sent in a change request to the Qt people. Over the coming days we will discuss the best way to implement parts of it in preparation of them taking in the changes so that we may drop our copied code and just use Qt as-is.

Their vision is to keep support for deprecated functionality, but to also allow the user to pick an OpenGL3.2 Core Profile which removes all these functions. This means I will have to implement checks in the fix to see which profile the user has requested. This incremental preparation of the patch will happen over a couple of weeks as we get closer to a solution we are both happy with.

Bonus talk

As one might imagine it is not a super fast process to update fixes and wait for comments and critique from the patch reviewers. This leaves me with some extra time in my Summer of Code to look at other parts of Krita. In particular, I am interested in the deep dark depths of the Krita painting engine.

The first part of these depths that I looked at was the way in which parts of the canvas are updated as people paint on it. This happens in a tile-based manner.
The canvas is divided into tiles of size 256x256, and as paint strokes hit certain tiles, only those get updated. An image would look something like this to Krita internally:

You notice I've drawn some red borders around the tiles. These borders represent where we extend each tile by 16 pixels on every side. This tile + border together is a 256x256 texture (so the effective size of the actual image tile is only 224x224).

Why do we extend each tile by 16 pixels? Well we keep what is called 'levels of detail' of the image. Effectively what this means is that we keep lower quality versions of the image (also called mip-maps). These levels of detail are progressively lower in resolution by powers of 2. So if the original image had a resolution of 1024x1024 its mip-maps would be: 512x512, 256x256, 128x128, 64x64 etc.

To see why these levels of detail are useful we have to dive into the implementation of 'Instant preview'. Essentially, what that mechanism does is simulate the user's brush stroke on a lower level of detail, where it is much faster to calculate, and show this preview to the user, while in the background it applies the brush stroke to the actual image. This gives the user an 'instant preview' of the brush stroke and retains the integrity of the image.

But I still haven't told you about why we need this border around the image. Well this has to do with the filtering we perform. To show a high-quality image at all zoom levels we might apply filters such as bilinear interpolation. For every pixel you see on screen bilinear interpolation takes the 4 pixels closest to the pixel you want to calculate and averages these according to how close they are.

In the image below you see a pixel with an imprecise position (because it has been zoomed in/out) called (x, y) for which we want to calculate the colour, and the 4 closest pixels in the actual image (x1,y1), (x2,y1), (x2, y2) and (x1, y2). The colour of the pixel is then taken as the average of the colour of the other pixels multiplied by the area the pixel directly diagonal from it takes up.
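To make the weighting concrete, here is a minimal sketch of bilinear interpolation in Python. This is my own illustration, not Krita's actual code; it assumes a grayscale image indexed as img[y][x]:

```python
def bilinear(img, x, y):
    """Sample image `img` (a list of rows of pixel values) at the
    fractional coordinates (x, y) by averaging the 4 nearest pixels,
    each weighted by the area of the opposite sub-rectangle."""
    x1, y1 = int(x), int(y)      # top-left neighbour (x1, y1)
    x2, y2 = x1 + 1, y1 + 1      # bottom-right neighbour (x2, y2)
    fx, fy = x - x1, y - y1      # fractional offsets in [0, 1)
    return (img[y1][x1] * (1 - fx) * (1 - fy)   # weight of (x1, y1)
            + img[y1][x2] * fx * (1 - fy)       # weight of (x2, y1)
            + img[y2][x1] * (1 - fx) * fy       # weight of (x1, y2)
            + img[y2][x2] * fx * fy)            # weight of (x2, y2)

# Sampling exactly halfway between four pixels averages all of them:
img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # → 100.0
```

Notice that sampling near the right or bottom edge would index pixels outside the image, which is exactly the problem the border described below avoids.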

Now you have an idea of how bilinear interpolation works, you might ask yourself how this works when the pixel is at the edge of the image. Because obviously there aren't any pixels outside of the image to sample colours from.

Well this is exactly why we need an extra border of pixels around the image. We need at least one extra pixel around the image in order to handle the corner cases of bilinear interpolation. But what colour should this border be? It should be the colour of the pixel directly next to it! So in a way we are just taking all the pixels of the image edge and copying them to form a 1 pixel border.

But.. we have a 16 pixel border? Here is where the mip-mapping comes in. If we want to have a 1 pixel border at the lowest level of detail, we should have a border that is 2 pixels on the next higher level of detail (LoD). This is the case because if the second-to-lowest LoD is halved in size to form the lowest LoD we end up with a 1 pixel border.

In Krita we store five levels of detail (including the original image) and so we need a 1px, 2px, 4px, 8px and finally a 16px border on the original image.
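The arithmetic from the last few paragraphs can be sketched like this (my own illustration, assuming five levels of detail and a border that halves with each level, as described above):

```python
levels = 5  # the original image plus four mip-maps

# Border width per level, from the lowest LoD (needs 1px) up to the
# original image: it doubles at each finer level.
borders = [2 ** i for i in range(levels)]
print(borders)       # → [1, 2, 4, 8, 16]
print(max(borders))  # border on the original image: 16px

# Mip-map resolutions for a 1024x1024 image, halving each level:
sizes = [1024 >> i for i in range(levels)]
print(sizes)         # → [1024, 512, 256, 128, 64]

# A 256x256 texture minus a 16px border on every side leaves a
# 224x224 effective image tile:
print(256 - 2 * 16)  # → 224
```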

So far I have been talking about these borders in the context of the image, but actually we need this border of every tile as they are little images that form the complete image. So now you hopefully understand the red lines on the Kiki image.

Speed up

You might be wondering why I am telling you all of this. While going through the code that handles all this tile business I found out that the code that extends each tile by 16 pixels takes up half of the processing time of each tile. This means that when you are drawing, half of the time it spends updating your canvas is spent on extending the tiles a little bit.

Here is my tiny benchmark of updating a full 8000x8000 canvas with and without tile borders:

Time taken updating an 8000x8000 canvas with borders (average ~402ms):

Run  Time on CPU  Time on GPU  Total time
 1   123ms        106.3ms      229.3ms
 2   237ms        208.4ms      445.4ms
 3   140ms        135.7ms      275.7ms
 4   155ms        148.1ms      303.1ms
 5   325ms        256.3ms      581.3ms
 6   279ms        249.7ms      528.7ms
 7   237ms        208.2ms      445.2ms
 8   283ms        267.2ms      550.2ms
 9   225ms        209.0ms      434.0ms
10   122ms        109.8ms      231.8ms

Time taken updating an 8000x8000 canvas without borders (average ~194ms):

Run  Time on CPU  Time on GPU  Total time
 1    55ms         52.8ms      107.8ms
 2    87ms        123.8ms      210.8ms
 3    82ms        125.3ms      207.3ms
 4    46ms        122.8ms      168.8ms
 5   245ms        197.3ms      442.3ms
 6    53ms         45.6ms       98.6ms
 7    46ms        125.1ms      171.1ms
 8    47ms        122.9ms      169.9ms
 9    50ms        122.9ms      172.9ms
10    61ms        124.7ms      185.7ms

I think the current implementation of this extending has a lot of opportunity for optimisation. So the time I have left while waiting for Qt critique I will spend on trying to optimise this border-extension implementation and possibly getting a nice speed-up in painting. I doubt it will be twice as fast, because I am sure there are a lot of other things going on during a paint stroke, but it should at least go some way towards squeezing more performance out of Krita.

29 Jul 2016 8:20pm GMT

Good Web Based programming information for 2016? [help]

Hello, Dear Lazyweb.

It's been quite a while since I last programmed for the web; I think it's been almost 10 years since I did anything serious in that regard. So I want… information: where to look, and what to expect from the current capabilities of the web nowadays.

I started to create a silly and dumb system just for myself, to re-learn how to do proper web-based programming, and the amount of things I need to take into consideration is so huge that I sincerely don't know how people think that web-based programming is easier than C++.

I know that PHP is now at version 7, I also know about Django and Rails, I know the fancy HTML5 hype, I surely do know about SASS and other CSS preprocessors, and about JavaScript on the client and server side, but I actually don't know where to start.

If you have any web programming experience, or a link to good, correct information (a sane guide to web programming in 2016), please share. The internet is currently filled with tons of useless information and a filter is needed.

29 Jul 2016 1:19pm GMT

28 Jul 2016


Static Library Nightmare (and how to fix it)

Everybody knows about DLL hell on Windows, but there's another kind of hell that programmers also face, and the fix is simple, although it takes quite a few searches with the correct keywords on Google and Stack Overflow to find it.

Today I was fighting with Umbrello, using the frameworks-gsoc port to build, test and compile. It was failing miserably for me, but I know that Lays works daily on that code, so she wasn't facing the issues that I was, and I had to dig into the build failures.

Almost a thousand lines of linking errors like this one:

pp.o): In function `clang::driver::toolchains::CrossWindowsToolChain::AddClangSystemIncludeArgs(llvm::opt::ArgList const&, llvm::SmallVector<char const*, 16u>&) const':
(.text._ZNK5clang6driver10toolchains21CrossWindowsToolChain25AddClangSystemIncludeArgsERKN4llvm3opt7ArgListERNS3_11SmallVectorIPKcLj16EEE+0x162): undefined reference to `llvm::opt::ArgList::getLastArg(llvm::opt::OptSpecifier) const'
/usr/lib/gcc/x86_64-pc-linux-gnu/6.1.1/../../../../lib/libclangDriver.a(CrossWindowsToolChain.cpp.o): In function `clang::driver::toolchains::CrossWindowsToolChain::AddClangCXXStdlibIncludeArgs(llvm::opt::ArgList const&, llvm::SmallVector<char const*, 16u>&) const':
(.text._ZNK5clang6driver10toolchains21CrossWindowsToolChain28AddClangCXXStdlibIncludeArgsERKN4llvm3opt7ArgListERNS3_11SmallVectorIPKcLj16EEE+0x3a): undefined reference to `llvm::opt::ArgList::getLastArg(llvm::opt::OptSpecifier) const'

I don't like linking errors, I really don't, but what I absolutely loathe is linking errors with static libraries, mostly because the way the linker looks at them is braindead: you need to know the exact order of the libraries in the linkage step or you get link errors during make. Now imagine that I have almost 50 static libraries (counting the LLVM and Clang libraries), which could mean something like 50! orderings to try before getting it right. I can't do that…

Stack Overflow to the rescue… didn't rescue anything. The answers there suggested using the nm tool to see what a certain library needs and, with that information, adding the correct one to the linking steps. That is too much work, because one library can call for another, which can call for yet another, and I would lose a ton of time doing that. Then I remembered that a long, long time ago, when I worked creating Asterisk channels for a company named Khomp, I used a tool that figured out that part for me, but I couldn't remember its name.

I looked through the GNU utilities: nothing. Nothing from GNU could help me do what I needed. Nerf.

But then it came to me, like a flash, like a vision: lorder + tsort. See, I had spent almost an hour trying to order the libraries manually, but with those nice little tools you can just:

lorder libclang*.a | tsort

and have the correct order in one go. Then it was just a matter of removing the unneeded libraries and adjusting the CMake files, and the test was fixed. <3
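What tsort does with lorder's output is a plain topological sort of "this library needs that one" pairs. The idea can be sketched in Python with the standard library's graphlib (illustrative only, with made-up dependency pairs):

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies, as lorder would report them:
# each key needs symbols from the libraries in its set.
deps = {
    "libclangDriver.a": {"libLLVMOption.a", "libLLVMSupport.a"},
    "libLLVMOption.a": {"libLLVMSupport.a"},
    "libLLVMSupport.a": set(),
}

# static_order() yields dependencies before their dependents; a
# single-pass linker wants dependents first, so reverse it to get
# the order for the link command line.
order = list(TopologicalSorter(deps).static_order())
print(list(reversed(order)))
```

This is exactly why lorder | tsort beats trying the 50! orderings by hand: the pairwise dependencies pin down a valid order directly.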

28 Jul 2016 4:41pm GMT

27 Jul 2016


GSoC Update: Tinkering with KIO

I'm a lot closer to finishing the project now. Thanks to some great support from my GSoC mentor, my project has turned out better than what I had written about in my proposal! Working together, we've made a lot of changes to the project.

For starters, we've changed the name of the ioslave from "File Tray" to "staging" to "stash". I wasn't a big fan of the name change, but I see the utility in shaving off a couple of characters in the name of what I hope will be a widely used feature.

Secondly, the ioslave is now completely independent from Dolphin, or any KIO application for that matter. This means it works exactly the same way across the entire suite of KIO apps. Given that at one point we were planning to make the ioslave fully functional only with Dolphin, this is a major plus point for the project.

Next, the backend for storing stashed files and folders has undergone a complete overhaul. The first iteration of the project stored files and folders by saving the URLs of stashed items in a QList in a custom "stash" daemon running on top of kded5. Although this was a neat little solution which worked well for most intents and purposes, it had some disadvantages. For one, you couldn't delete and move files around on the ioslave without affecting the source because they were all linked to their original directories. Moreover, with the way 'mkdir' works in KIO, this solution would never work without each application being specially configured to use the ioslave which would entail a lot of groundwork laying out QDBus calls to the stash daemon. With these problems looming large, somewhere around the midterm evaluation week, I got a message from my mentor about ramping up the project using a "StashFileSystem", a virtual file system in Qt that he had written just for this project.

The virtual file system is a clever way to approach this - as it solved both of the problems with the previous approach right off the bat - mkdir could be mapped to virtual directory and now making volatile edits to folders is possible without touching the source directory. It did have its drawbacks too - as it needed to stage every file in the source directory, it would require a lot more memory than the previous approach. Plus, it would still be at the whims of kded5 if a contained process went bad and crashed the daemon.

Nevertheless, the benefits in this case far outweighed the potential cons and I got to implementing it in my ioslave and stash daemon. Using this virtual file system also meant remapping all the SlaveBase functions to corresponding calls to the stash daemon which was a complete rewrite of my code. For instance, my GitHub log for the week of implementing the virtual file system showed a sombre 449++/419--. This isn't to say it wasn't productive though - to my surprise the virtual file system actually worked better than I hoped it would! Memory utilisation is low at a nominal ~300 bytes per stashed file and the performance in my manual testing has been looking pretty good.

With the ioslave and other modules of the application largely completed, the current phase of the project involves integrating the feature neatly with Dolphin and writing a couple of unit tests along the way. I'm looking forward to a good finish with this project.

You can find the source for it here: https://github.com/KDE/kio-stash (did I mention it's now hosted on a KDE repo? ;) )

27 Jul 2016 7:00pm GMT

Self Compiled KF5 / Plasma / Apps

David Faure posted a while ago his take on having all of Qt + the KDE libraries + Plasma self-compiled on his blog here, but I thought it was missing something, even considering the wiki that he pointed to in his post. And oh boy, I really think he's right: we should all compile all of the KDE applications. It really doesn't take that long (around 8h with -j3 on my old i5), so I prepared my machine: I removed *all* KDE-related packages from my distro, and started.

Then I found that something was missing, and the missing part was "How do I actually log in using the hand-compiled Plasma?". Using the information from the wiki I could start applications from the terminal, but that wasn't what I wanted. I know a bit about login managers, but dfaure uses an .xinitrc-based solution, and I want to use a modern DM, like SDDM.

I wanted to actually use my display manager of choice to log in to my compiled Plasma, so if you want to join the fun, please first follow David's intro and the wiki, then come back here.

First I installed Razor-qt so I had something working, copied /usr/share/xsessions/razorqt.desktop to plasma.desktop, and edited plasma.desktop to call my $KF5/bin/startkde. This of course failed, because I hadn't set any of the environment variables needed for Plasma and the KDE applications, so I got a bit of flickering and then SDDM was back again. So I created a file named 'start-startkde5' that set all of the variables and then called startkde5, and changed plasma.desktop to point to it. I didn't set the variables in the startkde5 file itself because it would be overwritten on every new compilation, and I didn't want that.

With my start-startkde5 things got better: I could see the Plasma greeting, but then all I got was a black screen. I cursed.

Something was wrong, but I was setting the correct variables; I double-checked that. So I installed my distro's Plasma packages again, just to compare my plasma-build.desktop with the distro's plasma.desktop, and after a few changes to the encoding, type and DesktopNames entries, things worked. This is my current plasma-build.desktop, in case anyone tried to do this in the past and didn't find the solution.

[Desktop Entry]
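Since the rest of the file didn't make it into this post, here is a minimal sketch of what such a plasma-build.desktop can look like. The Exec/TryExec paths and the Name are placeholders of mine; adjust them to your own install prefix:

```ini
[Desktop Entry]
Encoding=UTF-8
Type=XSession
Exec=/home/you/kf5/start-startkde5
TryExec=/home/you/kf5/start-startkde5
DesktopNames=KDE
Name=Plasma (self-compiled)
Comment=Self-compiled Plasma session
```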

Now, the only things that didn't work out of the box are kglobalaccel5, which I need to start manually, and KWallet.

27 Jul 2016 6:09pm GMT

20 Years of KDE, 10 Years for me and KDE’s influence on my Life

As you probably know, KDE is preparing to celebrate its 20th anniversary in October 2016. See how it all started: https://www.kde.org/announcements/announcement.php

Just recently I realised that I started contributing 10 years ago. Coming from fvwm2, I had just started using KDE shortly before. Contributing started for me with the German translation of an amaroK 1.4 release announcement that had … room for improvement (yes, Amarok was amaroK back then :)). I made some suggestions, the then-coordinator of the translation team asked for more, and I delivered.

Two years later I started contributing to KDEGames a bit, mainly in KShisen to get some practice in software development.

This has always been a leisure time activity for me and something that gave me a feeling of achievement when the "real world" did not have that ready for me.

Over the years there have been times where I did not find the time to contribute. Then, my only connection to KDE was me using it and my monthly donation. Eventually, though, I always found my way back to translating KDE and to KDEGames, two projects that are unfortunately constantly under-staffed.

But now, during the last few weeks KDE left a bigger imprint on my life.
My new job opportunity came from my KDE involvement (having connections is half the way) and the almost impossible undertaking of finding a flat in the city suddenly became easy for me when one of the landlords (a theoretical physicist) googled my name and found KDE contributions within the search results. Kind of awesome, isn't it?

So thank you KDE, not only for a great desktop, a nice community and the opportunity to learn about technology and (through communicating with other contributors) about other cultures but also for my new job and my new flat.

27 Jul 2016 3:10pm GMT

OpenStack Summit Barcelona: Vote for Presentations

The next OpenStack Summit takes place in Barcelona (Spain) from 25 to 28 October 2016. The "Vote for Presentations" period started on 26.07.2016, and all proposals are now up for community votes. The period will end August 8th at 11:59pm PDT (August 9th at 8:59am CEST).

This time I have submitted two proposals:
The voting process changed slightly this period: based on community feedback, unique URLs to proposals are no longer available. So if you would like to vote for these talks, you have to search for them (e.g. use the title from above or search for "Al-Gaaf"). As always, every vote is highly welcome. I also recommend searching for "Ceph" or whatever topic you are interested in. You can find the voting page here with all proposals and abstracts. I'm looking forward to seeing if and which of these talks will be selected.

27 Jul 2016 12:26pm GMT

Neon Updates – KDE Network, KDE Applications

Not a great week for Neon last week. The server we used for building packages filled up, limiting the work we could do, and then a patch from Plasma broke some people's startup, leaving them facing a dreaded black screen. Apologies, folks.

But then, magically, we got an upgrade to the server with lots of nice new disk space, and the problem patch was reverted, so hopefully anyone affected was able to upgrade again and recover.

So I added some KDE Network bits and rebuilt the live/installable ISO images, so they're all updated to Applications 16.04.3 in the User Edition. Applications also branched, so the Dev Edition Stable Branches now use the 16.08 beta branches and you can try out lots of updated apps. And because the developer made a special release just for us and wears cute bunny ears, I added Konversation to our builds for good old-fashioned IRC chit-chat (none of your modern Slacky/Telegram/Web 2.0 protocols here).


27 Jul 2016 10:37am GMT

25 Jul 2016


Plasma 5.7.2, Qt 5.7.0, Applications 16.04.3 and Frameworks 5.24.0 available in Chakra

This announcement is also available in Italian, Spanish and Taiwanese Mandarin.

The latest updates for KDE's Plasma, Applications and Frameworks series are now available to all Chakra users, all of which have been built against the brand new Qt 5.7.0.

The Plasma 5.7.2 release provides additional bugfixes to the many new features and changes that were introduced in 5.7.0 aimed at enhancing users' experience:

Applications 16.04.3 includes more than 20 recorded bugfixes and improvements to Ark, Cantor, Kate, KDEPim and Umbrello, among others.

Frameworks 5.24.0 includes bugfixes and improvements to the Breeze icons, Plasma Framework, KIO and KTextEditor, among others.

Other notable package upgrades:

pulseaudio 9.0
ccr 4.0.4
sdl2 2.0.4
libinput 1.3.1
rust 1.10.0
samba 4.3.11

calibre 2.63.0
qmmp 1.1.1
qtcreator 4.0.3

wine 1.9.15

It should be safe to answer yes to any replacement question by Pacman. If in doubt or if you face another issue in relation to this update, please ask or report it on the related forum section.

Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror synchronized with our main server after this announcement.

25 Jul 2016 10:16pm GMT

#30: GSoC with KDE Now – 8

Hey ! I'm making KDE Now, an application for the Plasma Desktop. It would help the user see important stuff from his email, on a plasmoid. It's similar to what Google Now does on Android. To know more, click here.

Time sure passes fast when you are enjoying something. And so, another week has passed and I'm back with another status update. Things look very, very good from here on. At least, unless we decide to change something drastic that I hadn't proposed earlier 😄.

Last week was rough: I was struggling with a viral fever and a throat infection for the better part of the week, plus I had some other things to attend to. Nonetheless, I managed to devote the needed time to my project: I created the UI of the Flight and Restaurant cards. They look good and work as expected (the dynamic loading of things and all).

Here are the obligatory screenshots

dark1 dark2 light1 light2

Other than that, I'm afraid I don't have much to talk about. I still have not made the database fix I talked about in my earlier post, but I'll do it next. If you look at it, in a way I have managed to successfully deliver everything I had planned initially in my proposal :). This was a real confidence boost.

For the next few weeks, I plan to work on some tidbits I had left earlier (like the Database thingy I mentioned). I also might refine the UI. Plus, I need to figure out a convenient way to make the plasmoid available for the end user in terms of his email credentials. I might look into KWallet, but my mentor and I still have to talk about it.

Ending it with a request: If you are a graphics designer and have experience with making vector graphic images and want to help me out, feel free to contact me. Just comment your email id below (it's private and I moderate comments :)) and I'll contact you. I could really use your help.

Till next time


25 Jul 2016 6:25pm GMT

Update on my work at GCompris

Two months into GSoC, I must say that this summer is the best in my life. I have learned a great deal of programming, went jogging every day and had fun overall in my vacation.

If you followed my blog posts, you already know that before the actual start of the GSoC coding period, to better familiarize myself with the programming languages GCompris uses and also to prove my commitment and dedication, I contributed to GCompris by solving various bugs and developing two activities. "PhotoHunter" is an activity ported from the previous GTK version of GCompris. Its goal is finding differences between two similar images.

The second activity is "Share", an authentic creation of mine. In this activity, the user is presented with a problem about a child who has some candies and wants to equally share them with his friends. The user has to drag the exact amount of candies from the left panel to the child friends' areas on the main screen.

Now for the GSoC part of my work, I am happy to announce that "Crane" is already merged, and "Alphabetical Order" is coming fast from behind - everything is finished, but it needs some more testing.


In this post I will just mention the updates brought to Crane. For a more detailed presentation on the functionality of Crane, please check my previous posts.

To add a pedagogical side to this activity, I decided to use as starter levels some easy words built from letter images connected to one another. The user's goal is to reproduce the model, and by doing so he automatically learns new words.


To better teach the association between the two models, the levels are carefully designed to gradually increase the difficulty: for the first levels, the two boards are at the same height and the grid is displayed, then for some levels, the grid is removed, for others, the height of the two boards is changed, and at the end, both changes are made: the grid is removed and the heights differ.

Click to view slideshow.

At last, there are levels with small images that the user must match. As the levels increase, there are more and more images on the board, making it more difficult to match.


Alphabetical Order:


The second activity on my list for Google Summer of Code was "Alphabetical Order" - an educational game in which the user learns the alphabet by practice. Its goal is to arrange some letters in their alphabetical order: on top of the screen is the main area, where some letters are displayed from which some are missing. The user has to drag the missing letters from the bottom part of the screen to their right place in the top area.

As the difficulty increases, the levels become more and more complicated:


Pressing any letter triggers a sound and the letter is read aloud, so children learn how it sounds as well. In the configuration panel, these sounds can be turned on or off with one button. The language used by the activity can be changed in the same configuration panel.


Here we can find two more buttons: "Easy mode" and "Ok button".

In "Easy mode", when the user drags a letter to its place, if the answer is correct, a sparkle is triggered. If the answer is wrong, the letter will turn red for a few seconds.

If the "Ok button" is activated, the level is passed only when the user presses the "Ok button". If the answer is correct, a sparkling effect appears on all the letters; otherwise, the wrong letters glow red.

If the user's answers are mostly correct for the first level, an adaptive algorithm will increase the difficulty and as he passes more and more levels, the algorithm will dictate the difficulty of the game. On the other hand, if his answers are mostly incorrect, the difficulty is lowered so he can better learn the alphabet with easier levels.
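The adaptive rule described above can be sketched in a few lines (a hypothetical illustration in Python, not GCompris's actual code; the 70% accuracy threshold is an assumption):

```python
# Hypothetical sketch of the adaptive-difficulty rule: raise the level
# when most answers are correct, lower it otherwise.
# The 0.7 accuracy threshold is an assumption, not GCompris's real value.
def next_level(level, correct, total, max_level, threshold=0.7):
    accuracy = correct / total if total else 0.0
    if accuracy >= threshold:
        return min(level + 1, max_level)   # mostly correct: harder levels
    return max(level - 1, 1)               # mostly wrong: easier levels

print(next_level(3, 9, 10, max_level=10))  # 9/10 correct -> level 4
print(next_level(3, 2, 10, max_level=10))  # 2/10 correct -> level 2
```

The same check runs after every level, so the difficulty keeps tracking the child's actual performance.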


After finishing "Crane" and "Alphabetical Order", I went back to "PhotoHunter" and "Share": for the first one, I added a new feature - a Help Button. Pressing it once will move the two pictures to the center, one on top of the other. A sliding bar will appear, and as the user drags it to the right, the two images will combine and reveal the differences. In this "Help Mode", the user cannot press on the differences; he has to press the "Help Button" to exit the "Help Mode" so the images return to their initial positions, and then press on the difference again in order for it to be counted.


This is how the slider bar works:


A portrait view:




For "Share", I added a new type of levels and a new feature: "Easy mode". In "Easy mode", the user can only use the maximum number of candies given in the problem. If he gives more candies to one friend on the board, he won't have enough left for the others. On the other hand, if "Easy mode" is deactivated, the user can drag more candies than each friend on the board needs. This addition forces the user to solve the given problem and find its answer instead of guessing it by dragging the candies from one child to another.





The new levels consist of placing candies in some friends' areas before the user starts playing. This feature makes him take into consideration the candies already added to the board and compute again to find the new solution.

I am currently working on porting TuxPaint, a paint activity in which children can have fun drawing lines, rectangles and circles or free-drawing their own creation. The next post will mainly cover the development of TuxPaint.

25 Jul 2016 1:24pm GMT

Interview with Liz de Souza


Could you tell us something about yourself?

Hi! I'm 32 years old, Brazilian, I'm a full-time wife and mother, and also an illustrator.

Do you paint professionally, as a hobby artist, or both?

Both, but after having children I paint mostly professionally. I still have sketchbooks to carry in my backpack when I have to go to the doctor or do something where I'll stay waiting - while I wait, I draw. At home, I honor my daughters' requests for specific drawings or drawing lessons.

What genre(s) do you work in?

I'm working mostly in portraits, character designs, concept art and illustration. People call me especially for portraits/illustrations for wedding invitations and family drawings. Another favorite genre of mine is illustrating Catholic themes. My faith is always portrayed in my personal works.


Whose work inspires you most - who are your role models as an artist?

I admire artists from all periods of history. I love Giotto, Fra Angelico, Michelangelo, Leonardo da Vinci, Caravaggio, Renoir, Ingres, Monet… Great masters always inspire me.

I really have as role models the Eastern Orthodox iconographers. The Eastern icons are so full of meaning and an inexplicable beauty.

And I have lots of artists I admire that work with digital painting. Some of them I follow for the technique, others because of the use of colors, others because of the way they illustrate abstract concepts… But I can list some: Yuko Shimizu, Lois Van Baarle (Loish), Charlie Bowater, Vicktoria Ridzel (Viria), Cyarine, Bobby Chiu, David Revoy, well… and hundreds more. My favs list is huge 🙂

As a Brazilian, my role model in my country is the artist Maurício de Sousa. He has been an inspiration for me since I was a toddler. I love comics. Oh, I love Will Eisner also. Well… I love lots of artists. Did I mention I like manga too?


How and when did you get to try digital painting for the first time?

When I was in my 3rd or 4th semester of College (2001), I had a class called "Electronic Art" (yes, creepy name). I bought my first tablet, a huge Genius model I can't even remember the name of. I did works scanning my lineart and coloring with Photoshop 5.

After that, I only really got into digital painting - started to research and really practice - two years after I finished college, in 2007, when I bought a Genius Wizard Pen and tried hard to make this thing happen. Bobby Chiu was a great mentor and friend that year. I heard all his podcasts and drew a lot when I was not at my job.

In 2009 I got married and had the opportunity to leave the job to dedicate my professional efforts to what I love: illustrating and digital painting. It took some years to get somewhere, but I can tell that having children
helped so much to make my brain work quickly to learn new techniques and improve, since I never have too much spare time.

In 2013 I started using social media to post free drawings, and got a new tablet, a Wacom Intuos. After that, I've always had commissions, thank God!

What makes you choose digital over traditional painting?

I work mostly with digital media because it's easy to correct problems, the client can ask for changes without making me do everything from scratch again, and because I've never actually learned how to paint with real paint (even if I tried hard in College - with no result).


I like the opportunity digital painting gives me to share my work and get commissions from anywhere. I've done commissions for the USA and Germany and had lots of feedback about my free drawings from several countries!

I still love traditional drawing, especially black and white drawings with pen, brush and India ink.

How did you find out about Krita?

My husband and I started using only Linux on our computers when we got married and I installed all paint programs I had available to test and find something that was close or better than Photoshop. I used GIMP for a couple of years, but more or less in 2012 I found Krita at the Ubuntu Software Center and tried it. And liked it. And never left it.

What was your first impression?

Krita seemed to me very similar to Photoshop. It took several months to get used to it. At that time it had many bugs that shut down the app without warning, which annoyed me a lot. But after I changed my OS from Ubuntu to Kubuntu, it worked a lot better for me.

What do you love about Krita?

It's great software, and I love that such a great project has been made free software (of course there are paid versions, but the free one is the most popular). All functionalities and features are fantastic and work so well for the digital artist. But what I admire the most is the fact that the team is so available to answer questions, and work so hard to make Krita better and better. My husband is a software engineer and I know how much work it is to build a program, how much time you spend on it, how many nights you lose due to the project deadline, and how great it is to hear the feedback from people who use your app and help you to make it better. If all human beings had this inner good will, so many good things would happen in the world. God bless the Krita Foundation.


What do you think needs improvement in Krita? Is there anything that really annoys you?

Well… Actually there are some issues about importing brushes I would like to see addressed (like importing MyPaint brushes and PS .TPL brushes), but I believe I should only thank the team for all the hard work, and try to help them with the bugs so Krita becomes the great software for digital painting. Sometimes I ask if the team plans to implement this or that feature, but when they answer with the expression "reverse engineering" I get goosebumps. I know what it is and how hard it is. I saw my husband doing that once. It was a nightmare. So, I feel that my duty is to be thankful to them and do something to help the Krita Foundation (like the Krita training in Portuguese I'm doing right now).

What sets Krita apart from the other tools that you use?

It is high-quality open-source software. Runs in my dear Kubuntu. That's happiness for me. I have other tools installed in my OS, such as GIMP and MyPaint. But Krita does everything, it has all the features a professional digital artist needs. I still like MyPaint, but only for sketches.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I'm always drawing something new, and the newest drawings are always the best ones, because we learn something every day. I don't know if I can pick only one drawing! Mmmmm… maybe the portrait I made for the family of a dear friend in 2015, the first real digital painting I made, is still my favourite. The result was awesome, I was so proud of myself when I finished it and my friend loved it. Happiness everywhere.


What techniques and brushes did you use in it?

I used one of the three techniques I'm used to: doing a layer with a very realistic drawing, painting a basic color one layer below, and then completing the painting with a third layer above it, making all lines disappear. It gives the look of a real portrait, and people love it. I also use two other techniques for painting below and using lots of layers, but those are for simple drawings, usually cartoon portraits, memes and T-shirt illustration. In fact, I love trying other people's techniques; there are always many ways to solve digital painting problems. I love learning with other artists!

Where can people see more of your work?

My commissions and best drawings (and very old stuff) are in my DeviantArt - artelizdesouza.deviantart.com
Best quality memes and other drawings you can see in my Tumblr - lizdesouza.tumblr.com
News, updates, memes, and thoughts at my Facebook - facebook.com/lizdesouza
Random photos, paper sketches and sometimes memes at Instagram - https://instagram.com/artelizdesouza/
Recently I've joined twitter, but I use it to talk with the developers of Krita and MyPaint
- https://twitter.com/artelizdesouza
My Youtube channel has some speedpaints - https://www.youtube.com/channel/UC1MVazT8tdIV0t6LNnD8Kcw
I have a Patreon page, which isn't really active yet but people can follow me to receive news when I make it happen - https://www.patreon.com/lizdesouza

Anything else you'd like to share?

I'm right now doing a Krita training in Brazilian Portuguese, and it has taken a lot of the little spare time I have. But if English, Spanish or other Portuguese speakers want access to it, I'm doing a closed class, where I post the videos and share knowledge with the students. For more information (and to join my crowdfunding to feed my family while I work), send me an email at artelizdesouza@gmail.com - you can write in Portuguese, English or Spanish, and I'll answer. And with the help of my husband I can answer German and Italian speakers too.

I also take commissions - to ask for it and get information, email me: artelizdesouza@gmail.com

And… Thank you Krita Foundation! God bless you all!

25 Jul 2016 8:00am GMT

24 Jul 2016

feedPlanet KDE

We’ve come a long way from where we began!

"Measuring programming progress by lines of code is like measuring aircraft building progress by weight."― Bill Gates

After working for several weeks on our WikiRating Google Summer of Code project, Davide, Alessandro and I have slowly reached the level where we can now visualize the entire project in its final stages. It has been quite a while since we wrote something, so after seeing this:


We knew it was the time that we summarize what we were busy doing in these 50 commits.

I hope you have already understood our vision by reading the previous blog post. So after a fair amount of planning it was time to start coding the actual engine, or more importantly, start working on the ❤ of the project. This was the time when my brain was buzzing with numerous design patterns and coding paradigms, and I was finding it a bit overwhelming. I knew it was a difficult phase, but I needed some force (read: perseverance) to get over it. I planned, re-planned, but eventually, with the soothing advice of my mentors - Don't worry about the minor intricacies now, focus on the easiest thing first -
I began to code!

How stuff works

I understand that there are numerous things to talk about, and it's easy to lose track of the main theme; therefore we are going to tour the engine as it actually functions - that is, we will see what happens under the hood as we run the engine. Let me make it easier for you, have a look at this:

The main method for the engine run

You can see there are some methods and some parent classes involved in the main run of the engine; let's inspect them.

Fetching data (online):

The initial step is to create all the classes for the database to store data. After this we fetch the required data, like Pages, Users and Revisions, via the queries listed here.

{
    "batchcomplete": "",
    "limits": {
        "allpages": 500
    },
    "query": {
        "allpages": [
            {
                "pageid": 148,
                "ns": 0,
                "title": "About WikiToLearn"
            },
            {
                "pageid": 638,
                "ns": 0,
                "title": "An Introduction to Number Theory"
            },
            {
                "pageid": 835,
                "ns": 0,
                "title": "An Introduction to Number Theory/Chebyshev"
            },
            {
                "pageid": 649,
                "ns": 0,
                "title": "An Introduction to Number Theory/Primality Tests"
            },
            {
                "pageid": 646,
                "ns": 0,
                "title": "An Introduction to Number Theory/What are prime numbers and how many are there?"
            }
        ]
    }
}

This is a typical response from the Web API, giving us info about the pages on the platform.
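Pulling the page list out of such a response is straightforward. A minimal Python sketch for illustration (not the engine's actual code), using a trimmed copy of the response above:

```python
# Extract (pageid, title) pairs from an "allpages" API response like the
# one shown above (sample trimmed to two pages for brevity).
def page_list(response):
    return [(p["pageid"], p["title"]) for p in response["query"]["allpages"]]

sample = {
    "batchcomplete": "",
    "limits": {"allpages": 500},
    "query": {"allpages": [
        {"pageid": 148, "ns": 0, "title": "About WikiToLearn"},
        {"pageid": 638, "ns": 0, "title": "An Introduction to Number Theory"},
    ]},
}
print(page_list(sample))
# [(148, 'About WikiToLearn'), (638, 'An Introduction to Number Theory')]
```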

Similarly we fetch all the other components (Users and Revisions) and simultaneously store them too.

Code showing construction of Page Nodes

After fetching the data for Pages and Users, we work towards linking the user contributions with their corresponding contributors. Here we make edges from the user nodes to the respective revisions of the pages. These edges also contain useful information, like the size of the contribution.

We also need to work on linking the pages with other pages via Backlinks for calculating the PageRank (We will discuss these concepts after a short while).

Once we have all the data via the online API calls, we move toward our next pursuit: offline computation on the fetched data.


Since this feature is new to the WikiToLearn platform, there were no initial user votes on any of the page versions, so we wrote a class to select random users and make them vote for various pages. Later we will write a MediaWiki extension to gather actual votes from the users, but for now we have sample data to perform further computations.

So after generating votes we need to calculate various parameters, like User Credibility, Ratings, PageRank and Badges (Platinum, Gold, Silver, Bronze, Stone). The calculation of the credibility and ratings is listed here. But badges and PageRank are new concepts.


We will be displaying various badges based on a percentile analysis of the page ratings. That is, we lay down thresholds for the various badges - say the top 10% for the platinum badge - then filter out the top 10% of pages on the basis of their page rating and assign them the suitable badge. The badges will give readers an immediate visual sense of the quality of the pages.
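As a rough illustration, such a percentile cut could look like this (a hypothetical sketch; the thresholds and page ratings below are made up, not the engine's actual values):

```python
# Assign badges by percentile rank of the page rating.
# The cutoffs (top 10% Platinum, top 25% Gold, ...) are illustrative only.
def assign_badges(ratings):
    """ratings: {page_id: rating}. Returns {page_id: badge}."""
    thresholds = [(0.10, "Platinum"), (0.25, "Gold"),
                  (0.50, "Silver"), (0.75, "Bronze"), (1.00, "Stone")]
    ranked = sorted(ratings, key=ratings.get, reverse=True)
    n = len(ranked)
    badges = {}
    for rank, page in enumerate(ranked, start=1):
        percentile = rank / n            # fraction of pages at or above this page
        for cutoff, badge in thresholds:
            if percentile <= cutoff:
                badges[page] = badge
                break
    return badges

print(assign_badges({"A": 9.1, "B": 7.4, "C": 6.0, "D": 2.2}))
# {'A': 'Gold', 'B': 'Silver', 'C': 'Bronze', 'D': 'Stone'}
```

With only four pages, no page makes the top-10% cut, so the highest-rated page lands in Gold rather than Platinum.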

Two other very important concepts are PageRank and backlinks; let's talk about them too.

PageRank & Backlinks:

Let's consider a scenario:

Page interconnections

There are 5 pages in the system; the arrows denote hyperlinks from one page to another, and these are called backlinks. Whenever an author decides to cite another user's work, a backlink is formed from that page to the other. It's easy to see that the more backlinks a page has, the more reliable it becomes (since each time authors decide to link someone else's work, they actually think it is of good quality).

So the current graph :

Page 0 : 4, 3, 2
Page 1 : 0 ,4
Page 2 : 4
Page 3 : 4, 2
Page 4 : 3

Here we have connections like Page 0 is pointed by 3 pages 4,3,2 and so on.

Now we calculate a base rating for all the pages with respect to the page having the maximum number of backlinks. We see that Page 0 has the maximum number of backlinks (3). Then we divide the backlink count of every other page by this maximum. This gives us the importance of pages based on their backlinks.

We used this equation:
Base Weight = (number of backlinks) / (maximum backlinks)

So Base Weight of Page 0 = (1+1+1)/3=1

Base weights: 1, 0.666667, 0.333333, 0.666667, 0.333333 (for Page 0, Page 1, and so on)

There is a slight problem here:
Let's assume that we have 3 pages A, B and C. A has more backlinks than B, but according to the above computation, a link from A to C counts the same as a link from B to C. It shouldn't: Page A's link carries more importance than Page B's link, because A has more backlinks than B. Therefore we need a way to make our computation reflect this.

We can actually fix this problem by running the computation one more time, but now, instead of counting 1.0 for an incoming link, we take the source page's Base Weight, so the more important pages automatically contribute more. So the refined weights are:

Revised Base Weight of Page 0 = (0.333333 + 0.666667 + 0.333333)/3 = 0.444444

Page weights: 0.444444, 0.444444, 0.111111, 0.222222, 0.222222

So we see that the anomaly is resolved :)
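The whole two-pass computation above can be reproduced in a few lines (a Python sketch of the idea for the 5-page example, not the engine's actual code):

```python
# backlinks[p] lists the pages that link TO page p, mirroring the
# 5-page example in the post.
backlinks = {
    0: [4, 3, 2],
    1: [0, 4],
    2: [4],
    3: [4, 2],
    4: [3],
}

max_backlinks = max(len(srcs) for srcs in backlinks.values())  # 3, from Page 0

# Pass 1: every incoming link counts as 1.0.
base = {p: len(srcs) / max_backlinks for p, srcs in backlinks.items()}

# Pass 2: an incoming link now contributes the source page's base weight,
# so links from well-linked pages automatically count for more.
revised = {p: sum(base[s] for s in srcs) / max_backlinks
           for p, srcs in backlinks.items()}

print([round(base[p], 6) for p in sorted(base)])
# [1.0, 0.666667, 0.333333, 0.666667, 0.333333]
print([round(revised[p], 6) for p in sorted(revised)])
# [0.444444, 0.444444, 0.111111, 0.222222, 0.222222]
```

Note how the second pass splits the tie between Page 1 and Page 3: both have two backlinks, but Page 1 is linked by the well-connected Page 0, so it ends up with the higher revised weight.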

This completes our engine analysis. And finally our graph in OrientDB looks like this:

sample graph

Right now I am developing an extension for the user interaction of the engine and will return soon with the latest updates. Till then stay tuned😀

24 Jul 2016 6:43pm GMT

23 Jul 2016

feedPlanet KDE

LabPlot 2.3.0 released

Less than four months after the last release, and after a lot of activity in our repository during this time, we're happy to announce the next release of LabPlot with a lot of new features. So, be prepared for a long post.

As already announced a couple of days ago, starting with this release we provide installers for Windows (32bit and 64bit) in the download section of our homepage. The Windows version is not as well tested and maybe not as mature as the Linux version yet, and we'll spend more time in the future improving it. Any feedback from Windows users is highly welcome here!

With this release we make the next step towards providing a powerful and user-friendly environment for data analysis and visualization. Last summer Garvit Khatri worked during GSoC2015 on the integration of Cantor, a frontend for different open-source computer algebra systems (CAS). Now the user can perform calculations in his favorite (open-source) CAS directly in LabPlot, provided the corresponding CAS is installed, and do the final creation, layouting and editing of plots and curves, and the navigation in the data (zooming, shifting, scaling), in the usual LabPlot way within the same environment. LabPlot recognizes different CAS variables holding array-like data and allows selecting them as the source for curves. So, instead of providing columns of a spreadsheet as the source for x- and y-data, the user provides the names of the corresponding CAS variables.

Currently supported CAS data containers are Maxima lists and Python lists, tuples and NumPy arrays. The support for R and Octave vectors will follow in one of the next releases.

Let's demonstrate the power of this combination with the help of three simple examples. In the first example we use Maxima to generate commonly used signal forms - square, triangle, sawtooth and rectified sine waves ("imperfect waves" because of the finite truncation used in the definitions):

Maxima Example

In the second example we solve the differential equation of the forced Duffing oscillator, again with Maxima, and plot the trajectory, the phase space of the oscillator and the corresponding Poincaré map with LabPlot to study the chaotic dynamics of the oscillator:

Maxima Example

Python, in combination with NumPy, SciPy, SymPy, etc., has become a serious alternative in the scientific community to many other established commercial and open-source computer algebra systems. Thanks to the integration of Cantor, we can do the computation in the Python environment directly in LabPlot. In the example below we generate a signal, compute its Fourier transform and illustrate the effect of Blackman windowing on the Fourier transform. Contrary to this example, only the data is generated in Python; the plots are done in LabPlot.

FFT with Python

In this release we greatly increased the number of analysis features.

A Fourier transform of the input data can now be carried out in LabPlot. There are 15 different window functions implemented, and the user can decide which relevant value to calculate and plot (amplitude, magnitude, phase, etc.). Similarly to the last example above carried out in Python, the screenshot below demonstrates the effect of three window functions, where the calculation of the Fourier transform was now done directly in LabPlot:

FFT with LabPlot

For basic signal processing LabPlot provides a Fourier filter (a linear filter in the frequency domain). To remove unwanted frequencies in the input data, such as noise or interfering signals, low-pass, high-pass, band-pass and band-reject filters of different types (Butterworth, Chebyshev I+II, Legendre, Bessel-Thomson) are available. The example below, inspired by this tutorial, shows the signal for "SOS" in Morse code superimposed by white noise across a wide range of frequencies. The Fourier transform reveals a strong contribution at the actual signal frequency. A narrow band-pass filter positioned around this frequency helps to make the SOS signal clearly visible:

Fourier Filter Example
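The frequency-domain principle behind such a filter can be sketched with NumPy (an illustration of the idea only, using an ideal "brick-wall" band-pass rather than the Butterworth or Chebyshev designs LabPlot actually offers):

```python
import numpy as np

# Ideal band-pass in the frequency domain: transform, zero the bins
# outside [f_lo, f_hi], transform back.
def fourier_bandpass(signal, dt, f_lo, f_hi):
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), dt)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# A 5 Hz tone with out-of-band 50 Hz interference; a 2-10 Hz band-pass
# recovers the tone (1 kHz sampling, 1 s window).
t = np.arange(0, 1, 1 / 1000)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)
clean = fourier_bandpass(x, 1 / 1000, 2, 10)
```

Because both tones sit exactly on FFT bins here, the recovery is essentially perfect; real designs like Butterworth trade that sharpness for better behavior on non-periodic data.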

Another technique (actually a "reformulation" of low-pass filtering) to remove unwanted noise from the true signal is smoothing. LabPlot provides three methods to smooth the data - moving average, Savitzky-Golay and percentile filter. The behavior of these algorithms can be controlled by additional parameters like weighting, padding mode and polynomial order (for the Savitzky-Golay method only).

Smoothing Example
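Of the three smoothing methods, the moving average is the simplest to picture (a minimal sketch of the idea with naive edge handling; not LabPlot's actual code):

```python
# Centered moving average: each point becomes the mean of its window.
# Windows are simply truncated at the edges - one of several possible
# padding choices.
def moving_average(data, window=3):
    half = window // 2
    out = []
    for i in range(len(data)):
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        out.append(sum(data[lo:hi]) / (hi - lo))
    return out

# The spike at 9 is flattened toward its neighbours.
print(moving_average([1, 2, 9, 2, 1], window=3))
```

Savitzky-Golay works the same way over a sliding window, but fits a low-order polynomial inside it instead of taking a plain mean, which preserves peaks better.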

To interpolate the data LabPlot provides several types of interpolation methods (linear, polynomial, splines of different types, piecewise cubic Hermite polynomials, etc.). To simplify the workflow for many different use-cases, the user can select what to evaluate and plot at the interpolation points - function, first derivative, second derivative or the integral. The number of interpolation points can be determined automatically (5 times the number of points in the input data) or provided by the user explicitly.

More new analysis features and the extension of the already available feature set will come in the next releases.

A couple of smaller features and improvements were added. The calculation of many statistical quantities was implemented for columns and rows in LabPlot's data containers (spreadsheet and matrix):

Column Statistics

Furthermore, the content of the data containers can be exported to LaTeX tables. The appearance of the final LaTeX output can be controlled via several options in the export dialog.

LaTeX Export

To further improve the usability of the application, we implemented filter and search capabilities in the drop-down box for the selection of data sources. In projects with a large number of data sets it's much easier and faster now to find and use the proper data set for the curves in plots.

A new small widget for taking notes was implemented. With this, the user's comments and notes on the current activities in the project can be stored in the same project file:

Notes Example

To perform better on a large number of data points, we implemented double-buffering for curves. Currently, applying this technique in our code slightly worsens the quality of the plotted curves. We decided to introduce a configuration parameter to control this behavior at run-time. By default, double buffering is used and the user benefits from the much better performance. Users who need the best quality can switch off this parameter in the application settings dialog. We'll fix this problem in the future.

The second performance improvement coming in version 2.3.0 is the much faster generation of random values in the spreadsheet.

There are still many features in our development pipeline, a couple of them already being worked on. Apart from this, this summer we again get contributions from three Google Summer of Code students, working on support for FITS, on a theme manager for plots, and on histograms.

You can count on many new cool features in the near future!

Randa Meetings 2016 Fundraiser Campaign

23 Jul 2016 7:53pm GMT