07 Dec 2016

Planet KDE

The dangers of stable/LTS/supported versions

Ubuntu 14.04 LTS is supported until April 2019 and ships poppler 0.24.5 http://packages.ubuntu.com/search?suite=trusty&searchon=names&keywords=libpoppler-dev

RHEL 7.3 ships poppler 0.26.5 (I may be wrong; https://git.centos.org/summary/?r=rpms/poppler is the best info I could find, and Red Hat does not make it easy to know what you're buying)

Debian stable (Jessie) ships poppler 0.26.5 https://packages.debian.org/search?suite=jessie&searchon=names&keywords=libpoppler-dev

Current release is poppler 0.49 https://poppler.freedesktop.org/releases.html

This means that people are running stable versions and thinking they are secure, but if we trust security specialists, [almost] every crash can be exploited. And I'm almost sure neither Ubuntu nor Red Hat nor Debian have backported all of the crash fixes from the more than 20 releases and 2 years of development behind those *very old* versions they are shipping.

I don't know how/if this can be fixed, but I honestly think we're giving users a false sense of security by letting them run those versions.

07 Dec 2016 12:30am GMT

No one "works" on Poppler

I thought that was obvious, but today someone thought that I was "working" as in "paid working" on it.

No, I don't get paid for the work I do on Poppler.

It's my computing hobby, and on top of that it's not even my "primary" computing hobby; lots of KDE stuff takes precedence over it, and I guess Gnome stuff may also take precedence for Carlos (second top committer according to the git shortlog).

Aside from a few paid contributions and some patches that may have come from people who use the software in their business (and we could file them under "paid" since they did the fix as part of their job), no one has a paid job that is mainly "work on poppler".

I guess we've done a good enough job as hobbyists :)

Obviously we could do better, so if you have lots of money and are interested in making free software PDF rendering better, please hire someone to help us (no, this is not me asking for money, I have a good enough job already).

And if you don't have money but you have some free time and like to help, join us :)

And if you really really have some free time or lots of money, you could port Okular, Evince et al. to pdfium and see if it's actually better/worse than poppler.

07 Dec 2016 12:00am GMT

06 Dec 2016

How input works – creating a Device

Recently I did some work on the input stack in KWin/Wayland, e.g. implementing the pointer gestures and pointer constraints protocols, and thought about writing a blog series about how input events get from the device to the application. In this first blog post I focus on creating and configuring an input device and everything related to getting this set up.

evdev

Input events are provided by the Linux kernel through the evdev API. If you are interested in how evdev works, I recommend reading the excellent post on that topic by Peter Hutterer. For our purposes the evdev API is too low-level, and we want to use an abstraction for it.
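To make the low-level nature of evdev concrete, here is a minimal sketch (not KWin code) that decodes the raw struct input_event records a /dev/input/event* node delivers. The 'llHHi' layout assumes a 64-bit machine, and the EV_KEY/KEY_A values in the comment are the kernel's event codes:

```python
import struct

# struct input_event: struct timeval time; __u16 type; __u16 code; __s32 value;
# 'llHHi' matches a 64-bit machine (24 bytes); 32-bit layouts differ.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

def parse_event(buf):
    """Decode one raw evdev record into a small dict."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return {"time": sec + usec / 1e6, "type": etype, "code": code, "value": value}

# A key press of "A" would arrive as type EV_KEY (1), code KEY_A (30), value 1.
```

Reading EVENT_SIZE-byte chunks from open("/dev/input/event0", "rb") in a loop would yield such records, provided you are allowed to open the file at all.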

libinput and device files

This abstraction exists and is called libinput. It allows us to get notified whenever an input device gets added or removed and when an input event is generated. But not so fast. First of all we need to open the input devices. And that's a challenge.

The device files are normally not readable by the user. That's a good thing, as otherwise every application would be able to read all key events. Writing a key logger would be trivial in that case.

But if KWin runs as a normal user and the user is not able to read from the device files, how can KWin read them? For this we need some support. libinput is prepared for this situation and doesn't try to open the files itself, but invokes an open_restricted function the library user has to provide. KWin does so and outsources the task of opening the file to logind. Logind allows one process to take control over the current session, and this session controller is allowed to open some device files. So KWin interacts with logind's DBus API to become the session controller, then opens the device files through the logind API and passes them back to libinput.
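As a rough illustration of the shape of that hook, here is a hedged Python sketch of libinput's open_restricted/close_restricted callbacks. The function names mirror libinput's C interface, but the direct os.open() is a stand-in for what KWin really does, namely asking logind for the file descriptor over DBus:

```python
import os

def open_restricted(path, flags):
    # Stand-in: KWin forwards this to logind over DBus instead of
    # opening directly, since a normal user lacks permission.
    try:
        return os.open(path, flags)
    except OSError as err:
        return -err.errno  # libinput expects a negative errno on failure

def close_restricted(fd):
    os.close(fd)
```

In the real C API these two functions are handed to libinput in a struct libinput_interface when the libinput context is created.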

This is the reason why for a full Wayland session KWin has a runtime dependency on logind's DBus interface. Please note that this does not mean that you need to use logind or systemd. It only means that one process is required which speaks logind's DBus interface.

Devices in KWin

Now libinput is ready to open the device files and emits a LIBINPUT_EVENT_DEVICE_ADDED event for each device. KWin creates a small facade class for each device type and applies configuration options to it. KWin supports reading the configuration options set by Plasma's mouse configuration module and has its own device-specific configuration file, which will soon allow the touchpad configuration module to configure the touchpad on Wayland. Also, as part of setting up the device, KWin enables the LEDs for Num Lock and Caps Lock - if the device supports them.

Input Devices

All the input devices created by KWin can be investigated in the Debug console (open KRunner, enter "KWin"). KWin reads a lot of information about each device from libinput and shows it in the Debug console. In the input event tab, each event includes the information which device generated it.

Input Devices exported to DBus

All devices are also exported to DBus with the same properties as shown in the Debug console. This means the configuration can be changed at runtime through DBus. KWin saves the configuration after it has been applied successfully, and thus ensures that your settings are restored correctly when you restart your system or replug your external device. This is also an important feature for supporting the touchpad configuration module.

If you want to support our work, consider donating to our Make the World a Better Place! - KDE End of Year 2016 Fundraising campaign.

06 Dec 2016 6:29pm GMT

6 months of Nextcloud

Last Friday was the 6 month anniversary of Nextcloud, a good opportunity to look back and reflect on what we have achieved since we started. I also have some interesting news to share, including that Nextcloud GmbH is a profitable company already!

Half a year ago, most of the ownCloud engineering team, including myself, started the Nextcloud project. Our goal was to take lessons from the past and create a next-generation open source project with a better, more stable company behind it. Those were very ambitious goals, but I'm happy to report that things have worked out better than I was hoping for!

With Nextcloud we wanted to create a more sustainable ecosystem with the right balance between stakeholders' needs, both technical and business-wise. When done right, both the project and the customers win. I tried this when I started ownCloud but unfortunately I wasn't successful.

The good news is that it is working very well this time! Nextcloud has gotten a huge community traction as well as massive commercial interest. Let me cover the different areas in more detail.

Project

Nextcloud has become the most active project in our space and we're still growing fast.
This is because we do a few things right.

And does this all work? Yes it does. Look at the community activity statistics: we are already the most active project in our space and still growing. I'm so happy that Nextcloud is back on track as a real open source project, structured similarly to what I learned during my time in the KDE community!

Product

The next big area that I want to cover is Nextcloud as a product itself.

I'm happy that Nextcloud is fully AGPL again, the license I picked at the beginning when I founded ownCloud. This gives everyone legal clarity and guarantees real benefits and freedom to all users and contributors. We don't mix potentially incompatible licenses, which might become a legal minefield. We are committed to protecting and defending this license, also for our contributors if needed.

Business

6 months ago we also founded a new company called Nextcloud GmbH. The idea was to learn from the past and make this a truly sustainable company. It is built to provide a long-term home for core developers and a guarantee that the product will be developed and maintained for a long time. Everything at Nextcloud GmbH is built to be a sustainable business. Nextcloud GmbH doesn't exist to be sold, and it isn't designed and optimized for an exit. We are completely self-funded and don't depend on any external investors. This gives us a degree of freedom to do the right thing for the project, the company and the people that we never had in the past.

Transparency is key. We do everything in the open. The only exceptions are customer data and legal topics. We even develop our main website on GitHub and get very significant external contributions from the community. If you've seen how our website has evolved and then see who is doing the work, you'll be looking at another example of the strength of community. I don't know a lot of companies that have that level of openness!

To really benefit from what the open source model has to offer, we decided not to just be open core for marketing purposes but to follow the 100% open source model that successful companies like Red Hat, SUSE and others are leading with.

Our customers notice and appreciate this, as feedback we got from partners and customers shows. We received significant contributions in both code and other input. This is because customers and partners know their work won't end up in a proprietary product which would make them pay for their own work later on. Customers also like that there is no lock-in with Nextcloud because it is completely open source. They pay for our excellent expert support and services, like with other real open source companies, and we constantly have to prove our value. And we do, seeing how business is going!

Business wise, confidence in your business model pays off. We are already well over 20 employees and that is only counting full time employees, not partners or freelancers.

The amount of customer interest we get is unbelievable; we still have a hard time processing all the incoming requests and sending out quotes and contracts quickly enough. And yes, we're looking for help! People clearly really like what we are doing.

The big news that I want to announce today is that as of last week we already reached profitability. This is crazy after only 6 months, long before we had planned. You might remember that we secured initial funding for three years, which means we will be able to continue to pursue an aggressive growth strategy, investing more in Nextcloud and customer satisfaction while maintaining a sustainable business over the coming years.

Investing means hiring and we have a big number of job openings - if you'd like to work for an awesome, innovative, open, young and very healthy company, send your resume!

We're also looking for partners who want to help bring Nextcloud to an even wider user base - you can use the contact form to talk to us about this.

Future

So what is next? I'm really happy that everyone in the Nextcloud community shares the same vision and idea. We want to enable our users to secure their data, protect their privacy and fix their data handling and communication problems. And that is exactly what we will keep working on, double speed!

Join the Nextcloud community for the next 6 months and the next 10 years: make a difference

06 Dec 2016 3:22pm GMT

05 Dec 2016

KDE Frameworks 5 Content Snap Techno

In the previous post on Snapping KDE Applications we looked at the high-level implications and use of the KDE Frameworks 5 content snap to snapcraft snap bundles for binary distribution. Today I want to get a bit more technical and look at the actual building and inner workings of the content snap itself.

The KDE Frameworks 5 snap is a content snap. Content snaps are really just ordinary snaps that define a content interface. Namely, they expose part or all of their file tree for use by another snap but otherwise can be regular snaps and have their own applications etc.

KDE Frameworks 5's snap is special in terms of size and scope: the whole set of KDE Frameworks 5, combined with Qt 5 and a large chunk of the graphics stack that is not part of the ubuntu-core snap. All in all, just for the Qt 5 and KF5 parts we are talking about close to 100 distinct source tarballs that need building to compose the full frameworks stack. KDE is in the fortunate position of already having builds of all of these available through KDE neon. This allows us to simply repack existing work into the content snap. This is for the most part just as good as doing everything from scratch, but has the advantage of saving both maintenance effort and build resources.

I do love automation, so the content snap is built by some rather stringy proof of concept code that automatically translates the needed sources into a working snapcraft.yaml that repacks the relevant KDE neon debs into the content snap.

Looking at this snapcraft.yaml we'll find some fancy stuff.

After the regular snap attributes, the actual content interface is defined. It's fairly straightforward and simply exposes the entire snap tree as kde-frameworks-5-all content. This is then used on the application snap side to find a suitable content snap so it can access the exposed content (i.e. in our case the entire file tree).

slots:
    kde-frameworks-5-slot:
        content: kde-frameworks-5-all
        interface: content
        read:
        - "."

The parts of the snap itself are where the most interesting things happen. To make things easier to read and follow I'll only show the relevant excerpts.

The content snap consists of the following parts: kf5, kf5-dev, breeze, plasma-integration.

The kf5 part is the meat of the snap. It tells snapcraft to stage the binary runtime packages of KDE Frameworks 5 and Qt 5. This effectively makes snapcraft pack the named debs along with necessary dependencies into our snap.

    kf5:
        plugin: nil
        stage-packages:
          - libkf5coreaddons5
        ...

The kf5-dev part looks almost like the kf5 part but has entirely different functionality. Instead of staging the runtime packages it stages the buildtime packages (i.e. the -dev packages). It additionally has a tricky snap rule which excludes everything from actually ending up in the snap. This is a very cool trick: it effectively means that the buildtime packages will be in the stage and we can build other parts against them, but none of them will end up in the final snap. After all, they would be entirely useless there.

    kf5-dev:
        after:
          - kf5
        plugin: nil
        stage-packages:
          - libkf5coreaddons-dev
        ....
        snap:
          - "-*"

Besides those two, we also build two runtime integration parts entirely from scratch: breeze and plasma-integration. They aren't strictly needed, but ensure sane functionality in terms of icon theme selection etc. These are ordinary build parts that simply rely on the kf5 and kf5-dev parts to provide the necessary dependencies.

An important question to ask here is how one is meant to build against this now. There is this kf5-dev part, but it does not end up in the final snap, where it would be entirely useless anyway as snaps are not used at buildtime. The answer lies in one of the rigging scripts around this. In the snapcraft.yaml we configured the kf5-dev part to stage packages but then excluded everything from being snapped. However, knowing how snapcraft actually goes about its business, we can "abuse" its inner workings to make use of the part after all. Before the actual snap is created, snapcraft "primes" the snap; this effectively means that all installed trees (i.e. the stages) are combined into one tree (i.e. the primed tree), and the exclusion rule of the kf5-dev part is then applied to this tree. Or in other words: the primed tree is the snap before exclusion is applied, meaning the primed tree contains everything from all parts, including the development headers and CMake configs. We pack this tree into a development tarball which we then use on the application side to stage a development environment for the KDE Frameworks 5 snap.

Specifically on the application-side we use a boilerplate part that employs the same trick of stage-everything but snap-nothing to provide the build dependencies while not having anything end up in the final snap.

  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

Using the KDE Frameworks 5 content snap, KDE can create application snaps that are a fraction of the size they would be if they contained all dependencies themselves. While this gives up some optimization potential by aggregating requirements in a more central fashion, it quickly starts paying off given we are saving upwards of 70 MiB per snap.

Application snaps can of course still add more stuff on top or even override things if needed.

Finally, as we approach the end of the year, we begin the season of giving. What would suit the holidays better than giving to the entire world by supporting KDE with a small donation?

05 Dec 2016 4:10pm GMT

03 Dec 2016

feedPlanet KDE

Productivity++

In keeping with the tradition of LTS aftermaths, the upcoming Plasma 5.9 release - the next feature release after our first Long Term Support Edition - will be packed with lots of goodies to help you get even more productive with Plasma!

Taking a screenshot with an interactive preview

Richer Notifications

Our notification system has stayed virtually the same for the past decade and it shows. Notifications are basically just a bit of text, an icon, and some buttons. They don't have any semantics, no description of what they're actually about.

I started a wiki page during Akademy collecting ideas on how to improve notifications in Plasma. The first feature that I implemented is the ability for applications to annotate a notification with a URL (or multiple URLs). The notification service will then show a large preview of said file (or a thumbnail strip in case of multiple files) which can then even be dragged to another window, e.g. to a webbrowser window, an email composer, a chat window, the desktop, anywhere you need it.

This is again in line with our goal for Plasma, allowing you to fully immerse yourself in your current task without ever having to leave the application you're working with. "Hey, can you send me a screenshot of that thing?" - Meta+Shift+PrtScr, select region, hit return, drag screenshot from notification to chat window, done.

Task Manager Keyboard Shortcuts

Easily number three on the list of most wanted features in Plasma (after Global Menu, scheduled for 5.9, and single Meta key press for opening the launcher, available since 5.8) is the ability to switch between windows and activate launchers using Meta + number keyboard shortcuts.

One of the reasons this hasn't been implemented in Plasma so far is that we're infinitely customizable™ and you could have 23 task managers on 3 screens spread across 12 panels. The question is: which panel should own the shortcuts? Should they be spread, and if so, in what order? It's complicated.

Initially, I tried to take all of this into account and created a 500+ lines of code patch that allowed you to designate which panel would own the shortcuts, hinting that "Global shortcuts only work with one Task Manager applet at a time.", and so on. This just wasn't maintainable. The new approach is less than 100 lines, very simple, and basically asks the first task manager it finds on a panel on the primary screen (if there is none, it looks on all other panels) to activate the task at the given index.

While this doesn't give you full flexibility, it implements the majority usecase of having one panel with a task manager and all of that with very little code. It's always a trade-off between code maintainability and implementing frequently requested features.

If you like what you saw and you'd like to see more of it, please consider donating to our End of Year 2016 Fundraiser!

03 Dec 2016 5:46pm GMT

02 Dec 2016

Snapping KDE Applications

This is largely based on a presentation I gave a couple of weeks ago. If you are too lazy to read, go watch it instead😉

For 20 years KDE has been building free software for the world. As part of this endeavor, we created a collection of libraries to assist in high-quality C++ software development as well as building highly integrated graphic applications on any operating system. We call them the KDE Frameworks.

With the recent advance of software bundling systems such as Snapcraft and Flatpak, KDE software maintainers are however a bit on the spot. As our software is building on such a vast collection of frameworks and supporting technology, the individual size of a distributable application can be quite abysmal.

When we tried to package our calculator KCalc as a snap bundle, we found that even a relatively simple application like this makes for a good 70 MiB snap in a working state (most of this is the graphical stack required by our underlying C++ framework, Qt).
Since then a lot of effort was put into devising a system that would allow us to more efficiently deal with this. We now have a reasonably suitable solution on the table.

The KDE Frameworks 5 content snap.

A content snap is a special bundle meant to be mounted into other bundles for the purpose of sharing its content. This allows us to share a common core of libraries and other content across all applications, making the individual applications just as big as they need to be. KCalc is only 312 KiB without translations.

The best thing is that besides some boilerplate definitions, the snapcraft.yaml file defining how to snap the application is like a regular snapcraft file.

Let's look at how this works by example of KAlgebra, a calculator and mathematical function plotter:

Any snapcraft.yaml has some global attributes we'll want to set for the snap

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

We'll want to define an application as well. This essentially allows snapd to expose and invoke our application properly. For the purpose of content sharing we will use a special start wrapper called kf5-launch that allows us to use the content shared Qt and KDE Frameworks. Except for the actual application/binary name this is fairly boilerplate stuff you can use for pretty much all KDE applications.

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

To access the KDE Frameworks 5 content share we'll then want to define a plug our application can use to access the content. This is always the same for all applications.

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

Once we've got all that out of the way we can move on to actually defining the parts that make up our snap. For the most part, parts are build instructions for the application and its dependencies. With content shares there are two boilerplate parts you want to define.

The development tarball is essentially a fully built KDE Frameworks tree including development headers and CMake configs. The tarball is packed by the same tech that builds the actual content share, so this allows you to build against the correct versions of the latest share.

  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

The environment rigging provides the kf5-launch script we previously saw in the application's definition; we'll use it to execute the application within a suitable environment. It also gives us the directory for the content share mount point.

  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git

Lastly, we'll need the actual application part, which simply instructs that it will need the dev part to be staged first and then builds the tarball with boilerplate cmake config flags.

  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Putting it all together we get a fairly standard snapcraft.yaml with some additional boilerplate definitions to wire it up with the content share. Please note that the content share is using KDE neon's Qt and KDE Frameworks builds, so if you want to try this and need additional build-packages or stage-packages to build a part, you'll want to make sure that KDE neon's User Edition archive is present in the build environment's sources.list: deb http://archive.neon.kde.org/user xenial main. This is going to get a more accessible centralized solution for all of KDE soon™.

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

parts:
  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz
  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git
  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Now to install this we'll need the content snap itself. Here is the content snap. To install it a command like sudo snap install --force-dangerous kde-frameworks-5_*_amd64.snap should get you going. Once that is done one can install the kalgebra snap. If you are a KDE developer and want to publish your snap on the store get in touch with me so we can get you set up.

The kde-frameworks-5 content snap is also available in the edge channel of the Ubuntu store. You can try the games kblocks and ktuberling like so:

sudo snap install --edge kde-frameworks-5
sudo snap install --edge --devmode kblocks
sudo snap install --edge --devmode ktuberling

If you want to be part of making the world a better place, or would like a KDE-themed postcard, please consider donating a penny or two to KDE


02 Dec 2016 2:44pm GMT

01 Dec 2016

Wiki, what’s going on? (Part 18-Making it real)

WTLGoinOn

WikiToLearn1.0 action plan is getting real

Release the new version and start working to improve it: done.

Ok, done! Now let's start talking about it, spam it, find new users and grow more and more!

Yes, more or less this is the work we are doing in these weeks with our team.

Unimib is funding posters, which we are using to start a new promotional campaign for WikiToLearn! The promo team is working on these new info-graphics and you are going to love them. Unimib students, stay tuned and get ready to spot our posters all around you.

We are also working hard on both institutional and more informal contacts: new collaborators are coming. The team is organizing and taking part in new events in the near future; stay tuned, more people are going to talk about us and you'll appreciate our efforts! We are also planning a series of new talks to present the new release and to get more and more people involved in our project.

We are also working on agreements with different universities and institutional centers such as GARR, Imperial College and UCL.

Christmas is coming, if you have ideas to celebrate it with our community contact us! WikiToLearn1.0 is going to celebrate its first XMas 😉

C'mon, new year with the new WikiToLearn is coming: the moment is now!

Share your knowledge, share freedom!


The article Wiki, what's going on? (Part 18-Making it real) first appeared on Blogs from WikiToLearn.

01 Dec 2016 10:15pm GMT

KDevelop 5.0.3 released

Today, we are happy to announce the release of KDevelop 5.0.3, the third bugfix and stabilization release for KDevelop 5.0. An upgrade to 5.0.3 is strongly recommended to all users of 5.0.0, 5.0.1 or 5.0.2.

Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux, as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.

List of notable fixes and improvements since version 5.0.2:

  • Fix a performance issue which would lead to the UI becoming unresponsive when lots of parse jobs were created (BUG: 369374)
  • Fix some behaviour quirks in the documentation view
  • Fix a possible crash on exit (BUG: 369374)
  • Fix tab order in problems view
  • Make the "Forward declare" problem solution assistant only pop up when it makes sense
  • Fix handling of GitHub authentication (BUG: 372144)
  • Fix Qt help jumping to the wrong function sometimes
  • Windows: Fix MSVC startup script not working in some environments
  • kdev-python: fix some small issues in the standard library info

The 5.0.3 source code and signatures can be downloaded from here.

sbrauch Thu, 12/01/2016 - 22:00

Category: News
Tags: release, windows, linux, KDevelop 5

Comments


Submitted by Gaël (not verified) on Fri, 12/02/2016 - 09:39

Wrong link to the AppImage on the download page

Hi,

FYI, the download page still (12/02/2016 morning in Paris) points to the 5.0.2 AppImage. The right link is http://download.kde.org/stable/kdevelop/5.0.3/bin/linux/KDevelop-5.0.3-… .

Gaël


In reply to Wrong link to the AppImage on the download page by Gaël (not verified)

Submitted by kfunk on Fri, 12/02/2016 - 12:44

Fixed now, thanks!


Submitted by gabo (not verified) on Fri, 12/02/2016 - 09:48

Updated, thank you!

01 Dec 2016 9:00pm GMT

30 Nov 2016

KDevelop: Seeking maintainer for Ruby language support

Heya,

just a short heads-up that KDevelop is seeking a new maintainer for the Ruby language support. Miquel Sabaté did an amazing job maintaining the plugin in recent years, but would like to step down as maintainer because he lacks the time to continue looking after it.

Here's an excerpt from a mail Miquel kindly provided, to make it easier for newcomers to follow up on his work in kdev-ruby:

As you might know, the development of kdev-ruby has stalled and the KDevelop team is looking for developers who want to work on it. The plugin is still considered experimental because there is plenty of work to be done. What has been done so far:

  • The parser is based on the one that can be found on MRI. That being said, it's based on an old version of it so you might want to update it.
  • The DUChain code is mostly done but it's not stable yet, so there's quite some work to be done on this front too.
  • Code completion mostly works but it's quite basic.
  • Ruby on Rails navigation is done and works.

There is a lot of work to be done and I'm honestly skeptical whether this approach will end up working anyway. Because of this skepticism and the fact that I was using another editor, I ended up abandoning the project, and thus kdev-ruby was no longer maintained by anyone.

If you feel that you can take on the challenge and you want to contribute to kdev-ruby, please reach out to the KDevelop team. They are extremely friendly and will guide you through the process of developing this plugin.

Again, thanks for all your work Miquel, you will be missed!

If you're interested in that kind of KDevelop plugin development, please get in touch with us!

More information about kdev-ruby here: https://community.kde.org/KDevelop/Ruby

30 Nov 2016 11:39pm GMT

Finding a valid build order for KDE repositories

KDE has lately been growing quite a bit in repositories, and it's not always easy to tell what needs to be built first: do I build kdepim-apps-libs or pimcommon first?

A few days ago I was puzzled by the same question and realized we have the answer in the dependency-data-* files from the kde-build-metadata repository.

They define what depends on what, so all we need to do is build a graph from those dependencies and extract a valid build order from it.

Thankfully Python already has modules for working with graphs, so build-order.py was not that hard to write.
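The core of such a script is just a topological sort. Here is a minimal sketch in modern Python using the stdlib graphlib module (which postdates the original build-order.py); the repository names and the dependency dict are purely illustrative, since the real data comes from the dependency-data-* files:

```python
from graphlib import TopologicalSorter

# Maps each repository to the repositories it depends on.
# Illustrative subset only; in the real script this dict is
# populated by parsing the dependency-data-* files.
deps = {
    "kdepim-apps-libs": {"pimcommon"},
    "pimcommon": {"kcontacts"},
    "kcontacts": set(),
}

# static_order() yields one valid build order: every repository
# appears after all of its dependencies.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)
```

Any order produced this way guarantees that each repository is built only after everything it depends on.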

So say you want to know a valid build order for the stable repositories based on kf5-qt5

Here it is

Note I've been saying *a* valid build order, not *the* valid build order: there are various valid orders, since not every repo depends on every other repo.

Now I wonder: does anyone else find this useful? And if so, to which repository do you think I should commit such a script?

30 Nov 2016 11:13pm GMT

Qt Creator 4.2 RC1 released

We are happy to announce the release of Qt Creator 4.2 RC1.

Since the release of the Beta, we've been busy polishing things and fixing bugs.

For an overview of the new features in 4.2 please head over to the Beta release blog post. See our change log for a more detailed view on what has changed.

Get Qt Creator 4.2 RC1

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.2 RC1 released appeared first on Qt Blog.

30 Nov 2016 1:55pm GMT

Krita 3.1 Release Candidate

A week later than planned due to illness, we are happy to release the first release candidate for Krita 3.1 today. There are a number of important bug fixes, and we intend to fix a number of other bugs still in time for the final release.

You can find out more about what is going to be new in Krita 3.1 in the release notes. The release notes aren't finished yet, but take a sneak peek all the same!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store is available in the beta channel.

OSX

Source code

30 Nov 2016 10:12am GMT

29 Nov 2016

feedPlanet KDE

Fuzzing Qt for fun and profit

Many KDAB engineers are part of the Qt Security Team. The purpose of this team is to get notified of security-related issues, and then decide the best course of action for the Qt project.

Most of the time, this implies identifying the problem, creating and submitting a patch through the usual Qt contribution process, waiting for it to be merged in all the relevant branches, and then releasing a notice to the users about the extent of the security issue. We also work together with downstreams, such as our customers, Linux distributions and so on, in order to minimize the risk of exposing Qt users to the security vulnerability.

However, that's only part of the story. As part of the security team, we can't simply wait for reports to fall in our laps; we also need to have a proactive approach and constantly review our code base and poke it in order to find problems. For that, we use a variety of tools: the excellent Coverity Scan service; the sanitizers available in GCC and Clang; clazy, maintained by KDAB's engineer Sérgio Martins; and so on.

Note that all these tools help catch all sorts of bugs, not only security-related ones. For instance, take a look at the issues found and fixed by looking at the Undefined Behavior Sanitizer's reports, and the issues fixed by looking at Coverity Scan's reports.

Today I want to tell you a little more about one of the tools used to test Qt's code: the American Fuzzy Lop, or AFL to friends.

Fuzzing

What is AFL? It's a fuzzer: a program that keeps changing the input to a test in order to make it crash (or, in general, misbehave). This "mutation" of the input goes on forever - AFL never ends; it just keeps finding more inputs and optimizing its own search process.
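As a toy illustration of the principle (this is not AFL's actual algorithm, which is coverage-guided and far smarter; the deliberately buggy parse function below is made up for the example), a naive mutation fuzzer just keeps flipping random bytes in a seed input and records every variant that makes the target blow up:

```python
import random

def parse(data: bytes) -> None:
    """Stand-in for the code under test: crashes on a specific 2-byte prefix."""
    if len(data) >= 2 and data[0] == 0xFF and data[1] == 0xFE:
        raise RuntimeError("simulated crash")

def mutate(seed: bytes) -> bytes:
    """Overwrite one random byte with a random value (simplest possible mutation)."""
    buf = bytearray(seed)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 20000) -> list:
    """Feed mutated inputs to the parser, keeping the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashes = fuzz(b"\xff\x00hello")
```

AFL improves on this blind mutation by instrumenting the binary and favoring mutations that reach new code paths, which is how it finds bugs buried deep in a parser.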

AFL gained a lot of popularity, and the results speak for themselves: AFL has found security issues in all major libraries out there. Therefore, I decided to give it a try on Qt.

The setup

Setting up AFL is straightforward: just download it from its website and run make. That's it - this will produce a series of executables that will act as a proxy for your compiler, instrumenting the generated binaries with information that AFL will need. So, after this step, we will end up with afl-gcc, afl-g++ and so on.

You can go ahead and build an instrumented Qt. If you've never built Qt from source, here's the relevant documentation. On Unix systems it's really a matter of running configure with some options, followed by make and optionally make install. The problem at this step is making Qt use AFL's compilers, not the system ones. This turns out to be very simple, however: just export a few environment variables, pointing them to AFL's binaries:

export CC=/path/to/afl-gcc
export CXX=/path/to/afl-g++
./configure ...
make

And that's it, this will build an instrumented Qt. (A more thorough solution would involve creating a custom mkspec for qmake; this would have the advantage of making the final testcase application also use AFL automatically. For this task, however, I felt it was not worth it.)

Creating a testcase

What you need here is to create a very simple application that takes an input file from the command line (or stdin) and uses it to stress the code paths you want to test.

Now, when looking at a big library like Qt, there are many places where Qt reads untrusted input from the user and tries to parse it: image loading, QML parsing, (binary) JSON parsing, and so on. I decided to give a shot at binary JSON parsing, feeding it with AFL's mutated input. The testcase I built was straightforward:

#include <QtCore>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // Guard against being launched without an input file argument
    if (app.arguments().size() < 2)
        return 1;

    QFile file(app.arguments().at(1));
    if (!file.open(QIODevice::ReadOnly))
        return 1;

    // Feed the (possibly corrupted) data to the binary JSON parser
    QJsonDocument jd = QJsonDocument::fromBinaryData(file.readAll());
    Q_UNUSED(jd);

    return 0;
}

Together with the testcase, you will also need a few test files to bootstrap AFL's finding process. These files should be extremely small (ideally, 1-2KB at maximum) to let the fuzzer do its magic. For this, just dump a few interesting files somewhere next to your testcase. I've taken random JSON documents, converted them to binary JSON and put the results in a directory.

Running the fuzzer

Once the testcase is ready, you can run it under the fuzzer like this:

afl-fuzz -m memorylimit \
         -t timeoutlimit \
         [master/slave options] \
         -i testcases/ \
         -o findings/ \
         -- ./test @@

A few explanatory remarks:

  • -m sets a memory limit (in MB) for the target process, and -t a timeout (in ms) per execution.
  • -i points at the directory with the initial test files, and -o at the directory where findings will be saved.
  • @@ on the target's command line is replaced by the path of the mutated input file for each run; without it, the input is fed via stdin.
  • The master/slave options (-M/-S) allow several cooperating fuzzer instances to run in parallel.

For reference, I've launched my master like this:

afl-fuzz -m 512 -t 20 -i testcases -o findings-json -M fuzzer00 -- ./afl-qjson @@

The output is a nice colored summary of what's going on, updated in real time:

AFL running over a testcase.

Now: go do something else. This is supposed to run for days! So remember to launch it in a screen session, and maybe launch it via nice so that it runs with a lower priority.

Findings

After running for a while, the first findings started to appear: inputs that crashed the test program or made it run for too long. Once AFL sees such inputs, it will save them for later inspection; you will find them under the findings/fuzzername subdirectories:

findings-json/fuzzer00/crashes/id:000000,sig:06,src:000445,op:arith8,pos:168,val:+6
findings-json/fuzzer00/crashes/id:000001,sig:11,src:000445,op:arith8,pos:168,val:+7
findings-json/fuzzer00/crashes/id:000002,sig:11,src:000449,op:arith8,pos:196,val:+6
findings-json/fuzzer00/crashes/id:000003,sig:11,src:000489,op:flip1,pos:435
findings-json/fuzzer01/crashes/id:000000,sig:06,src:000526,op:havoc,rep:2
findings-json/fuzzer01/crashes/id:000001,sig:11,src:000532,op:havoc,rep:2
findings-json/fuzzer01/crashes/id:000002,sig:06,src:000533,op:havoc,rep:4

If you're lucky (well, I guess it depends on how you look at it…), you will end up with inputs that indeed crash your testcase. Time to fix something!

You may also get false positives, in the form of crashes caused by the testcase running out of memory. Remember that AFL imposes a strict memory limit on your executable, so if your testcase allocates too much memory and does not know how to recover from OOM, it will crash. If you see many inputs crashing inside AFL but not crashing when run normally, your testcase may be behaving properly but simply running out of memory, and increasing the memory limit passed to AFL will fix this.

The sig part in the name of each saved input should give you a hint, telling you which Unix signal caused the crash. In the listing above, signal number 11 is a SIGSEGV, which is indeed a problem. The signal 06 is SIGABRT (that is, an abort), which was generated due to running out of memory.

To reproduce this last case, just manually run the test over that input, and check that it doesn't misbehave; then rerun it, but this time limiting its available memory via ulimit -v memory_available_in_kilobytes. If the testcase works normally but crashes under a stricter ulimit, it's likely that you're in an out-of-memory scenario. This may or may not require a fix in your code; it really depends whether it makes sense for your application/library to recover from an OOM.

Fixing upstream

After reporting the findings to the Security Team, it was a matter of a few days before a fix was produced, tested and merged into Qt. You can find the patches here and here.

Tips and tricks

If you want to play with AFL, there are a couple of things I would recommend.

Conclusions

Fuzzing is an excellent technique for testing code that needs to accept untrusted inputs. It is straightforward to set up and run, requires no modifications to the tested code, and can find issues in a relatively short timespan. If your application features parsers (especially of binary data), consider keeping AFL running over them for a while, as it may discover some serious problems. Happy fuzzing!

About KDAB

KDAB is a consulting company offering a wide variety of expert services in Qt, C++ and 3D/OpenGL, and providing training courses.

KDAB believes that it is critical for our business to contribute to the Qt framework and C++ thinking, to keep pushing these technologies forward to ensure they remain competitive.

The post Fuzzing Qt for fun and profit appeared first on KDAB.

29 Nov 2016 12:50pm GMT

KDE Developer Guide needs a new home and some fresh content

As I just posted in the Mission Forum, our KDE Developer Guide needs a new home. Currently it is "not found" where it is supposed to be.

UPDATE: Nicolas found the PDF on archive.org, which does have the photos too. Not as good as the XML, but better than nothing.

We had great luck using markdown files in git for the chapters of the Frameworks Cookbook, so the Devel Guide should be stored and developed in a like manner. I've been reading about Sphinx lately as a way to write documentation, which is another possibility. Kubuntu uses Sphinx for docs.

In any case, I do not have the time or skills to get, restructure and re-place this handy guide for our GSoC students and other new KDE contributors.

This is perhaps suitable for a Google Code-in task, but I would need a mentor who knows markdown or Sphinx to oversee. Contact me if interested! #kde-books or #kde-soc

29 Nov 2016 6:31am GMT

28 Nov 2016

feedPlanet KDE

Kdenlive’s first bug squashing day

Kdenlive 16.12 will be released very soon, and we are trying to fix as many issues as possible. This is why we are organizing a Bug Squashing Day this Friday, 2nd of December 2016, between 9 am and 5 pm (Central European Time - CET).

Kdenlive needs you

There are several ways you can help us improve this release, depending on your skills or interests. During the bug squashing day, Kdenlive developers will be reachable on IRC at freenode.net, channel #kdenlive to answer your questions. A collaborative notepad has also been created to coordinate the efforts.

If you have some interest / knowledge in coding:
You can download Kdenlive's source code and find instructions on our wiki. We will also be available on Friday on IRC to help you set up your development environment. You can then select an 'easy bug' from the notepad list and look at the code to try to fix it. Feel free to ask your questions on IRC; the developers will guide you through the process, so that you can get familiar with the parts of the code you will be looking at.

If you are a user and encounter a bug:
You can help us by testing the Kdenlive 16.12 RC version. Our easy-to-install AppImage and snap packages will be updated on the 1st of December with the latest code (Ubuntu users can also use our PPA). This will allow you to install the latest version without messing with your system. You can then check if a bug is still there in the latest version, or let us know if it is fixed.

So feel free to join us this Friday, this is your chance to help the world of free software video editing!

For the Kdenlive team,
Jean-Baptiste Mardelle

28 Nov 2016 11:59pm GMT