30 Aug 2015

feedPlanet KDE

ocs-client GSoC

So my GSoC is coming to an end. I have no cool screenshots to upload this time and no great new features to talk about; in fact, Claudio and I mainly focused on bugfixing and testing. We have also spent time discussing possible changes and improvements to the current OCS protocol.
So is the client ready to be launched? In short, I would say no, not yet. Although most of its features are implemented and it is usable, it is still an "under construction" project, and we still have to make some important decisions before it is usable by everyone.
Anyway, don't worry! I have absolutely no intention of stopping work on it, so I will stay in the community and keep coding after my GSoC ends ;)
Being a GSoC student with KDE was amazing; I learned a lot and had a very good chance to improve my skills. Having the opportunity to go to Akademy for the first time and meet the other developers in person was also a really great thing. It gives you a much better idea of how the whole community works. There you can really feel the spirit of knowledge sharing and helping that drives the open source "movement". I hope I will be able to convince other people to join.
So great thanks go to KDE and to my mentor for giving me the opportunity to be a GSoC student this summer.
I also wish to thank Claudio for the things I've learnt from him along the way and for the patience he showed whenever I asked him for clarifications or help.

Cheers and Long Live KDE

Francesco Wofford

30 Aug 2015 3:38pm GMT

29 Aug 2015

feedPlanet KDE

User Data Manifesto 2.0 launched

In October 2012 I announced the first version of the User Data Manifesto during the Latinoware keynote in Brazil. The idea was to define some basic rights that all users should have in the digital age. This was still before the Snowden revelations, but it was already very clear that privacy and security are put at risk by cloud services and SaaS solutions that totally ignore the rights and interests of their users. So the idea was to try to define what these rights should be in the internet age.

Version 1.0 was instantly very popular and I got a ton of positive feedback and support. But over time it also became clear that a few things could be expressed in a simpler and clearer way. So the idea came up to do a revision of the manifesto based on all the feedback.

During last year's ownCloud Contributor Conference, Hugo Roy from FSFE and ToS;DR, Jan-C. Borchardt and I started to work on version 2. Now, one year later, I'm super happy to say that Hugo launched the new version of the manifesto during the ownCloud Contributor Conference keynote here in Berlin just a few minutes ago.
This is the result of a lot of discussion and input from many people and organizations. I'm also super proud to say that several well-known organizations are official launch partners of this 2.0 version of the manifesto and support the manifesto and the ideas behind it. These supporters are:

More information about the manifesto can be found here.

I hope that this manifesto helps to promote the importance of privacy, data protection, security and control over one's own data in the cloud age.
If your organization, company or open source project wants to help push this forward and support this manifesto, then please send me a message and we will add you to the list of supporters.

29 Aug 2015 1:00pm GMT

FreeBSD on Beagle Bone Black (easy as pie)

For a long time, my Beagle Bone Black sat on my desk, gathering dust. Recently I decided to give it a purpose: as a replacement for the crappy DHCP server and DNS on my home router (it's a Huawei g655d, and it has poor wireless range, a lousy interface, and wonky internal DNS). I ran an update on the Bone, which promptly downloaded a whole bunch of packages from the Angstrom distribution. Over plain unauthenticated HTTP. With, as far as I could see, no further checksumming or anything. Bad doggy.

Resigned to replacing the on-board distro anyway, I decided I would try FreeBSD, since that's my OS of choice - if it didn't work out, OpenSUSE would do.

Anyway. I wouldn't be writing this if there weren't a whole bunch of giants on whose shoulders I could stand, since actually, the whole process was deceptively simple and well-documented.

Hardware Setup: Here's a picture of my Beagle Bone, on an old DVD-case.

Beagle Bone Black

Beagle Bone Black on FreeBSD

I started from the FreeBSD Beagle Bone wiki page. I power the Bone over USB from a powered hub. There's an Olimex 3-pin serial cable attached. I spent a frustrating hour with this until I read somewhere that sometimes the TX and RX wires are reversed - so I swapped red and green and voila! You can see that in the picture.

Here's part of the boot messages:

KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2015 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.2-STABLE #0 r287149: Thu Aug 27 06:11:58 UTC 2015
root@releng1.nyi.freebsd.org:/usr/obj/arm.armv6/usr/src/sys/BEAGLEBONE arm
FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
CPU: Cortex A8-r3 rev 2 (Cortex-A core)
Supported features: ARM_ISA THUMB2 JAZELLE THUMBEE ARMv4 Security_Ext
WB disabled EABT branch prediction enabled
LoUU:2 LoC:3 LoUIS:1
Cache level 1:
32KB/64B 4-way data cache WT WB Read-Alloc
32KB/64B 4-way instruction cache Read-Alloc
Cache level 2:
256KB/64B 8-way unified cache WT WB Read-Alloc Write-Alloc
real memory = 536870912 (512 MB)

The FreeBSD image for Beagle Bone expands to fill the SD card, so I have a nice 8GB drive with a basic FreeBSD installation - hardly any different from when I create a fresh FreeBSD VM in VirtualBox.

Software Setup: Then I tried to compile something. There are no binary packages generally available for ARM targets from FreeBSD, but you can compile everything from FreeBSD ports, no problem. Except that after about 40 minutes waiting on the very first port that needs to be built, pkg(8), I was about to give up on this path.

At that point, Ralf Nolden asked something that totally turned this little project around: why don't you use poudriere for cross-compiling?

I'll point to Randy Westlund for a simple and straightforward recipe. No need to repeat it here, since the only differences between my setup and his are a few minor filesystem path changes. Randy points at Doug, and there are more pointers from there if you want to follow the historical references. Giants.

Suffice to say that poudriere is awesome.

Really. Follow Randy's "Installing the Tools" steps, take the required modifications to poudriere.conf from Doug, then continue with "Build the Environment".

On an i7 860 @2.8GHz, this took less than an hour, if I recall correctly. Maybe an hour and a half, which gave me time to read the documentation on other bits and pieces.

I picked a few packages - isc-dhcp42-server and unbound - and kicked off a poudriere build. I turned off all the DOCS and EXAMPLES options, since I can get those on the build host and they don't need to be on the Bone. From the extensive logging poudriere produces, I can see that it took a little over an hour and a half. For an overnight build, that's cheap.

And then the moment of truth:

root@beaglebone:/usr # pkg install isc-dhcp41-server
Updating bbbbuild repository catalogue...
bbbbuild repository is up-to-date.
All repositories are up-to-date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
isc-dhcp41-server: 4.1.e_9,2

The process will require 2 MiB more space.
473 KiB to be downloaded.

So there you go! FreeBSD and ARMv6 packaging is as easy as pie. Now that the Bone is doing something useful, I can start using poudriere for silly things, like building Qt5 so I can write a Qt application to control the user LEDs on the board.

29 Aug 2015 11:35am GMT

Bringing Akonadi Next up to speed

It's been a while since the last progress report on Akonadi Next. I've since spent a lot of time refactoring the existing codebase, pushing it a little further, and refactoring it again, to make sure the codebase remains as clean as possible. The result is that an implementation of a simple resource only takes a couple of template instantiations, apart from the code that interacts with the datasource (e.g. your IMAP server), which I obviously can't provide generically.

Once I was happy with that, I looked a bit into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the db to write up to 50'000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4'000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40'000 emails would be done within 10s, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster, but that command queue then needs to be processed for the data to become available to clients, and all of that together determines the actual write speed.
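
The effect of batching is generic to transactional stores, so here is a deliberately simple illustration in Python with sqlite3 (this is not Akonadi Next's code and sqlite is not its storage backend; it only shows why one commit per operation is so much slower than one commit per batch):

import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "batch-demo.sqlite")
db = sqlite3.connect(path)
db.execute("CREATE TABLE items (key TEXT, value TEXT)")
rows = [("key-%d" % i, "payload") for i in range(2000)]

# one transaction per value: every insert pays the full commit/fsync cost
start = time.time()
for row in rows:
    with db:
        db.execute("INSERT INTO items VALUES (?, ?)", row)
print("per-row commits:    %.2fs" % (time.time() - start))

# all values batched into a single transaction, committed once
start = time.time()
with db:
    db.executemany("INSERT INTO items VALUES (?, ?)", rows)
print("single transaction: %.2fs" % (time.time() - start))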

On the reading side we're at around 50'000 values per second, with the read time growing linearly with the number of messages read. That is again far from the ideal, which is around 400'000 values per second for a single db (excluding index lookups), but still good enough to load large email folders in about a second.

I implemented the benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance being in an acceptable state, I will shift my focus to the revisioned, which is a prerequisite for the resource writeback to the source. After all, performance is supposed to be a desirable side-effect, and simplicity and ease of use the goal.

Randa

Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you'd like to contribute to that, you can help us with some funding. Much appreciated!


29 Aug 2015 10:10am GMT

28 Aug 2015

feedPlanet KDE

Artikulate Plans for Randa

Language learning is often seen as the task of memorizing new vocabulary and understanding new grammar rules. Yet for most people, the most challenging part is actually getting used to speaking the new language. This is the problem that Artikulate approaches with a simple idea: to learn the correct pronunciation of a word or even a longer phrase, the learner listens to a native speaker's recording, repeats and records it, and finally compares both recordings to improve with the next try.

For a while now, Artikulate has shipped in the KDE Education module. Yet it is one of the few applications that are still Qt4-based. Actually, this is something that should change :) Most parts of the application are already ported to Qt5/KF5, but the whole UI is still in an experimental porting stage, due to the invasive porting changes to QtQuick2. The next step is to finally complete this port and even to go a step further. For some time there have been quite promising mockups for a new UI around, which were discussed here.

An excellent opportunity to work on this is the upcoming sprint in Randa. But since mere coding is maybe not enough to justify why one should travel into the middle of the Alps, the plans for that week are more ambitious:

Your Help is Needed
Sprints like the upcoming one in Randa are essential to keep developers busy developing, to provide them with opportunities to discuss their projects' next steps, and to give them the chance to exchange wisdom/experience/ideas. And, probably no less important, to keep alive the community feeling that KDE is all about. You can help with even a small donation. How exactly is explained here:

Randa Fundraiser-Banner-2015


28 Aug 2015 10:07pm GMT

New linter integration plugins for KDevelop

Hi there!
I've just moved some linter integration plugins to the KDE infrastructure (scratch repos), thereby making them generally available.
They are fairly simple plugins; all 3 of them are alike in that they just run an external tool and bring the results (the issues found) into KDevelop's GUI. The found issues then appear in the Problems toolview, in their own separate tab. The tools can check either a single file or all files in the project. You can see the workflow and configuration options in the videos included. There are also user manuals and tech manuals in the docs directories of each repo.

kdev-clangcheck
This plugin integrates Clang's static analysis feature, providing static code analysis for C/C++.

Git repository:
git://anongit.kde.org/scratch/laszlok/kdev-clangcheck.git

kdev-pylint
This plugin integrates a linter called Pylint, and as the name suggests it's a Python code analyzer.

Git repository:
git://anongit.kde.org/scratch/laszlok/kdev-pylint.git

kdev-jshint
This plugin integrates a linter called JSHint, and as the name suggests it's a JavaScript code analyzer.

Git repository:
git://anongit.kde.org/scratch/laszlok/kdev-jshint.git


28 Aug 2015 4:03pm GMT

The Fiber Engine Poll, Updates, and Breeze

Some weeks ago I ran a poll to see what the preferred rendering engine for Fiber would be, so I figure now is the time to post the results. There was a surprising amount of misinformation/confusion running around about what each option potentially meant, which I hope to clear up, but overall the results were so compelling that I doubt stripping the misinformation and re-running the poll would return a different result.

Third Place: Port to CEF Later

"Porting to CEF later" was the lowest voted option at ~18% of the ballet, and in retrospect it makes sense since it's just a poor solution. The only upside is that it gets an obsolete implementation out the door (if that's an upside), but it makes things complicated during an important phase of the project by putting an engine change in motion while trying to flesh out deeply tied APIs. Not good.

Oddly, some people wanted a WebEngine/CEF switch and read this option as Fiber having such a switch. Considering CEF proper is based on Chromium/Blink (which is what WebEngine uses), it's a bit like asking to take two paths to the same destination; there are differences in the road, but in the end both ways lead to Blink. There will be no CEF/WebEngine switch because adding one would bring the API potential down to the lowest common denominator while adding complexity to the most advanced method.

Runner up: Use WebEngine

"Use WebEngine" was the runner-up at 24% of the vote. The main prospect behind this is that it would result in a shipping browser fastest, but it also works under the assumption that it may increase code compatibility between Qt-based browsers - but the architecture of Fiber I believe will be very alien compared to contemporary solutions. If there are chances to collaborate I will, but I don't know how much of that will be possible.

There was also a segment that voted for WebEngine thinking CEF was just a more complicated route to Chromium, being confused about the road to Servo.

Winner by a mile: Go Exclusively CEF

It's no surprise that in the end "Use CEF" trounced the rest of the poll, with 59% of respondents voting in favour of it - more than both other options combined, or either individual option doubled. From the comments around the internet, one of the biggest reasons for the vote is Servo as a major differentiating factor from other browsers, and also that it would help mitigate the WebKit/Blink monopoly forming among non-Mozilla browsers for Linux.

This excites me as a web developer, and I'm likely to try pushing Servo as the default engine as it will likely be plenty good by the time Fiber is released. Sadly, I believe there were a few votes placed thinking that Fiber would ultimately usher in a "QCef" or "KCef" framework; and I don't think this will be the case.

On making a Frameworks 5 API: I did consider it as a super-interesting Frameworks addition, but after careful consideration I realised there just aren't many projects which would benefit from what would be a substantial amount of work. Another issue is that I think QWebEngine is appropriate for most projects, and that anything more is needless complication. The Qt developers have done a good job picking the right APIs to expose to suit common needs, and I imagine the additional complexity would only hurt projects adopting such a library; it's killing a mosquito with a cannon. Plus, QWebEngine will evolve in good time to fill any common needs that pop up.

What will Fiber do?

Fiber is going to go exclusively CEF. I'm in the process of fiddling CEF into the browser - but CEF is a bit of a beast, and about 3/4 of my time is simply reading CEF documentation and examples, and reading the source code of open projects that use it. My main concern is properly including CEF without requiring X11; it's possible, but the Linux example code isn't using Aura, and the implementation examples are GTK-based as well. Qt and KF5 have solutions, but I'm researching the best route to take.

In terms of what engine Fiber is using (Servo vs Blink), I'm going the generic route: you can drop in simple config files pointing to CEF-compatible executables, and when configuring profiles you can pick which engine you would like to use based on those files. This engine switch is already present on the command line and in the "Tuning" section of the profiles manager. This means you can have different profiles running different engines if you choose. There's a second command-line option which will launch a new instance of Fiber with the engine of your choice running one time, for testing purposes. As for the default, I'll probably push Servo.

CEF will not drive UI

Indirectly, using CEF means QML may become the exclusive language for UI extensions, popups, and config dialogs. Mainly this is because of the additional abstraction and effort required to offer CEF in several contexts, but it also puts a much cleaner separation between browser and content and will likely make securing the system easier. Extensions will be required to offer pages in HTML.

If you're using QML, you're writing chrome. If you're using HTML, you're writing a page.

This is also more in-line with the Plasma Mobile guidelines, and though I severely doubt you'll see Fiber become a mobile browser any time soon this keeps the door open for the far future. In two years I'd rather not break a significant number of extensions for mobile inclusion; I'd rather just have things work, maybe with some minor layout tweaks.

There are real pros and cons to QML as the only way to extend the browser UI, and probably the largest one I worry about is the fact that QML has a significantly smaller developer base than HTML. On the plus side, QML is able to adapt to platforms, meaning we might not need to divide extensions between desktop and mobile - that would simply boil down to layout tweaks. All this means is that instead of having many extensions of questionable quality, we will aim to offer fewer but higher-quality extensions.

On Progress

Progress is steady. Probably an hour or two of work a night goes into the project, and extra time on weekends as freedom allows. It drives people nuts that I'm taking my dear sweet time on this, but when the groundwork is done there will be a solid base for others to help quickly build on.

I've introduced threading into some parts of Fiber's management tools, and made significant improvements in how Fiber manages internal data caching for profile data. This all got started when I noticed a split-second of lag on a slider and realised the long-term implications. Threading was introduced so that when the database models are working they do not lag the main thread, and the layer which talks to the model now caches the data and only communicates with the model when one is out of sync. The next step will be to add some internal, very coarse timers and event tools which will delay hard data saves until they can be batched efficiently or must be written, and possibly a check to prevent the saving of identical data.
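
As a very rough sketch of what such a delayed, deduplicated save can look like (illustrative Python only - Fiber is C++/Qt and none of these names exist in its code):

import threading

class DelayedSaver(object):
    """Coalesce rapid set() calls into one save per interval and
    skip the write entirely when the data hasn't actually changed."""

    def __init__(self, save, interval=2.0):
        self._save = save
        self._interval = interval
        self._pending = {}
        self._last_saved = {}
        self._timer = None
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            self._pending[key] = value
            if self._timer is None:                      # coarse timer, armed once per batch
                self._timer = threading.Timer(self._interval, self._flush)
                self._timer.start()

    def _flush(self):
        with self._lock:
            changed = {k: v for k, v in self._pending.items()
                       if self._last_saved.get(k) != v}
            self._pending.clear()
            self._timer = None
        if changed:                                      # identical data is never rewritten
            self._save(changed)
            self._last_saved.update(changed)

A hundred slider updates inside the interval then collapse into a single write, and repeated identical values produce no write at all.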

While this may not matter as much for the management tools, I'll be applying these techniques on an extension-wide basis; this will save power, keep Fiber highly responsive, make it CPU-wake friendly, and avoid hard drive wakeups - even when badly behaved extensions exhibit "thrashing" behaviour. Ironically, this first performance exercise has made me confident that even with many "slow" JavaScript-driven features, Fiber may become a highly performant browser by virtue of having extremely fine-tuned APIs which give blanket improvements.

One of the most annoying but necessary changes was porting Fiber from QMake to CMake. Originally I had the intention to prototype using QMake, switching to CMake later for the "real" work. As things would have it the prototype had simply evolved and I realised it would just be easier to port it. As I'm not terribly familiar with CMake this started off painfully, but once I realised what CMake was trying to encourage I fell in love and things just clicked.

During the CMake port I also took the opportunity to strip out vestigial or prototypical code and do some housekeeping, which certainly cleaned things up, as I not only removed files but also disposed of stray bits of code. I also removed all traces of WebEngine, which I had used during the earliest prototype phase; the next time Google pops up, it'll be with CEF.

I've also started incorporating the first KF5 libraries into the project. The libraries are very well organised, and also well documented. Finally, I need to compliment Qt and state how amazing the toolkit is. Really. Some of the most notable changes were made trivial by Qt's smart use of its internal structure, and even though I'm hardly a veteran developer, Qt and its extremely good documentation have allowed me to make smart, informed decisions. Really guys, good job.

On other projects

Moving away from Fiber: right now we're doing a lot of work on refining the Breeze theme for Plasma 5.5 in this thread, where we're running down paper-cuts on the design and building the next iteration of the style. Ideally, we'd like to see a much more consistent and well-defined visual structure. Later on we will start to address things like alignment issues, and start targeted paper-cut topics which will address specific visual issues. If you're interested, please read the entire thread, as there is lots of design discussion, and contribute your thoughts.

Remember, constructive feedback is the easiest contribution anyone can make to an open-source project!


28 Aug 2015 3:36am GMT

27 Aug 2015

feedPlanet KDE

Ankit Wagadre (ankitw)

DataPicker For LabPlot : GSoC Project 2015

My GSoC project was to develop a Datapicker for LabPlot; it is a tool which converts an input graph, in the form of an image, into numbers. I couldn't post my last blog properly, so this is my first blog to the community.

Datapicker supports several graph types:
  • Cartesian (x, y)
  • Polar (r, Deg/Rad)
  • Logarithmic (x, ln(y))/ (ln(x), y)
Using a dock-widget, the user can define the local/logical coordinates of the axis/reference points. Datapicker provides all the zooming options that the worksheet provides, and a few new options have been added, like a zoom-window that creates a small magnified window below the cursor.

Datapicker supports multiple curves for the same graph. Each curve can have its own type of x & y errors (no-error, symmetric, asymmetric), its own datasheet and its own symbol style. The appearance of the symbols used to mark points can be changed via the dock-widget. New options have been added to support moving points on the image with the arrow keys.



The segment selection mode allows the user to select automatically traced segments of a curve. Tracing is done by processing the image on the basis of a range of color attributes, which can be modified in the dock-widget to get better results. The user can also use this mode just to remove the background and grid lines to clear up the image view.




Datapicker supports all types of errors: no-error, symmetric error, and asymmetric error. Based on the type of error, each symbol generates an error bar around it. Error bars are movable objects that allow the user to change their position and appearance as needed.





27 Aug 2015 10:23pm GMT

Kubuntu Wily Beta 1

The first Beta of Wily (to become 15.10) has now been released!

The Beta-1 images can be downloaded from: http://cdimage.ubuntu.com/kubuntu/releases/wily/beta-1/

More information on Kubuntu Beta-1 can be found here: https://wiki.kubuntu.org/WilyWerewolf/Beta1/Kubuntu

27 Aug 2015 9:21pm GMT

Legalese is vague: Always consult a lawyer

Jon recently published a blog post stating that you're free to create Ubuntu derivatives as long as you remove trademarks. I do not necessarily agree with this statement, primarily because of this clause in the IP rights policy:

Copyright

The disk, CD, installer and system images, together with Ubuntu packages and binary files, are in many cases copyright of Canonical (which copyright may be distinct from the copyright in the individual components therein) and can only be used in accordance with the copyright licences therein and this IPRights Policy.

From what I understand, Canonical is asserting copyright over various binaries that are shipped on the ISO, and they're totally in the clear to do so for any packages that end up on the ISO that are permissively licensed (X11, for example), because permissive licenses, unlike copyleft licenses, do not prohibit additional restrictions on top of the software. A reading of the GPL shows the explicit statement:

4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

Whereas licenses such as the X11 license explicitly allow sublicensing:

… including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software …

Depending on the jurisdiction you live in, Canonical *can* claim copyrights over the binaries that are produced in the Ubuntu archive. This is something that multiple other parties such as the SF Conservancy, FSF as well as Bradley Kuhn have agreed on.

So once again, all of this is very much dependent on where you live and where your ISOs are hosted. So if you're distributing an Ubuntu derivative, I'd very much recommend talking to a professional lawyer, who'd best be able to advise you about how the policy affects you in your jurisdiction. It may very well be that you require a license, or it may be that you don't. I'm not a lawyer and, AFAIK, neither is Jon.

Addendum/Afterthought:

Taken to a bit more of an extreme, one could even argue that in order to be GPL compliant, derivatives should provide sources for all the packages that land on the ISO, and that just passing off this responsibility to Canonical is a potential GPL violation.


27 Aug 2015 6:47pm GMT

[Howto] Accessing CloudForms updated REST API with Python

CloudForms comes with a REST API which was updated to version 2.0 in CloudForms 3.2. It covers substantially more functions than v1.0 and also offers feature parity with the old SOAP interface. This post gives a short introduction to calling the REST API via Python.

Introduction

Red Hat CloudForms is a manager to "manage virtual, private, and hybrid cloud infrastructures" - it provides a single interface to manage OpenStack, Amazon EC2, Red Hat Enterprise Virtualization Management, VMware vCenter and Microsoft System Center Virtual Machine Manager. Simply said, it is a manager of managers. While CloudForms focuses on virtual environments of all kinds, its abilities are not limited to simple deployment and starting/stopping of VMs; they cover the entire business process and workflow surrounding larger deployments of virtual machines in large or distributed data centers. Tasks can be highly automated, chargebacks and the optimizer enable admins to put workloads where they make the most sense, an almost unbelievable number of reports help operations and please management, and entire service catalogs can help provision not single VMs but setups of interrelated instances. And of course, as with all Red Hat products and technologies, CloudForms is fully open source and based upon a community project: ManageIQ.

One of the major use cases of CloudForms is to keep an overview of all the various types of "clouds" used in production and the VMs running on them during day-to-day work. Many companies have VMware instances in their data centers, but also have a second virtual environment like RHEV or Hyper-V. Additionally, they use public cloud offerings, for example to cover the load during peak times, or integrate OpenStack to provide their own private cloud. In such cases CloudForms is the one interface to rule them all, one interface to manage them. ;-)

CloudForms itself can be managed via the web interface but also via the API. Up until recently, the focus was on a SOAP API. Since Ruby on Rails - the base for CloudForms - will not support SOAP anymore in the future, the developers decided to switch to a REST API. With CloudForms 3.2 this move was completed insofar as the REST API reached feature parity. The API offers quite a lot of functions - besides gathering information it can also be used to trigger actions, define or delete services, etc.

Python examples

In general the API can be called by any REST-compatible tool - which means by almost any HTTP client. For my tests with the new API I decided to use Python, more specifically iPython, together with the requests and json libraries. All examples are wrapped in a JSON dumps statement to prettify the output.

REST authentication is provided by standard HTTP means. The normal way is to authenticate once, get a token in return and use the token for all further calls. The default API URL is https://cf.example.com/api, which shows which collections can be queried via the API, for example: vms, clusters, providers, etc. Please note that the role-based access control of CloudForms is also present in the API: you can only query collections and modify objects when you have the proper rights to do so.
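
Spelled out, that flow looks roughly like this (a readability sketch using a requests.Session; the rest of the post sticks to the original compact iPython one-liners):

import json
import requests

BASE = 'https://cf.example.com/api'
session = requests.Session()

# authenticate once with HTTP basic auth and keep the returned token
token = session.get(BASE + '/auth', auth=('admin', 'password')).json()['auth_token']

# every further call carries the token instead of user/password
session.headers['X-Auth-Token'] = token

# the entry point lists the collections that can be queried (vms, clusters, providers, ...)
print(json.dumps(session.get(BASE).json(), sort_keys=True, indent=4, separators=(',', ': ')))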

current_token=json.loads(requests.get('https://cf.example.com/api/auth',auth=("admin",'password')).text)['auth_token']
print json.dumps(json.loads(requests.get('https://cf.example.com/api',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "collections": [
        {
            "description": "Automation Requests",
            "href": "https://cf.example.com/api/automation_requests",
            "name": "automation_requests"
        },
...

This shows that the basic access works. Next we want to query a certain collection, for example the vms:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
    "count": 2,
    "name": "vms",
    "resources": [
        {
            "href": "https://cf.example.com/api/vms/602000000000007"
        },
        {
            "href": "https://cf.example.com/api/vms/602000000000006"
        }
    ],
    "subcount": 2
}

While all VMs are listed, the information shown above is not enough to understand which VM is actually which: at least the name should be shown. So we need to expand the information about each VM and then add a condition to only show the name and, for example, the vendor: ?expand=resources&attributes=name,vendor:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name,vendor',headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
"resources": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "id": 602000000000007,
        "name": "my-vm",
        "vendor": "redhat"
    },
    {
        "href": "https://cf.example.com/api/vms/602000000000006",
        "id": 602000000000006,
        "name": "myvm-clone",
        "vendor": "redhat"
    }
],

This works of course for 2 VMs, but not if you manage 20,000. Thus it's better to use a filter: &filter[]='name="my-vm"'. Since filters use a lot of quotation marks, depending on the number of strings you use, it is best to define a string containing the filter argument and then add it to the URL:

filter="name='my-vm'"
print json.dumps(json.loads(requests.get('https://cf.example.com/api/vms?expand=resources&attributes=name&filter[]='+filter,headers={'X-Auth-Token' : current_token}).text),sort_keys=True,indent=4,separators=(',', ': '))
...
"count": 2,
"name": "vms",
"resources": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "id": 602000000000007,
        "name": "my-vm"
    }
],
"subcount": 1

Note the subcount, which shows how many VMs with the given name were found. If you want to combine more than one filter, simply add them to the URL: &filter[]='name="my-vm"'&filter[]='power_state=on'.
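
Instead of concatenating the query string by hand, requests can also repeat the filter[] key for you when it is given a list; a small sketch, reusing current_token from above:

params = {
    'expand': 'resources',
    'attributes': 'name',
    # requests repeats the key for every list entry:
    # ...&filter[]=name='my-vm'&filter[]=power_state=on
    'filter[]': ["name='my-vm'", "power_state=on"],
}
response = requests.get('https://cf.example.com/api/vms',
                        headers={'X-Auth-Token': current_token}, params=params)
print(json.dumps(response.json(), sort_keys=True, indent=4, separators=(',', ': ')))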

With the href of the correct VM you can shut it down. Use the HTTP POST method and provide a JSON payload calling the action "stop".

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms/602000000000007',headers={'X-Auth-Token' : current_token},data=json.dumps({'action':'stop'})).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "href": "https://cf.example.com/api/vms/602000000000007",
    "message": "VM id:602000000000007 name:'my-vm' stopping",
    "success": true,
    "task_href": "https://cf.example.com/api/tasks/602000000000097",
    "task_id": 602000000000097
}

If you want to call an action on more than one instance, change the href to the corresponding collection and include the actual hrefs for the VMs in the payload, in a resources array:

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms',headers={'X-Auth-Token' : current_token},data=json.dumps({'action':'stop', 'resources': [{'href':'https://cf.example.com/api/vms/602000000000007'},{'href':'https://cf.example.com/api/vms/602000000000006'}]})).text),sort_keys=True,indent=4,separators=(',', ': '))
"results": [
    {
        "href": "https://cf.example.com/api/vms/602000000000007",
        "message": "VM id:602000000000007 name:'my-vm' stopping",
        "success": true,
        "task_href": "https://cf.example.com/api/tasks/602000000000104",
        "task_id": 602000000000104
    },
    {
        "href": "https://cf.example.com/api/vms/602000000000006",
        "message": "VM id:602000000000006 name:'myvm-clone' stopping",
        "success": true,
        "task_href": "https://cf.example.com/api/tasks/602000000000105",
        "task_id": 602000000000105
    }
]

The last example shows how more than one call to the API can be connected to each other: we call the API to scan a VM, get the task id, and then query that task id to see if the task was called successfully. So first we call the API to start the scan:

print json.dumps(json.loads(requests.post('https://cf.example.com/api/vms/602000000000006',headers={'X-Auth-Token' : current_token},verify=False,data=json.dumps({'action':'scan'})).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "href": "https://cf.example.com/api/vms/602000000000006",
    "message": "VM id:602000000000006 name:'my-vm' scanning",
    "success": true,
    "task_href": "https://cf.example.com/api/tasks/602000000000106",
    "task_id": 602000000000106
}

Next, we take the given id 602000000000106 and query the state:

print json.dumps(json.loads(requests.get('https://cf.example.com/api/tasks/602000000000106',headers={'X-Auth-Token' : current_token},verify=False).text),sort_keys=True,indent=4,separators=(',', ': '))
{
    "created_on": "2015-08-25T15:00:16Z",
    "href": "https://cf.example.com/api/tasks/602000000000106",
    "id": 602000000000106,
    "message": "Task completed successfully",
    "name": "VM id:602000000000006 name:'my-vm' scanning",
    "state": "Finished",
    "status": "Ok",
    "updated_on": "2015-08-25T15:00:20Z",
    "userid": "admin"
}

However, please note that "Finished" here means that the call of the task was successful - not necessarily the outcome of the task itself. For that you would have to query the VM state itself.
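
A small polling sketch, again reusing current_token from the examples above - wait until the task leaves its active state, then go back and inspect the VM itself for the real outcome:

import time

task_url = 'https://cf.example.com/api/tasks/602000000000106'
while True:
    task = requests.get(task_url, headers={'X-Auth-Token': current_token}).json()
    if task['state'] == 'Finished':
        break
    time.sleep(5)
print(task['status'] + ' - ' + task['message'])
# 'status' only tells us the task ran; query the vm itself to verify the result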

Final words

The REST API of CloudForms offers quite a few useful functions to integrate CloudForms with your own programs, scripts and applications. The REST API documentation is also quite extensive, and the community documentation for ManageIQ has a lot of API usage examples.

So if you used to call your CloudForms via SOAP, you will be happy to find the new REST API in CloudForms 3.2. If you have never used the API, you might want to start today - as you have seen, it's quite simple to get results quickly.


Filed under: Business, Cloud, Fedora & RHEL, HowTo, Linux, Shell, Technology, Virtualization

27 Aug 2015 3:48pm GMT

Ubuntu Archive Still Free Software

"Ubuntu is entirely committed to the principles of free software development; we encourage people to use free and open source software, improve it and pass it on." is what used to be printed on the front page of ubuntu.com. This is still true but recently has come under attack when the project's main sponsor, Canonical, put up an IP policy which broke the GPL and free software licences generally by claiming packages need to be recompiled. Rather than apologising for this in the modern sense of the word by saying sorry, various staff members have apologised in an older sense of the word meaning to excuse. But everything in Ubuntu is free to share, copy and modify (or just free to share and copy in the case of restricted/multiverse). The archive admins wills only let in packages which comply to this and anyone saying otherwise is incorrect.

In this Twitter post Michael Hall says "If a derivative distro uses PPAs it needs an additional license." But he doesn't say what it is that needs an additional licence; the packages already have copyright licences, all of them free software.

It should be very obvious that Canonical doesn't control the world, and a licence is only needed if there is some law that allows them to restrict what others want to do. There have been a few claims about what that law might be, but nothing that makes sense when you look at it. It's worth examining their claims because people will fall for them, and that will destroy Ubuntu as a community project. Community projects depend on everyone having the freedom to do whatever they want with the code, or else nobody will give their time to a project that someone else will then control.

In this blog post Dustin Kirkland again doesn't say what needs a licence, but says one is needed based on Geographical Indication. It's hard to say if he's being serious. A geographical indication (GI) is a sign used on products that have a specific geographical origin and possess qualities or a reputation that are due to that origin, and it is assessed before being registered. There is no Geographical Indication registration in Ubuntu and it's completely irrelevant to everything. So let's move on.

A more dangerous claim can be seen in this Reddit post, where Michael Hall claims "for permissively licensed code where you did not build the binary, there is no pre-existing right to redistribution of that binary". This is incorrect; everything in Ubuntu has a free software licence with an explicit right to redistribution. (Or a few bits are public domain, where no licence is needed at all.) Let's take libX11 as a random example; it gets shipped with a copyright file containing this licence:

"Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction"

so we do have permission. Shame on those who say otherwise. This applies to the source, of course, and so it applies to any derived work such as the binaries, which is why it's shipped with the binaries. It even says you can't remove the licence:
"The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software."
So it's free software, and the licence requires it to remain free software. It's not copyleft, so if you combine it with another work which is not free software then the result is proprietary, but we don't do that in Ubuntu. The copyright owner could put extra restrictions on, but nobody else can, because it's a free world and you can't make me do stuff just because you say so; you have to have some legal way to restrict me first.
One of the items allowed by this X11 licence is the ability to "sublicense", which is just putting another licence on it, but you can't remove the original licence, as it says in the part of the licence I quoted above. Once I have a copy of the work I can copy it all I want under the X11 licence and ignore your sublicence.
This is even true of works in the public domain or under a WTFPL-style licence: once I've got a copy of the work it's still public domain, so I can still copy, share and modify it freely. You can't claim it's your copyright because, well, it's not.

In Matthew Garrett's recent blog post he reports that "Canonical assert that the act of compilation creates copyright over the binaries". Fortunately this is untrue and can be ignored. Copyright requires some creative input; it's not enough to run a work through a computer program. In the very unlikely case that a court did decide that compiling a programme added some copyright, it would not decide that the copyright was owned by the owners of the computer it ran on, but by the copyright owners of the compiler, which is the Free Software Foundation, and the copyright would be GPL.

In conclusion, there is nothing which restricts people from making derivatives of Ubuntu except the trademark, and removing branding is easy. (Even that is unnecessary unless you're trading, which most derivatives aren't, but it's a sign of good faith to remove it anyway.)

Which is why Mark Shuttleworth says "you are fully entitled and encouraged to redistribute .debs and .iso's". Lovely.


27 Aug 2015 2:50pm GMT

26 Aug 2015

feedPlanet KDE

Some fancy things in YAML-based QML replacement

As previously mentioned, I've been working on a YAML-based replacement for QML that would allow using JavaScript alternatives in QtQuick applications.

But it is not only about a different syntax. It has also allowed me to make some improvements that make life easier.


Read more...

26 Aug 2015 10:30pm GMT

3-finger-drag on Linux

Spending some time in OS X, one can get quite used to the 3-finger-drag, which makes selecting and dragging anything so much easier - just put three fingers on the touchpad and you're in the drag gesture already.

I have quite missed this on Linux, but with the mtrack touchpad driver it is now possible to have it. The original code for mtrack is on GitHub, but it has recently been forked by p2rkw, and this fork comes with many new features added, one of them being 3-finger-drag.

If you haven't heard of mtrack yet, it's an alternative to the synaptics driver, originally with the goal of providing proper multitouch for MacBooks on Linux via the kernel multitouch protocol, since synaptics didn't have that at the time. It has now grown into a rather good synaptics alternative with many features for a great multitouch experience (it has all gesture support right in the driver - pinch-to-zoom, finger-rotate, up to 5-finger-swipe in all 4 directions etc., plus palm/thumb detection and the option to ignore those and/or disable the whole touchpad when a palm/thumb is detected), but to have an actually good experience, some time spent fine-tuning the config is still required. I'm still in the process of fine-tuning mine, but I'll post it once I'm really happy with it.

So if you're using mtrack, be sure to update to p2rkw's fork. To use 3-finger-drag, add this to your touchpad config (xorg.conf or xorg.conf.d/):

Option "SwipeDistance" "1"
Option "SwipeLeftButton" "1"
Option "SwipeRightButton" "1"
Option "SwipeUpButton" "1"
Option "SwipeDownButton" "1"
Option "SwipeClickTime" "0"
Option "SwipeSensitivity" "1000"

It's also possible to have 4-fingers-drag, but that seems like a waste of fingers :)

Additionally, mtrack works well with easystroke, which is a gesture recognition application for Linux, so one can map the various multitouch gestures to virtual buttons and then handle those buttons with easystroke. So in theory, if you want 3-finger-drag only for window movement, it should be possible to map the 3-finger-drag to alt+drag, which would mean that putting 3 fingers down would let you drag the window from anywhere, not just its titlebar.

I've been using mtrack for about half a year on my MacBook Pro and it is missing just two things to be absolutely perfect - kinetic scrolling (swipe two fingers fast and it scrolls fast) and good cursor acceleration (slow finger movement moves the cursor slower and vice versa), but I understand this is something to be set in X/XInput.

But overall, I'm quite happy with mtrack... if only it could have runtime config :)

26 Aug 2015 5:31pm GMT

Virtue of Necessity. Canary, sublime your company.

This past July 16th I participated in the Tenerife LAN Party, in its Tenerife Innova section, invited by the Free Software Office of La Laguna University, in the track titled (freely translated from Spanish) Open Source from the Canary Islands, Stories Told in First Person.

This Free Software Office is well known in Spain for managing the biggest KDE deployment in the country, with 3k computers spread across several computer labs, laboratories and libraries, among other internal projects.

My talk (20 min) had a title along the lines of: Virtue of Necessity. Canary, sublime your company. You can find the slides (in Spanish) on my site or in my Slideshare account.

What was the talk about?

The natural growth path for a software company is to create a site and grow until it reaches a point at which a second production site is needed. Meanwhile, departments like sales and technical support might grow in a distributed way. As the company grows, the number of production sites grows with it. The organizational structure varies with the nature of the company: sometimes production teams are replicated across different sites, sometimes different business units are divided per site, and sometimes a particular site hosts teams that take care of different products/(micro)services.

In general, if the company follows an "agile" approach, it will try to reduce inter-site communication needs by placing team members together at a particular site. Based on my experience and on how FOSS has been developed, and depending on the market you are playing in, turning your company into a distributed environment might be a smart move.

Let's start providing some context and definitions.

What do I mean by sublimation in this context?

Sublimation is a state change in which a substance transitions from the solid state to the gas state without going through the liquid one.

What do I mean by a distributed environment?

In most agile literature, in fact in most software development management books, distributed environments mean multi-site distribution. But in Open Source, we refer to a different kind of environment. I have come up with the following (subjective) definition:

A distributed environment/organization:

Open Source as distributed environment

Free/Open Source Software has a geographically distributed nature. As you know, most of the relevant communities, no matter which size we are talking about, are formed by developers located in many different countries, working from home or from a company site. If we take a look at the most relevant ones, they are truly global. Every tool, every process, has been designed (intentionally or not) with this distributed nature in mind.

Now that Open Source is everywhere, more and more companies are embracing it, participating in its development and collaborating in global communities. From the process perspective, they are being influenced by the Open Source way, including its distributed nature.

Many of the companies that are embracing Open Source are realising that adapting to this new environment makes collaboration easier. It reduces the friction between community and internal processes. There is a long way to go, but I believe it is unstoppable for a variety of reasons (out of the scope of the talk). We are starting to see more and more organizations that are fully distributed, and start-ups that are born with such a structure in mind.

Canary Islands, a fragmented market.

The Canary Islands is a group of 7 islands in the Atlantic Ocean, with two major ones (900k people each) and 5 smaller ones, for a total of 2.1 million people and 11 million tourists per year. Obviously tourism is the main industry, so there are 6 international airports, two national ones and 10 harbours, half of which regularly receive big ships/cruises.

Data connectivity has improved a lot in the last 10 years but, due to the difficult geography, it is unequally distributed across the islands. Even within the main islands, Tenerife and Gran Canaria, there is a significant percentage of the surface with zero internet coverage.

So it is a very fragmented market and, although the communication infrastructure is first class, travel across the islands takes a significant amount of time, is expensive, and connectivity can be a challenge. In general, the transportation strategy has been designed to bring people from Europe, not for internal mobility.

This means that, for a software company, consolidation/growth in such a market is tough, very tough, even if you focus on tourism.

Software companies there expand following the "natural" approach, which is to create a software production centre on one of the main islands and provide support from there to the other islands. Until they consolidate their position, software companies cannot afford to have developers/technical support on the second main island. If service/support is required on one of the small islands, you simply travel there. The limitations that software companies have to face due to the market conditions rarely allow them to create a second software development centre in the Canary Islands.

There are very few Spanish cities with daily direct flights from/to the Canary Islands throughout the year. Madrid and Barcelona are the biggest markets but also the most expensive cities. The flight takes 2:30 hours to Madrid and 3 hours to Barcelona, which is a lot by European standards. So opening a second development centre on the mainland while keeping the headquarters in the Canary Islands is a real challenge.

In other words, if you want to scale your business, you need to assume bigger risks than companies based on the continent, despite the islands being a cheaper place and having plenty of professionals thanks to the existence of two universities.

... but,

In my talk I tried to show that all those limitations can be turned into advantages if organizations adopt a distributed approach early in their consolidation process, or even from the very beginning. These constraints offer a first-class laboratory to experiment with some of the key variables that need to be managed when scaling up your company, while leaving aside some of the most complicated ones, related to a great extent to the internationalization of the organization.

I made a call to sublimate your company, going from an "on-site" to a fully distributed state, skipping the multi-site state. Even better, create your software company as a distributed environment from the very beginning.

Why sublimate your company in the Canary Islands?

I summarized the advantages of sublimating your company if you are based in the Canary Islands, Spain, in the following statements:

Which variables will be affected by sublimating your company?

These are the most relevant variables to consider:

There are more, but these are the ones that should be considered carefully before sublimating your company in the Canary Islands. As you grow, internationalization will knock at your door very soon. There are other variables to consider in that case; they are outside the scope of this talk:

Summary

The Canary Islands is a tougher market than mainland Spain. Adopting a distributed nature early in the software company's life cycle allows you to adapt better and faster to this environment, and prepares you better for later stages too, especially the internationalization phase. Sublimation provides you with a competitive advantage, especially if you develop Open Source and participate in open collaborative environments.

There are a number of variables that should be carefully considered though. Managing them correctly is a requirement to succeed.
Agustin Benito Bethencourt (Toscalix) @toscalix Linkedin profile: http://linkedin.com/in/toscalix

26 Aug 2015 5:08pm GMT

An alternative to Linaro’s HWPacks

For the past couple of weeks I've been playing with a variety of boards, and a single problem kept raising its head over and over again: I needed to build test images quickly in order to be able to check whether or not these boards had the features that I wanted.

This led me to investigate tools for building images for these boards. The tools I came across for each of these boards were abysmal, to say the least. All of them were either very board-specific or not versatile enough for my needs. Linaro's HWPacks came very, very close to what I needed, but still had some of the following limitations:

So with those 4 problems to solve, I set out to write my own replacement for Linaro's HWPacks, and lo and behold, you can find it here. (I'm quite terrible at coming up with awesome names for my projects, so I chose the simplest and most descriptive name I could think of ;)

Here's a sample config for the ODROID C1, a neat little board from HardKernel.

The rootfs section

You can specify a rootfs for your board in this section; it takes a URL to the rootfs tar and, optionally, an md5sum for the tar.

The firmware section

We currently have 2 firmware backends for installing the firmware (things like the kernel and other board-specific packages). One is the tar backend, which, like the rootfs section, takes a URL to the firmware tar and optionally an md5sum; the other is the apt backend. I only have time to maintain these 2 backends, so I'd absolutely love it if someone could write more backends, such as yum or pacman, and send me a pull request.

The tar backend will copy everything from the boot/* folder inside the tar onto the first partition, and anything inside the firmware/* and modules/* folders into the rootfs's /lib folder. This is a bit implicit and I'm trying to figure out a way to make it better.

The apt backend can take multiple apt repos to be added to the rootfs and a list of packages to install afterwards.

The bootloader section

The bootloader has a :config section which takes an ERB file to be rendered and installed into both the rootfs and the bootfs (if you have one).

Here's a quote of the sample ERB file for the ODROID C1:

This allows me to dynamically render boot files depending on what kernel was installed on the image and what the UUID of the rootfs is. You can in fact access more variables as described here.

Moving on to the :uboot section of the bootloader, you can specify as many stages as you want to flash onto the image. Each stage takes a :file to flash and optionally :dd_opts, which are options you might want to pass to dd when writing the bootloader. The stages are flashed in the sequence declared in config.yml, and the files are searched for in the rootfs first; failing that, they're searched for in the bootfs partition, if you have one.

The login section

The login section is quite self-explanatory and takes a user, a password for the user and a list of groups the user should be added to on the target image.

The login section is optional and can be skipped if your rootfs already has a pre-configured user.

At the moment I have configs for the ODROID C1, the Cubox-I (thanks to Solid Run for sending me a free extra board! :) and the Raspberry Pi 2.

If you have questions send me an email or leave them in the comments below, and I'll try to answer them ASAP :).

If you end up writing a config for your board, please send me a PR with the config, that'd be most awesome.

PS: Some of the most awesome people I know are meeting up at Randa next month to work on bringing Touch to KDE. It'd be supremely generous of you if you could donate towards the effort.


26 Aug 2015 3:34pm GMT