21 Feb 2017

Planet KDE

Making Movies with QML

One of the interesting things about working with Qt is seeing all the unexpected ways our users use the APIs we create. Last year I got a bug report requesting an API to set a custom frame rate for QML animations when using QQuickRenderControl. The reason was that the reporter was using QQuickRenderControl as an engine to render video output from Qt Quick, and if your target was, say, 24 frames per second, the animations were not smooth because of how the default animation driver behaves. Inspired by this use case, I decided to take a stab at creating such an example myself.

[Screenshot: the QML Movie Renderer user interface]

This may not be the most professional-looking user interface, but what it does is still pretty neat. The objective is to feed it an animated QML scene, and it should output an image file for each frame of the animation. These images can then be converted into a video or an animated image using an external tool. The challenge is that Qt Quick is a UI toolkit, not a movie generator.

The naive approach would be to create a QQuickWindow, set the window size to the output target size, and then grab the frame by calling QQuickWindow::grabWindow() each time the frameSwapped() signal is emitted. There are a couple of issues with this approach though. First, the video would need to render in realtime: if you wanted to render an animation that was 5 minutes long, it would take 5 minutes, because it would just be like recording your application for 5 minutes. Second, even in the best case you would only be rendering video at the refresh rate of your monitor. That alone requires a reasonably powerful machine, because the QQuickWindow::grabWindow() call involves a glReadPixels call, which is quite expensive. It is also problematic if you need to render at a different frame rate than your monitor refresh (which is what the user that inspired me was complaining about). So here is how I addressed both of these issues.
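For illustration, here is a minimal sketch of that naive approach; the scene file name and the output pattern are made up for the example:

    #include <QGuiApplication>
    #include <QImage>
    #include <QQuickView>
    #include <QQuickWindow>
    #include <QUrl>

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);

        QQuickView view;
        view.setSource(QUrl::fromLocalFile("animation.qml")); // hypothetical scene
        view.resize(1280, 720);
        view.show();

        // Grab every presented frame -- this runs in realtime, and each
        // grabWindow() call does an expensive glReadPixels readback.
        int frameNumber = 0;
        QObject::connect(&view, &QQuickWindow::frameSwapped, &view, [&]() {
            QImage frame = view.grabWindow();
            frame.save(QString("naive_%1.png").arg(frameNumber++));
        }, Qt::QueuedConnection);

        return app.exec();
    }

This works, but it inherits exactly the realtime and refresh-rate limitations described above.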

QQuickRenderControl

QQuickRenderControl is a magical class that lets you do all kinds of crazy things with Qt Quick content. For our purposes we will use it to render Qt Quick content to an offscreen surface as fast as we can. Rather than creating an on-screen QQuickWindow, we create a dummy QQuickWindow, and via the render control we render content to a QOpenGLFramebufferObject instead.

    // Setup Format
    QSurfaceFormat format;
    format.setDepthBufferSize(16);
    format.setStencilBufferSize(8);

    // Setup OpenGL Context
    m_context = new QOpenGLContext;
    m_context->setFormat(format);
    m_context->create();

    // Setup dummy Surface (to create FBO with)
    m_offscreenSurface = new QOffscreenSurface;
    m_offscreenSurface->setFormat(m_context->format());
    m_offscreenSurface->create();

    // Setup Render Control and dummy window 
    m_renderControl = new QQuickRenderControl(this);
    m_quickWindow = new QQuickWindow(m_renderControl);

    // Setup QML Engine
    m_qmlEngine = new QQmlEngine;
    if (!m_qmlEngine->incubationController())
        m_qmlEngine->setIncubationController(m_quickWindow->incubationController());

    // Finish it all off
    m_context->makeCurrent(m_offscreenSurface);
    m_renderControl->initialize(m_context);

The above gets QQuickRenderControl set up. Then, once the size is known, you can actually create the QOpenGLFramebufferObject and tell the dummy QQuickWindow that this is where it will be rendering.

void MovieRenderer::createFbo()
{
    m_fbo = new QOpenGLFramebufferObject(m_size * m_dpr, QOpenGLFramebufferObject::CombinedDepthStencil);
    m_quickWindow->setRenderTarget(m_fbo);
}

And once that is done it's just a matter of loading up the QML content and rendering it. Unlike with QQuickWindow, QQuickRenderControl allows you to control when the steps of the rendering process occur. In our case we want to render as fast as possible, so this is what our rendering setup looks like:

void MovieRenderer::renderNext()
{
    // Polish, synchronize and render the next frame (into our FBO).
    m_renderControl->polishItems();
    m_renderControl->sync();
    m_renderControl->render();
    m_context->functions()->glFlush();

    m_currentFrame++;

    // Grab the contents of the FBO here ...

    if (m_currentFrame < m_frames) {
        // Schedule the next update
        QEvent *updateRequest = new QEvent(QEvent::UpdateRequest);
        QCoreApplication::postEvent(this, updateRequest);
    } else {
        // Finished
        cleanup();
    }
}

bool MovieRenderer::event(QEvent *event)
{
    if (event->type() == QEvent::UpdateRequest) {
        renderNext();
        return true;
    }
    return QObject::event(event);
}

The above sets up an event-driven loop that renders as fast as possible while still handling events between frames, which is needed for progressing animations with Qt Quick.
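For the grab step marked by the comment in renderNext(), a minimal sketch is all it takes; the file pattern is an assumption, chosen here to match the ffmpeg invocation shown further down:

    // Read the rendered frame back from the FBO and write it to disk.
    // QOpenGLFramebufferObject::toImage() performs the pixel readback for us.
    QImage frame = m_fbo->toImage();
    frame.save(QString("output_%1.jpg").arg(m_currentFrame));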

Custom QAnimationDriver

The second issue we need to address is that the animation behavior is wrong. To remedy this we need a custom QAnimationDriver that enables us to advance animations at our own frame rate. The default behavior is to advance animations in steps as close as possible to the refresh rate of the monitor your application is running on. Since we never present the content we render to the screen, that behavior doesn't make sense for us. Instead, we can install our own QAnimationDriver, which we advance manually for each frame we generate, based on a pre-determined frame rate. Here is the whole implementation of my custom animation driver:

class AnimationDriver : public QAnimationDriver
{
public:
    AnimationDriver(int msPerStep)
        : m_step(msPerStep)
        , m_elapsed(0)
    {}

    void advance() override
    {
        m_elapsed += m_step;
        advanceAnimation();
    }
    qint64 elapsed() const override
    {
        return m_elapsed;
    }
private:
    int m_step;
    qint64 m_elapsed;
};

Now to use this you just need to install the new QAnimationDriver. When you call QAnimationDriver::install() it will replace the current one, so Qt Quick will then behave like we need it to. When we start the movie renderer we also install the custom AnimationDriver:

    m_frames = m_duration * m_fps / 1000; // multiply before dividing so non-whole-second durations don't truncate
    m_animationDriver = new AnimationDriver(1000 / m_fps);
    m_animationDriver->install();

    // Start the renderer
    renderNext();

And finally since we control the render loop, we need to manually advance the animation driver. So before the end of the renderNext() method make sure to call:

m_animationDriver->advance();

And that is it. Now we can render as fast as possible, and our animation engine will step perfectly for the frame rate we are generating frames for. It is important to remember that you must process events after calling advance(), though, because animation updates are delivered through Qt's event loop and signal/slot system. If you don't, you will generate the same frame many times.
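Put together, the tail of renderNext() could look like this sketch; the queued UpdateRequest is only delivered once control returns to the event loop, so the animation updates triggered by advance() are processed before the next frame:

    // Schedule the next frame, then step the animations by one interval.
    if (m_currentFrame < m_frames) {
        QCoreApplication::postEvent(this, new QEvent(QEvent::UpdateRequest));
    } else {
        cleanup();
    }

    // Each advance() moves all animations forward by 1000 / m_fps milliseconds.
    m_animationDriver->advance();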

Results

Once you run the MovieRenderer you end up with a folder full of images representing each frame. To prepare video files from the generated output I used ffmpeg:

ffmpeg -r 24 -f image2 -s 1280x720 -i output_%d.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p hello_world_24.mp4

The above command generates a 720p video at 24 fps from a series of files named output_%d.jpg. It would also be possible to create an example that either calls this tool for you via QProcess, or even includes an encoder library to generate the video directly. I went for the simplest approach, using only what Qt has built in, for this example. Here are a few example movies I generated:

This first video is rendered at 60 FPS and the second is at 24 FPS. Notice how they animate at the same speed but one is smoother than the other. This is the intended behavior in action.

Well, that's all I have to show; the rest is up to you. I've published the code for the QML Movie Renderer here, so go check it out now! I hope this example inspires you to make other cool projects as well, and I look forward to seeing what new unexpected ways you'll find to use Qt in the future.

The post Making Movies with QML appeared first on Qt Blog.

21 Feb 2017 3:12pm GMT

QStringView Diaries: Advances in QStringLiteral

This is the first in a series of blog posts on QStringView, the std::u16string_view equivalent for Qt. You can read about QStringView in my original post to the Qt development mailing-list, follow its status by tracking the "qstringview" topic on Gerrit and learn about string views in general in Marshall Clow's CppCon 2015 talk, aptly […]

The post QStringView Diaries: Advances in QStringLiteral appeared first on KDAB.

21 Feb 2017 12:30pm GMT

Plasma in a Snap?

…why not!

Shortly before FOSDEM, Aleix Pol asked if I had ever put Plasma in a Snap. While I was a bit perplexed by the notion itself, I also found this a rather interesting idea.

So, the past couple of weeks I spent a bit of time here and there on trying to see if it is possible.

img_20170220_154814

It is!

But let's start at the beginning. Snap is one of the Linux bundle formats that are currently very much en vogue. Basically, whatever is necessary to run an application is put into a self-contained archive, from which the application then gets run. The motivation is to isolate application building and delivery from operating system building and delivery. Or in short: you do not depend on your Linux distribution to provide a package; as long as the distribution can run the middleware for the specific bundle format, you can get a bundle from the source author and it will run. As an added bonus, these bundles usually also get confined. That means that whatever is inside can't access system files or other programs unless permission for this was given in some form or fashion.

Putting Plasma, KDE's award-winning desktop workspace, in a snap is interesting for all the same reasons it is interesting for applications. Distributing binary builds would be less of a hassle, testing is more accessible and confinement in various ways can lessen the impact of security issues in the confined software.

With the snap format specifically Plasma has two challenges:

  1. The snapped software is mounted in a changing path that is different from the installation directory.
  2. Confining Plasma is a bit tricky because of how many actors are involved in a Plasma session and some of them needing far-reaching access to system services.

As it turns out, problem 1 in particular is biting Plasma fairly hard. Not exactly a great surprise; after all, relocating (i.e. changing the paths of) an installed Plasma isn't exactly something we've done in the past. In fact, it goes further than that, as ultimately Plasma's dependencies need to be relocatable as well, which, for example, Xwayland is not.

But let's talk about the snapping itself first. For the purposes of this proof of concept, I simply recycled KDE neon's deb builds. Snapcraft, the build tool for snaps, has built-in support for installing debs into a snap, so that is a great timesaver to get things off the ground as it were. Additionally, I used the Plasma Wayland stack instead of the X11 stack. Confinement makes lots more sense with Wayland compared to X11.

Relocatability

Relocatability is a tricky topic. A lot of the time, fixed paths are compiled into the binary because it is easy to do and somewhat secure. Notably, depending on the specific environment at the time of invocation, one could be tricked into executing a malicious binary in $PATH instead of the desired one. Explicitly specifying the path is a well-understood safeguard against this sort of problem. Unfortunately, it also means that you cannot move your installed tree anywhere but where it was installed. The relocatable and safe solution is slightly more involved: you need to resolve what you want to invoke relative to your own location. That it is more code, and not exactly trivial to get right, is why one often opts to simply hard-compile paths. This is a problem in terms of packing things into a relocatable snap, though. I had to apply a whole bunch of hacks to either resolve binaries from PATH or resolve their location relatively. None of these are particularly useful patches, but here ya go.
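To make that concrete, here is a hedged sketch of the relative-resolution approach in Qt; the helper function is made up, while QCoreApplication::applicationDirPath() and QStandardPaths::findExecutable() are the APIs one would reach for:

    #include <QCoreApplication>
    #include <QDir>
    #include <QFile>
    #include <QStandardPaths>

    // Resolve a helper binary relative to our own location first, so the
    // lookup keeps working when the whole tree is mounted at a changing path.
    QString resolveHelper(const QString &name)
    {
        const QString relative =
            QDir(QCoreApplication::applicationDirPath()).filePath(name);
        if (QFile::exists(relative))
            return relative;
        // Fall back to a $PATH lookup -- the less safe option discussed above.
        return QStandardPaths::findExecutable(name);
    }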

Session

Once all relocatability issues were out of the way I finally had an actual Plasma session. Weeeh!

Confinement

Confining Plasma as a whole is fairly straightforward, albeit a bit of a drag since it's basically a matter of figuring out what is or isn't required to make things fly. A lot of logouts and logins is what it takes. Fortunately, snaps have a built-in mechanism to expose DBus session services offered by them. A full blown Plasma session has an enormous amount of services it offers on DBus, from the general purpose notification service to the special interest Plasma Activity service. Being able to expose them efficiently is a great help in tweaking confinement.

Not everything is about DBus though! Sometimes a snap needs to talk to a system service, and obviously a workspace as powerful as Plasma needs to talk to a bunch of them. Advanced access control needs to be done in snapd (the thing that manages installed snaps). Snapd's interfaces control what is and is not allowed for a snap. To get Plasma to start and work with confinement, a bunch of holes need to be poked in the confinement that are outside the scope of the existing interfaces. KWin, in particular, takes the role of a fairly central service in the Plasma Wayland world, so it needs far-reaching access to do its job. Unfortunately, interfaces currently can only be built within snapd's source tree itself. I made an example interface which covers most of the relevant core services, but unless you build your own snapd this won't be particularly easy to try.

Summary

All in all, Plasma is easily bundled up once one gets relocatability problems out of the way. And thanks to the confinement control snap and snapd offer, it is also perfectly possible to restrict the workspace through confinement.

I did not touch on integration issues at all, however. Running the workspace from a confined bundle is all nice and dandy, but not very useful, since Plasma won't have any applications it can launch: they live either on the system or in other snaps, and a confined Plasma would know about neither right now.

There is also the lingering question of whether confining like this makes sense at all. Putting all of Plasma into the same snap means this one snap will need lots of permissions and interaction with the host system. At the same time it also means that keeping confinement profiles up to date would be a continuous feat as there are so many things offered and used by this one snap.

One day perhaps we'll see this in production quality. Certainly not today.


21 Feb 2017 12:25pm GMT

Plasma 5.9.2, Applications 16.12.2 and Frameworks 5.31.0 available in Chakra

This announcement is also available in Spanish and Taiwanese Mandarin.

The latest updates for KDE's Plasma, Applications and Frameworks series are now available to all Chakra users.

Included with this update is an update of the ncurses, readline and gnutls related group of packages, as well as many other important updates in our core repository. Be aware that during this update your screen might turn black. If that is the case and it does not automatically restore after some time, please switch to tty3 with Ctrl+Alt+F3 and then switch back to the Plasma session with Ctrl+Alt+F7. If that does not work, please give the upgrade enough time to complete before shutting down. You can check your CPU usage with 'top' after logging in on tty3. You can reboot from tty3 using 'shutdown --reboot'.

The Plasma 5.9.2 release provides additional bugfixes to the many new features and changes that were introduced in 5.9.0, aimed at enhancing users' experience.

Applications 16.12.2 includes more than 20 recorded bugfixes and improvements to kdepim, dolphin, kate, kdenlive, ktouch and okular, among others.

Frameworks 5.31.0 includes Python bindings for many modules, in addition to the usual bugfixes and improvements.

Other notable package upgrades and changes:

[core]
alsa-utils 1.1.3
bash 4.4.005
binutils 2.27
dhcpcd 6.11.5
dnsutils 9.11.1
ffmpeg 2.8.11
gawk 4.1.4
gdb 7.12
gnutls 3.5.8: If you have local or CCR packages that require it, they might need a rebuild
gstreamer 1.10.3
gutenprint 5.2.12
hunspell 1.6.0
jack 0.125.0
kdelibs 4.14.29
make 4.2.1
mariadb 10.1.21
mplayer 37916
ncurses 6.0+20170204: If you have local or CCR packages that require it, they might need a rebuild
php 7.0.15
postgresql 9.6.1
python2 2.7.13
readline 7.0.001: If you have local or CCR packages that require it, they might need a rebuild
samba 4.5.3
sqlite3 3.16.0
texinfo 6.3
util-linux 2.29
vim 8.0.0142
wpa_supplicant 2.6

[desktop]
fcitx-qt5 1.1.0
libreoffice 5.2.5
nano 2.7.4
wireshark 2.2.4
qemu 2.8.0
screen 4.5.0

[gtk]
filezilla 3.24.0
thunderbird-kde 45.7.1

[lib32]
wine 2.2

It should be safe to answer yes to any replacement question by Pacman. If in doubt or if you face another issue in relation to this update, please ask or report it on the related forum section.

Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror synchronized with our main server after this announcement.

21 Feb 2017 1:07am GMT

20 Feb 2017

Planet KDE

OpenStack Summit Boston: Vote for Presentations

The next OpenStack Summit takes place in Boston, MA (USA) on May 8-11, 2017. The "Vote for Presentations" period has already started, and all proposals are once again up for community votes. The period will end February 21st at 11:59pm PST (February 22nd at 8:59am CEST).

This time I have submitted a proposal together with WDLabs.


This time the voting process changed again; unique URLs to proposals seem to work again. So if you would like to vote for my talk, use this link or search for the proposal (e.g. use the title from above or search for "Al-Gaaf"). As always: every vote is highly welcome!

As in previous rounds, I highly recommend also searching for "Ceph" or whatever topic you are interested in. You can find the voting page here, with all proposals and abstracts. I'm looking forward to seeing if and which of these talks will be selected.

20 Feb 2017 11:27pm GMT

Three new FOSS umbrella organisations in Europe

Last year, three new umbrella organisations for free and open-source software (and hardware) projects emerged in Europe. Their aim is to cater to the needs of the community by providing a legal entity for projects to join, leaving the projects free to focus on technical and community tasks. These organisations (Public Software CIC, [The Commons Conservancy], and the Center for the Cultivation of Technology) will take on the overhead of actually running a legal entity themselves.

Among other services, they offer to handle donations, accounting, grants, legal compliance, or even complex governance for the projects that join them. In my opinion (and, seemingly, theirs) such services are useful to these kinds of projects; some of the options that these three organisations bring to the table are quite interesting and inventive.

The problem

As a FOSS or OSHW project grows, it is likely to reach a point where it requires a legal entity for better operation - whether to gather donations, pay for development, handle finances, organise events, increase license predictability and flexibility by consolidating rights, help with better governance, or for other reasons. For example, when a project starts to hold assets - domain names, trade marks, or even just receives money through donations - that should not be the responsibility of one single person, but should, instead, be handled by a legal entity that aligns with the project's goals. A better idea is to have an entity to take over this tedious, but needed, overhead from the project and let the contributors simply carry on with their work.

So far, the options available to a project have been either to establish its own organisation or to join an existing one, neither of which may fit the project well. The existing organisations are either specialised in a specific technology or are among the few technology-neutral umbrella organisations in the US, such as Software in the Public Interest, the Apache Software Foundation, or the Software Freedom Conservancy (SFC). If there is already a technology-specific organisation (e.g. GNOME Foundation, KDE e.V., Plone Foundation) that fits a project's needs, that may well make a good match.

The problem with setting up a separate organisation is that it takes ongoing time and effort that would much better be spent on the project's actual goals. This goes double and quadruple for running it and meeting with the annual official obligations - filling out tax forms, proper reporting, making sure everything is in line with internal rules as well as laws, and so on. To make matters worse, failure to do so might result in personal liability for the project leaders that can easily reach thousands or tens of thousands of euros or US dollars.

Cross-border donations are tricky to handle, can be expensive if a currency change is needed, and are rarely tax-deductible. If a project has most of its community in Europe, it would make sense to use a European legal entity.

What is common between all three new European organisations is that none demand a specific outbound license for the projects they manage (as opposed to the Apache Software Foundation, for example), as long as it falls under one of the generally accepted free and open licenses. The organisations must also have internal rules that bind them to act in the public interest (which is the closest approximation to FOSS you can get when it comes to government authorities). Where they differ is the set of services they offer and how much governance oversight they provide.

Public Software CIC

Public Software CIC incorporated in February 2016 as a UK-based Community Interest Company. It is a fiduciary sponsor and administrative service provider for free and open source projects - what it calls public software - in Europe.

While it is not for profit, a Community Interest Company (CIC) is not a charity organisation; the other two new organisations are charities. In the opinion of Public Software's founders, the tax-deductibility that comes with a charitable status does not deliver benefits that outweigh the limitations such a status brings for smaller projects. Tax recovery on cross-border charitable donations is hard and expensive even where it is possible. Another typical issue with charities is that even when for-profit activities (e.g. selling T-shirts) are allowed, these are throttled by law and require more complex accounting - this situation holds true both for most European charities and for US 501(c)(3) charitable organisations.

Because Public Software CIC is not a charity, it is allowed to trade and has to pay taxes if it has a profit at the end of its tax year. But as Simon Phipps, one of the two directors, explained at a panel at QtCon / FSFE Summit in September 2016, it does not plan to have any profits in the first place, so that is a non-issue.

While a UK CIC is not a charity and can trade freely, by law it still has to strictly act for public benefit and, for this reason, its assets and any trading surplus are locked. This means that assets (e.g. trade marks, money) coming into the CIC are not allowed to be spent or used otherwise than in the interests of the public community declared at incorporation. For Public Software, this means the publicly open communities using and/or developing free and open-source software (i.e. public software). Compliance with the public interest for a CIC also involves approval and monitoring by the Commissioner for Community Interest Companies, who is a UK government official.

The core services Public Software CIC provides to its member projects are covered by the base fee - 10% of a project's income. This percentage seems to have become the norm (e.g. SFC charges the same). Public Software will also offer additional services (e.g. registering and holding a trade mark or domain name), but for these there will be additional fees to cover costs.

On the panel at QtCon, Phipps mentioned that it would also handle grants, including coordinating and reminding its member projects of deadlines to meet. But it would not write reports for the grants nor would it give loans against future payments from grants. Because many (especially EU) grants only pay out after the sponsored project comes to fruition, a new project that is seeking these grants should take this restriction into consideration.

Public Software CIC already hosts a project called Travel Spirit as a member and has a few projects waiting in the pipeline. While its focus is mainly on newly starting projects, it remains open to any project that would prefer a CIC. At QtCon, Phipps said that he feels it would be the best fit for smaller-scale projects that need help with setting up governance and other internal project rules. My personal (and potentially seriously wrong) prediction is that Public Software CIC would be a great fit for newly-established projects where a complex mishmash of stake holders would have to be coordinated - for example public-private collaborations.

A distinct feature of Public Software CIC is that it distinguishes between different intangible assets/rights and has different rules for them. The basic premise for all asset types is that no other single organisation should own anything from the member project; Public Software is not interested in being a "front" for corporate open source. But then the differences begin. Public Software CIC is perfectly happy and fit to hold trade marks, domain names, and such for its member projects (in fact, if a project holds a trade mark, Public Software would require a transfer). But on the other hand, it holds a firm belief that copyright should not be aggregated by default and that every developer should hold the rights to their own contribution if they are willing.

Apart from FOSS, the Public Software CIC is also open to open-source hardware or any free-culture projects joining. The ownership constraint might in practice prove troublesome for hardware projects, though.

Public Software CIC does not want to actively police license/copyright enforcement, but would try to assist a member project if it became necessary, as far as funds allowed. In fact when a project signs the memorandum of understanding to join the Public Software CIC, the responsibility for copyright enforcement explicitly stays with the project and is not transferred to the CIC. On the other hand, it would, of course, protect the other assets that it holds for a project (e.g. trade marks).

If a project wants to leave at some point, all the assets that the CIC held for it have to go to another asset-locked organisation approved by the UK's Commissioner of CICs. That could include another UK CIC or charity, or an equivalent entity elsewhere such as a US 501(c)(3).

If all goes wrong with the CIC - due to a huge judgment against one of its member projects or any other reason - the CIC would be wound down and all the remaining member projects would be spun out into other asset-locked organisation(s). Any remaining assets would be transferred to the FSFE, which is also a backer of the CIC.

[The Commons Conservancy]

[The Commons Conservancy] (TCC) incorporated in October 2016 and is an Amsterdam-based Stichting, which is a foundation under Dutch law. TCC was set up by a group of technology veterans from the FOSS, e-science, internet-community, and digital-heritage fields. Its design and philosophy reflects lessons learned in over two decades of supporting FOSS efforts of all sizes in the realm of networking and information technology. It is supported by a number of experienced organisations such as NLnet Foundation (a grant-making organisation set up in the 1980s by pioneers of the European internet) and GÉANT (the European association of national education and research networks).

As TCC's chairman Michiel Leenaars pointed out in the QtCon panel, the main goal behind TCC is to create a no-cost, legally sound mechanism to share responsibility for intangible assets among developers and organisations, to provide flexible fund-raising capabilities, and to ensure that the projects that join it will forever remain free and open. For that purpose it has invented some rather ingenious methods.

TCC concentrates on a limited set of basic services, but wants to perfect those. It also aims at being lightweight and modular.

TCC requires from its member projects only that their governance and decision-making processes are open and verifiable, and that they act in the public benefit. For the rest, it allows the member projects much freedom and offers modules and templates for governance and legal documents solely as an option. The organisation strongly believes that decisions regarding assets and money should lie with the project, relieving the pressure and dependency on individuals. It promotes best practices but tries to keep out of the project's decisions as much as possible.

TCC does not require that it hold intangible assets (e.g. copyrights, trade marks, patents, design rights) of projects, but still encourages that the projects transfer them to TCC if they want to make use of the more advanced governance modules. The organisation even allows the project to release binaries under a proprietary license, if needed, but under the strict condition that a full copy of the source code must forever remain FOSS.

Two of the advanced modules allow for frictionless sharing of intangible assets between member projects, regardless of whether the outbound licenses of these projects are compatible. The "Asset Sharing DRACC" (TCC calls its documents "Directives and Regulatory Archive of [The Commons Conservancy]", or DRACC) enables developers to dedicate their contributions to several (or all) member projects at the same time. The "Programme Forking DRACC" enables easy sharing of assets between projects when a project forks, even though the forks might have different goals and/or outbound licenses.

As a further example, the "Hibernation of assets DRACC" solves another common issue: how to ensure a project can flourish even after the initial mastermind behind it is gone. There are countless projects out there that stagnated because their main developer lost interest, moved on, or even died. This module puts special rules in place for handling a project that has fallen dormant and for how the community can revive it afterwards to simply continue development. There are more such optional rule sets available for projects to adopt, including rules on how to leave TCC and join a different organisation.

This flexibility is furthered by the fact that by design TCC does not tie the project to any money-related services. To minimise risks, [The Commons Conservancy] does not handle money at all - its statutes literally even forbid it to open a bank account. Instead, it is setting up agreements with established charitable entities that are specialised in handling funds. The easiest option would be to simply use one of these charities to handle the project's financial back-end (e.g. GÉANT has opted for NLnet Foundation), but projects are free to use any other financial back-end if they so desire.

Not only is the service TCC offers compatible with other services, it is also free as in beer, so using TCC's services in parallel with some other organisation to handle the project's finances does not increase a project's costs.

TCC is able to handle projects that receive grants, but will not manage grants itself. There are plans to set up a separate legal entity to handle grants and other activities such as support contracts, but nothing is set in stone yet. For at least a subset of projects it would also be possible to apply for loans in anticipation of post-paid (e.g. EU) grants through NLnet.

A project may easily leave TCC whenever it wants, but there are checks and balances set in place to ensure that the project remains free and open even if it spins out to a new legal entity. An example is that a spun out (or "Graduated" as it is called in TCC) project leaves a snapshot of itself with TCC as a backup. Should the new entity fail, the hibernated snapshot can then be revived by the community.

TCC is not limited to software - it is very much open to also hosting open hardware and other "commons" efforts such as open educational resources.

TCC does not plan to be involved in legal proceedings - whether filing or defending lawsuits. Nor is it an interesting target, simply because it does not take in or manage any money. If anything goes wrong with a member project, the plan is to isolate that project into a separate legal entity and keep a (licensed) clone of the assets in order to continue development afterwards if possible.

Given the background of some of the founders of TCC (with deep roots in the beginnings of the internet itself), and the memorandum of understanding with GÉANT and NREN, it is not surprising that some of the first projects to join are linked to research and core network systems (e.g. eduVPN and FileSender). Its offering seems to be an interesting framework for already existing projects that want to ensure they will remain free and open forever; especially if they have or anticipate a wider community of interconnected projects that would benefit from the flexibility that TCC offers.

The Center for the Cultivation of Technology

The Center for the Cultivation of Technology (CCT) also incorporated in October 2016, as a German gGmbH, which is a non-profit limited-liability corporation. Further, the CCT is fully owned by the Renewable Freedom Foundation.

This is an interesting set-up, as it is effectively a company that has to act in public interest and can handle tax-deductible donations. It is also able to deal with for-profit/commercial work, as long as the profit is reinvested into its activities that are in public benefit. Regarding any activities that are not in the public interest, CCT would have to pay taxes. Of course, activities in the public interest have to represent the lion's share in CCT.

Its owner, the Renewable Freedom Foundation, in turn is a German Stiftung (i.e. foundation) whose mission is to "protect and preserve civil liberties, especially in the digital landscape" and has already helped fund projects such as Tor, GNUnet, and La Quadrature du Net.

While a UK CIC and a German gGmbH are both limited-liability corporations that have to act in the public interest, they have somewhat different legal and tax obligations and each has its own specifics. CCT's purpose is "the research and development of free and open technologies". For the sake of public authorities it defines "free and open technologies" as developments with results that are made transparent and that, including design and construction plans, source code, and documentation, are made available free and without licensing costs to the general public. Applying this definition, the CCT is inclusive of open-source hardware and potentially other technological fields.

Similar to TCC, the CCT aims to be as lightweight by default as possible. The biggest difference, though, is that the Center for the Cultivation of Technology is first and foremost about handling money.

The business model is similar to that of PS CIC in that, for basic services, CCT will be taking 10% from incoming donations and that more costly tasks would have to be paid separately. There are plans to eventually offer some services for free, which would be covered by grants that CCT would apply for itself. In effect, it wants to take over the whole administrative and financial overhead from the project in order to allow the projects to concentrate on writing code and managing themselves.

Further still, the CCT has taken it upon itself to automate as much as possible, both through processes and software. If viable FOSS solutions are missing, it will write them itself and release the software under a FOSS license for the benefit of other FOSS legal entities as well.

As Stephan Urbach, its CEO, mentioned on the panel at QtCon, the CCT is not just able to handle grants for projects, but is also willing to take over reporting for them. Anyone who has ever partaken in an EU (or other) grant probably agrees that reporting is often the most painful part of the grant process. The raw data for the reports would, of course, still have to be provided by the project itself. But the CCT would then take care of relevant logistics, administration, and writing of the grant reports. The company is even considering offering loans for some grants, as soon as enough projects join to make the operations sustainable.

In addition, the Center for the Cultivation of Technology has a co-working office in Berlin, where member projects are welcome to work if they need office space. The CCT is also willing to facilitate in-person meetings or hackathons. Like the other two organisations, it has access to a network of experts and potential mentors, which it could resort to if one of its projects needed such advice.

Regarding whether it should hold copyright or not, the Center for the Cultivation of Technology is flexible, but at the very beginning it will primarily offer to hold other intangible assets, such as domain names and trade marks. That being said, at least in this early phase of its existence, holding and managing copyright is not the top priority, so the CCT has for now deferred the decision on its position regarding license enforcement and a potential lawsuit strategy. Accounting, budgeting, and handling administrative tasks, as well as automating them all, are clearly where its strengths lie, and this is where it initially wants to pour most of its effort.

Upon a dissolution of the company, its assets would fall to Renewable Freedom Foundation.

Since the founders of CCT have deep roots in anonymity and privacy solutions such as Tor, I imagine that from those corners the first wave of projects will join. As for the second wave, it seems to me that CCT would be a great choice for projects that want to offload as much of financial overhead as possible, especially if they plan to apply for grants and would like help with applying and reporting.

Conclusion

2016 may not have been the year of the Linux desktop, but it surely was the year of FOSS umbrella organisations. It is an odd coincidence that three such different organisations popped up in Europe at the same time - initially oblivious of each other - to provide much-needed services to FOSS projects.

Not only are FOSS projects now spoiled for choice regarding such service providers in Europe, but it is refreshing to see that these organisations get along so well from the start. For example, Simon Phipps is also an adviser at CCT, and I help with both CCT and TCC.

In fact, I would not be surprised to see, instead of bitter competition, greater collaboration between them, allowing each to specialise in what it does best and allowing projects to mix and match services between them. For example, I can see how a project might want to pick TCC to handle its intangible assets and at the same time use CCT to handle its finances. All three organisations have also stated that, should a project contact them that they feel would be better handled by one of the others, they would refer it to that organisation instead.

Since at least the legal and governance documents for CCT and TCC will be available online under free licenses (CC0-1.0 and CC-BY-4.0 respectively), cross-pollination of ideas, and even the setting up of new organisations, is thereby made easier. It may be early days for these three umbrella organisations, but I am quite optimistic about their usefulness and confident they will fill the gaps left open by their older US siblings and the single-project organisations.

Update: TCC's DRACC are already publicly available on-line.

If a project comes to the conclusion that it might need a legal entity, now is a great time to think about it. At FOSDEM 2017 there was another panel with CCT, TCC, PS CIC, and SFC where further questions and comments were asked.


Disclaimer: At the time of writing, I am working closely with two of the organisations - as the General Counsel of the Center for the Cultivation of Technology, and as co-author of the legal and governance documents (the DRACC) of [The Commons Conservancy]. This article does not constitute the official position of either of the two organisations nor any other I might be affiliated with.

Note: This article first appeared in LWN on 1 February 2017. This here is a slightly modified and updated version of it.


hook out → coming soon: extremely exciting stuff regarding the FLA 2.0

20 Feb 2017 10:00pm GMT

foss-gbg on Wednesday

If you happen to be in Gothenburg on Wednesday you are most welcome to visit foss-gbg. It is a free event (you still have to register so that we can arrange some light food) starting at 17.00.

The topics are Yocto Linux on FPGA-based hardware, risk and license management in open source projects and a product release by the local start-up Zifra (an encryptable SD-card).

More information and free tickets are available at the foss-gbg site.

Welcome!

20 Feb 2017 6:08am GMT

17 Feb 2017

Planet KDE

KStars 2.7.4 for Windows is released!

Glad to announce the release of KStars v2.7.4 for Windows 64-bit. This version is built with a more recent Qt (5.8) and the latest KF5 frameworks for Windows, bringing more features and stability.


This release brings many bug fixes, enhancements for limited-resource devices, and improvements, especially to KStars' premier astrophotography tool: Ekos. Windows users will be glad to learn that they can now use an offline astrometry solver in Windows, thanks to the ANSVR local Astrometry.net solver. ANSVR mimics the astrometry.net online server on your local computer, so no internet connection is required for astrometry queries.

After installing the ANSVR server and downloading the appropriate index files for your setup, you can simply change the API URL in Ekos to point to the ANSVR server.



In the Ekos Align module, keep the solver type set to Online so that it uses the local ANSVR server for all astrometry queries. Then you can use the align module as you normally would. This release also features the Ekos Polar Alignment Assistant tool, a very easy-to-use, spot-on tool for polar aligning your mount.

Clear skies!

17 Feb 2017 5:09pm GMT

Editing files as root

For years I have told people not to start Kate as root to edit files. The normal response I got was "but I have to edit this file". The problem with starting GUI applications as root is that X11 is extremely insecure, and it is considerably easy for another application to attack them.

An application like Kate depends on libraries such as Qt. Qt itself disallows running as a setuid app:

Qt is not an appropriate solution for setuid programs due to its large attack surface.

If Qt is not an appropriate solution for setuid programs, it is also not an appropriate solution for GUI applications running as root. And Qt is just one of the dependencies of graphical applications. There are obviously also xcb, Xlib, OpenGL, xkbcommon, etc.

So how can another application attack an application running as root? A year ago I implemented a simple proof-of-concept attack against Dolphin. The attack waits for Dolphin to be started as root. As soon as that happens, it uses the XTest extension to fake input, enable the embedded konsole window, and type into it.

This is just one example. The elephant in the room is string handling, though. Every X11 window has many window properties, and every process can write to them. We just have to accept that string handling is complex and can easily trigger a crash.

Luckily there is no need to run the editor as root just to edit a file. There is a neat tool called sudoedit that does the magic of starting the editor as your normal user and takes care of storing the file as root when you save.

Today I pushed a change for Kate and KWrite that no longer allows them to be run as root. Instead, it educates the user about how to do the same with sudoedit.

Now I understand that this will break the workflow for some users. But with a minor adjustment to your workflow you get the same result. In fact it will be better, because the Kate you start is able to pick up your configured styling. It will also work on Wayland. And most importantly, it will be secure.

I am also aware that if you run a malicious application you are already owned. I think we should protect users nevertheless.

17 Feb 2017 5:09pm GMT

KDE Applications 17.04 Schedule finalized

It is available at the usual place https://community.kde.org/Schedules/Applications/17.04_Release_Schedule

Dependency freeze is in 4 weeks and Feature Freeze in 5 weeks, so hurry up!

17 Feb 2017 8:45am GMT

Boot to Qt on embedded HW using Android 7.0 and Qt 5.8

Creating a demo setup or proof-of-concept for an embedded device can be a real pain. To ease that pain, Qt for Device Creation has a list of supported devices for which you can flash a "Boot to Qt" image and get your software running on the target HW literally within minutes.

Background

Back in 2014 we introduced a way to make an Android device boot to Qt without the need of a custom OS build. Android has been ported to several devices and the Android injection method made it possible to get all the benefits of native Qt applications on an embedded device with the adaptation already provided by Android.

The Android injection was introduced with Qt version 5.3.1, supporting Android versions 4.2 and 4.4. It is not in our best interest that anyone be forced to use an older version of Qt, nor does it help if the Android version we support does not support the hardware that developers are planning to use. I have good news: the situation has now changed.

Late last year we realized that there is still demand for Android injection on embedded devices, so we checked what it would take to bring the support up to date. The target was to use Qt 5.8 to build the Boot to Qt demo application and run it on a device running Android 7.0. The device of choice was the Nexus 6 smartphone, one of the supported devices for Android Open Source Project version 7.0.0.

The process

We first took the Android 7.0 toolchain and updated the Qt 5.4 Boot to Qt Android injection source code to match the updated APIs of Android 7.0. Once we could build Qt 5.4 with the toolchain, it was time to port the changes all the way to Qt 5.8. Qt's modularity has improved since version 5.4, which became apparent during the process; for example, the old SurfaceFlinger integration was replaced with a platform plugin.

The results can be seen in the videos below.

The Boot to Qt Android injection is an excellent way to speed up the development and get your software to run on target hardware as early as possible. If you want to know more about the Boot to Qt and Android injection, don't hesitate to contact us.

The post Boot to Qt on embedded HW using Android 7.0 and Qt 5.8 appeared first on Qt Blog.

17 Feb 2017 8:30am GMT

Kubuntu 16.04.2 LTS Update Available

The second point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.5. Read more about it: http://kubuntu.org/news/plasma-5-8-5-bugfix-release-in-xenial-and-yakkety-backports-now/

Warning: 14.04 LTS to 16.04 LTS upgrades are problematic and should not be attempted by the average user. Please install a fresh copy of 16.04.2 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.

See the Ubuntu 16.04.2 release announcement and Kubuntu Release Notes.

Download 16.04.2 images.

17 Feb 2017 3:59am GMT

A simple Rust GUI with QML

You may have heard of Rust by now. The new programming language that "pursues the trifecta: safety, concurrency, and speed". You have to admit, even if you don't know what trifecta means, it sounds exciting.

I've been toying with Rust for a while and have given a presentation at QtCon comparing C++ and Rust. I've been meaning to turn that presentation into a blog post. This is not that blog post.

Here I show how you can use QML and Rust together to create graphical applications with elegant code. The example we're building is a very simple file browser. People that are familiar with Rust can ogle and admire the QML snippets. If you're a Qt and QML veteran, I'm sure you can read the Rust snippets here quite well. And if you're new to both QML and Rust, you can learn twice as much.

The example here is kept simple and poor in features intentionally. At the end, I'll give suggestions for simple improvements that you can make as an exercise. The code is available as a nice tarball and in a git repo.

Command-line Hello, world!

First we set up the project. We will need to have QML and Rust installed. If you do not have those yet, just continue reading this post and you'll be all the more motivated to go ahead and install them.

Once those two are installed, we can create a new project with Rust's package manager and build tool cargo.

[~/src]$ # Create a new project called sousa (it's a kind of dolphin ;-)
[~/src]$ cargo new --bin sousa
     Created binary (application) `sousa` project

[~/src]$ cd sousa

[~/src/sousa]$ # Add a dependency for the QML bindings version 0.0.9
[~/src/sousa]$ echo 'qml = "0.0.9"' >> Cargo.toml
[~/src/sousa]$ # Build, this will download and compile dependencies and the project.
[~/src/sousa]$ cargo build
    Updating registry `https://github.com/rust-lang/crates.io-index`
   Compiling libc v0.2.20
   Compiling qml v0.0.9
   Compiling lazy_static v0.2.2
   Compiling sousa v0.1.0 (file:///home/oever/src/sousa)
    Finished debug [unoptimized + debuginfo] target(s) in 25.39 secs

[~/src/sousa]$ # Run the program.
[~/src/sousa]$ cargo run
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `target/debug/sousa`
Hello, world!

The same without output:

cargo new --bin sousa
cd sousa
echo 'qml = "0.0.9"' >> Cargo.toml
cargo build
cargo run

The mix of Rust and QML lives! Of course the program is not using any QML yet. Let's fix that.

Hello, world! with QML

Now that we have a starting point we can start adding some QML. Let's change src/main.rs from a command-line Hello, world to a graphical Hello, world! application.

main.rs before

fn main() {
    println!("Hello, world!");
}

Some explanation for the people reading Rust code for the first time: things that look like functions but have a name that ends with ! are macros. Forget everything you know about C/C++ macros. Macros in Rust are elegant and powerful. We will see this below when we mock moc.

main.rs after

extern crate qml;

use qml::*;

fn main() {
    // Create a new QML Engine.
    let mut engine = QmlEngine::new();

    // Bind a message to the QML enviroment.
    let message = QVariant::from("Hello, world!");
    engine.set_property("message", &message);

    // Load some QML
    engine.load_data("
        import QtQuick 2.0
        import QtQuick.Controls 1.0

        ApplicationWindow {
            visible: true
            Text {
                anchors.fill: parent
                text: message
            }
        }
    ");
    engine.exec();
}

Modules in Rust are called "crates". This example uses QML bindings that currently have version number 0.0.9. So the API may change.

In the example above, the QML is placed literally in the code. Literal strings in Rust can span multiple lines.

Usually you do not need to specify the type of a variable, you can just type let (for immutable objects) or let mut for mutable ones. Like in C++, & is used to pass an object by reference. You have to use the & in the function definition, but also when calling the function (unless your variable is a reference already).

The QML code has an ApplicationWindow with a Text. The message, Hello, world! is passed to the QML environment as a QVariant. This is the first time in our program that information goes between Rust and QML.

[Screenshot: the Hello, world! window]

Like above, the application can be run with cargo run.

Splitting the code

Let's make this code a bit more maintainable. The QML is moved to a separate file src/sousa.qml which we load from Rust.

import QtQuick 2.0
import QtQuick.Controls 1.0

ApplicationWindow {
    visible: true
    Text {
       anchors.fill: parent
       text: message
    }
}

You can see the adapted Rust code below. In debug mode, the file is read from the file system. In release mode, the file is embedded into the executable to make deployment easier.

extern crate qml;

use qml::*;

fn main() {
    // Create a new QML Engine.
    let mut engine = QmlEngine::new();

    // Bind a message to the QML enviroment.
    let text = QVariant::from("Hello, world!");
    engine.set_property("message", &text);

    // Load some QML
#[cfg(debug_assertions)]
    engine.load_file("src/sousa.qml");
#[cfg(not(debug_assertions))]
    engine.load_data(include_str!("sousa.qml"));
    engine.exec();
}

The statements #[cfg(debug_assertions)] and #[cfg(not(debug_assertions))] are conditional compilation for the next expression. So when you run cargo run, the QML file will be read from disk and with cargo run --release, the QML will be inside the executable. In debug mode it is convenient to avoid recompilation for changes to the QML code.

Listing the contents of a folder

Now that we've created an application that combines Rust and QML let's go a step further and list the contents of a directory instead of a simple message.

QML has a ListView that can display the contents of a ListModel. The ListModel can be filled by the Rust code. First we create a simple Rust structure that contains information about files.

Q_LISTMODEL_ITEM!{
    pub QDirModel<FileItem> {
        file_name: String,
        is_dir: bool,
    }
}

Q_LISTMODEL_ITEM! ends with a !, so it's a macro. Rust macros use pattern matching on the macro's content, and the matched values are used to generate code. The macro system is not unlike C++ templates, but with a more flexible syntax and simpler rules.

On the QML side, we'd like to show the file names. Directory names should be shown in italic.

ApplicationWindow {
    visible: true

    ListView {
        anchors.fill: parent
        model: dirModel
        delegate: Text {
            text: file_name
            font.italic: is_dir
        }
    }
}

The ListView shows data from a ListModel that we'll define later.

The delegate in the ListView is a kind of template. When an entry in the list is visible in the UI, the delegate is the UI component that shows that entry. The delegate that is shown here is very simple. It is just a Text that shows the file name.

Next, we need to connect the information on the file system to the model. That is done in two steps.

Instead of binding a Hello, world! message to the QML environment, we create an instance of our QDirModel and bind it to the QML environment.

    // Create a model with files.
    let dir_str = ".";
    let current_dir = fs::canonicalize(dir_str)
        .expect(&format!("Could not canonicalize {}.", dir_str));
    let mut dir_model = QDirModel::new();
    list_dir(&current_dir, &mut dir_model).expect("Could not read directory.");
    engine.set_and_store_property("dirModel", &dir_model);

The model is initialized with the current directory. That directory is canonicalized. That means it is made absolute and symbolic links are resolved. This function may fail and Rust forces us to deal with that. If there is an error in fs::canonicalize(dir_str), the returned result is an error instead of a value. The function expect() takes the error and an additional message, prints it and stops the current thread or program in a controlled way. Rust is a safe programming language because of features like this where potential problems are prevented at compile-time.

The last missing piece is the function list_dir that reads entries in a directory and places them in the QDirModel.

use std::fs;
use std::io;
use std::path::Path;

fn list_dir(dir: &Path, model: &mut QDirModel) -> io::Result<()> {
    // get an iterator over the readable entries
    let entry_iter = fs::read_dir(dir)?.filter_map(|e| e.ok());

    model.clear();
    for entry in entry_iter {
        if let Ok(metadata) = entry.metadata() {
            if let Ok(file_name) = entry.file_name().into_string() {
                model.append_item(FileItem {
                    file_name: file_name,
                    is_dir: metadata.is_dir(),
                });
            }
        }
    }
    Ok(())
}

There is a lot happening in the first line of this function. An iterator is taken over the contents of the directory. If reading the directory fails, the function stops and returns an Err; that is what the ? in fs::read_dir(dir)? does. Reading an individual entry may also fail, in which case the iterator yields an Err. We choose to skip over the erroneous reads; we filter them out with filter_map(|e| e.ok()).
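
The ? operator is roughly equivalent to this match (a sketch of the desugaring; the real operator also converts the error type via From):

let entries = match fs::read_dir(dir) {
    Ok(entries) => entries,
    Err(err) => return Err(err),
};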

Next, the entries are added to the model in a for loop. Again we see code that deals with possible errors. Reading the metadata for a file may give an error. We choose to skip entries with such errors. Only the entries for which Ok is returned are handled.

The UI should display the file name. Rust uses UTF-8 internally, but a file name can be nearly any sequence of bytes. If the entry is not a valid UTF-8 string, we ignore that entry here. Another option would be to keep the byte array (Vec<u8>) and use a lossy representation of the file name in the user interface, one that replaces the parts that cannot be represented in UTF-8.
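
Inside the loop, that alternative could look like this (a sketch; to_string_lossy() substitutes U+FFFD for the invalid bytes instead of skipping the entry):

// accept every entry, replacing invalid UTF-8 instead of skipping
let file_name = entry.file_name().to_string_lossy().into_owned();
model.append_item(FileItem {
    file_name: file_name,
    is_dir: metadata.is_dir(),
});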

In other programming languages, it'd be easier to handle these cases sloppily. In Rust we have to be explicit. This explicit code is safer and more understandable for the next programmer reading it.

And here is the result of cargo run. A directory listing with two files and two folders.

a listing of files

A simple file browser

Listing only one fixed directory is no fun. We want to navigate to other directories by clicking on them. We'd like to have an object that can receive the name of a folder that it should enter and update the directory listing.

To achieve that we need a staple from the Qt stable: QObject. A QObject can send signals and receive signals. Signals are received in slots. When programming in C++, a special step is needed during compilation: the program moc generates code from the C++ headers.

Thanks to its more powerful macros, Rust can skip this extra step: no moc is needed. The syntax to define a QObject in Rust is as simple as in C++. This is our QDirLister:

pub struct DirLister {
    current_dir: PathBuf,
    model: QDirModel,
}
Q_OBJECT!{
    pub DirLister as QDirLister {
        signals:
        slots:
            fn change_directory(dir_name: String);
        properties:
    }
}

The macro Q_OBJECT takes the struct DirLister and wraps it in another struct QDirLister that has signals, slots and properties.

Our simple QDirLister defines only one slot, change_directory, that will receive signals from the QML code when a directory name is clicked. Here is the implementation:

impl QDirLister {
    fn change_directory(&mut self, dir_name: String) -> Option<&QVariant> {
        let new_dir = if dir_name == ".." {
            // go to the parent if there is a parent
            self.current_dir.parent().unwrap_or(&self.current_dir).to_owned()
        } else {
            self.current_dir.join(dir_name)
        };
        if let Err(err) = list_dir(&new_dir, &mut self.model) {
            println!("Error listing {}: {}",
                     new_dir.to_string_lossy(),
                     err);
            return None;
        }
        // listing the directory succeeded, so update current_dir
        self.current_dir = new_dir;
        None
    }
}

If the directory is .., we move up one directory with parent(). Again we have to explicitly handle the case where there is no parent directory; we choose to stay in the same directory in that case.

If the directory is not .., we join() the directory name to current_dir. We then update the model with a new directory listing; if that fails, we print an error and stay in the current directory.

QDirLister has to be hooked up to the QML code. We add this snippet to the fn main() that we defined earlier.

    // Create a DirLister and pass it to QML
    let dir_lister = DirLister {
        model: dir_model,
        current_dir: current_dir.into(),
    };
    let q_dir_lister = QDirLister::new(dir_lister);
    engine.set_and_store_property("dirLister", q_dir_lister.get_qobj());

And this is how we use it from QML:

import QtQuick 2.0
import QtQuick.Controls 1.0

ApplicationWindow {
    visible: true

    ListView {
        anchors.fill: parent
        model: dirModel
        delegate: Text {
            text: file_name
            font.italic: is_dir

            MouseArea {
                anchors.fill: parent
                cursorShape: is_dir ? Qt.PointingHandCursor : Qt.ArrowCursor
                onClicked: {
                    if (is_dir) {
                        dirLister.change_directory(file_name);
                    }
                }
            }
        }
    }
}

To receive mouse input in QML, there needs to be a MouseArea. When it is clicked (onClicked), it runs a bit of JavaScript that sends the file_name to the dirLister via the slot change_directory.

our file browser

Conclusion

Hooking up QML and Rust is elegant. We've created a simple file browser with one QML file, sousa.qml, one Rust file, main.rs, and one package/build file, Cargo.toml.

There are many nice QML user interfaces out there that can be repurposed on top of Rust code. QML can be visually edited with QtCreator. QML can be used for mobile and desktop applications. It's very nice that this wonderful method of creating user interfaces can be used with Rust.

To the C++ programmers: I hope you enjoyed the Rust code and found some inspiration in it. Because Rust is a new language, it can introduce innovative features that cannot easily be added to C++. Rust and C++ can be mixed in one codebase, as is done in Firefox.

Rust has many more wonderful features than can be covered in this blog. You can read more in the Rust book.

Assignments

I promised some assignments. Here they are.

  1. Show an error dialog when a directory cannot be shown. (Hint: the code is already in the git repo and shows a QML feature that we did not use yet: signals.)

  2. Show the file size in the file listing.

  3. Do not make directories clickable if the user has no permission to open them.

  4. Open simple files like pictures and text files when clicked by showing them in a separate pane.

17 Feb 2017 12:00am GMT

16 Feb 2017

feedPlanet KDE

Cutelyst 1.4.0 released, C100K ready.

Yes, it's not a typo.

Thanks to the last batch of improvements, and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the EPoll event loop, can really scale your web application. Initially I thought that the option to replace Qt's default glib event loop (on Unix) brought no gain, but after increasing the number of connections it handled them a lot better. With 256 connections, the rate using the glib event loop drops to 65k req/s, while the EPoll one stays at 90k req/s, a lot closer to the number measured with only 32 connections.

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations working in Grantlee templates. cutelyst-wsgi got -socket-timeout, -lazy, many fixes, removal of deprecated Qt API usage, and Unix signal handling that now seems to work properly.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang on FreeNode #cutelyst IRC channel or Google groups: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


16 Feb 2017 7:20pm GMT

Ekos Polar Alignment Assistant Tool

When setting up a German Equatorial Mount (GEM) for imaging, a critical aspect of capturing long-exposure images is ensuring proper polar alignment. A GEM mount has two axes: the Right Ascension (RA) axis and the Declination (DE) axis. Ideally, the RA axis should be aligned with the celestial sphere's polar axis. A mount's job is to track the stars' motion around the sky, from the moment they rise at the eastern horizon, all the way up across the meridian, and westward until they set.


In long-exposure imaging, a camera is attached to the telescope and the image sensor captures incoming photons from a particular area of the sky. The incident photons have to strike the same photo-site over and over again if we are to gather a clear and crisp image. Of course, actual photons do not behave this way: optics, atmosphere, and seeing quality all scatter and refract photons in one way or another. Furthermore, photons do not arrive uniformly but follow a Poisson distribution. For point-like sources such as stars, a point spread function describes how photons are spatially distributed across the pixels. Nevertheless, the overall idea is that we want to keep the source photons hitting the same pixels. Otherwise, we might end up with an image plagued by various trail artifacts.

Since mounts are not perfect, they cannot perfectly track an object as it transits across the sky. This can stem from many factors, one of which is the misalignment of the mount's Right Ascension axis with respect to the celestial pole axis. Polar alignment removes one of the biggest sources of tracking errors in the mount, but other sources of error still play a role. If properly aligned, some mounts can track an object for a few minutes with a deviation of only 1-2 arcsec RMS.

However, unless you have a fancy top-of-the-line mount, you'd probably want to use an autoguider to keep the same star locked in the same position over time. Despite all of this, if the axis of the mount is not properly aligned with the celestial pole, even a mechanically perfect mount will lose tracking over time. Tracking errors are proportional to the magnitude of the misalignment. It is therefore very important for long-exposure imaging to get the mount polar aligned, to reduce the residual errors as the target spans across the sky.

Several polar-alignment aids exist today, including, but not limited to:

1. Polar scope built-in your mount.
2. Using drift alignment from applications like PHD2.
3. Dedicated hardware like QHY's PoleMaster.
4. Ekos Legacy Polar Alignment tool: you need to take exposures of two different points in the sky to measure the drift and find the polar error in each axis (Altitude & Azimuth).
5. SharpCap Polar Alignment tool.

Out of the above, the easiest to use are probably QHY's PoleMaster and SharpCap's Polar Alignment tool. However, both are exclusive to Windows. KStars users have long requested an easy-to-use polar alignment helper in Ekos that leverages its astrometry.net backend.

During the last couple of weeks, I worked on developing the Ekos Polar Alignment Assistant Tool (PAA). I started with a simple mathematical model consisting of two images rotated by an arbitrary angle. A sample illustration of this is below:



Given two points, we know the chord between them, and from the rotation angle we can calculate the radius of the circle they lie on. However, there are two circle solutions that match this data, one of which contains the mount's actual RA axis within the image. Finding out which solution is the correct one turned out to be challenging, and even the mount's own reported rotation angle cannot be fully trusted. To draw a unique circle, you need 3 points. So it was suggested by Gerry Rozema, one of INDI's venerable developers, to capture 3 images to uniquely identify the circle without involving a lot of fancy math.
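
To make the two-image geometry concrete, here is a minimal sketch (my own illustration in Rust, not Ekos code): from the chord between the two solved image centers and the rotation angle theta, the radius follows from chord = 2·r·sin(theta/2), and the two candidate centers sit on the perpendicular bisector of the chord:

fn candidate_centers(p1: (f64, f64), p2: (f64, f64), theta: f64)
                     -> ((f64, f64), (f64, f64)) {
    let (dx, dy) = (p2.0 - p1.0, p2.1 - p1.1);
    let chord = (dx * dx + dy * dy).sqrt();
    // chord = 2 r sin(theta / 2)  =>  r = chord / (2 sin(theta / 2))
    let r = chord / (2.0 * (theta / 2.0).sin());
    // distance from the chord midpoint to either candidate center
    let h = (r * r - chord * chord / 4.0).max(0.0).sqrt();
    let mid = ((p1.0 + p2.0) / 2.0, (p1.1 + p2.1) / 2.0);
    // unit vector perpendicular to the chord
    let (ux, uy) = (-dy / chord, dx / chord);
    ((mid.0 + h * ux, mid.1 + h * uy),
     (mid.0 - h * ux, mid.1 - h * uy))
}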

Since it relies on astrometry.net, the PAA has more relaxed requirements than other tools, making it accessible to more users. You can use your own primary or guide camera, provided it has a wide enough FOV for the astrometry solver.

Moreover, the assistant can automatically capture, solve, and even rotate the mount for you. All you have to do is to make the necessary adjustments to your mount.

The new PAA works by capturing and solving three images. It is technically possible to rely on two images only, as described above, but three images improve the accuracy of the solution. After each capture, the mount rotates by a fixed amount and another image is captured and solved.



Since the mount's true RA/DE coordinates are resolved by astrometry, we can construct a unique circle from the three centers found in the astrometry solutions. The circle's center is the point the mount rotates about (the RA axis), and ideally this point should coincide with the celestial pole. However, if there is a misalignment, Ekos draws a correction vector. This correction vector can be placed anywhere in the image. Next, the user refreshes the camera feed and adjusts the mount's Altitude and Azimuth knobs until the star is located in the designated crosshair. It's that easy!
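
The construction itself is classic geometry; here is a sketch (again my own Rust illustration, not the Ekos implementation) of finding the circumcenter of the three solved image centers:

fn circumcenter(a: (f64, f64), b: (f64, f64), c: (f64, f64))
                -> Option<(f64, f64)> {
    let d = 2.0 * (a.0 * (b.1 - c.1) + b.0 * (c.1 - a.1) + c.0 * (a.1 - b.1));
    if d.abs() < 1e-12 {
        return None; // the three points are collinear, no unique circle
    }
    let (a2, b2, c2) = (a.0 * a.0 + a.1 * a.1,
                        b.0 * b.0 + b.1 * b.1,
                        c.0 * c.0 + c.1 * c.1);
    let ux = (a2 * (b.1 - c.1) + b2 * (c.1 - a.1) + c2 * (a.1 - b.1)) / d;
    let uy = (a2 * (c.0 - b.0) + b2 * (a.0 - c.0) + c2 * (b.0 - a.0)) / d;
    Some((ux, uy)) // the projected RA axis; compare it to the celestial pole
}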

Ekos PAA is now in Beta and tests/feedback are highly appreciated.


16 Feb 2017 1:45pm GMT

Atelier now has a Logo! =D

Yay! Now we have a logo! What do you think of it? Atelier is reaching around six months of development, and now it is time to give you some updates. AtCore is on its way to becoming stable, and I'm working on the Atelier interface, so we can connect to AtCore and do some magic to everything [...]


16 Feb 2017 1:21am GMT