14 Dec 2019

feedPlanet KDE

This week in KDE: building up to something big

We've got some really big things planned and in progress for Plasma 5.18 and Frameworks, and work proceeds smoothly. None of it is quite done yet, but we did land a number of nice bugfixes and user interface polish for issues that have been irritating people for years…and mere days! Have a look:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

Do you like to categorize things? So do I! Then why not try your hand at triaging bugs? KDE's faithful users file about 25 of them every day, and they all need to be looked at, categorized, moved to the correct products, marked appropriately, closed if the issue has already been fixed; in short, triaged. It's fun and easy and can be done in 5-minute spurts during boring periods of the day. Triaging bugs is a super helpful way to relieve some of the developers' burden if you're not a developer yourself, since developers vastly prefer writing code and fixing bugs to triaging them! For more information, check out https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging!

More generally, have a look at https://community.kde.org/Get_Involved and find out more ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don't have to already be a programmer, either. I wasn't when I got started. Try it, you'll like it! We don't bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

14 Dec 2019 11:08pm GMT

Preparing foss-north 2020

Next year's foss-north will take place March 29 - 31, with the training day on April 1. Preparations are under way, and now we need your participation to make this event as great as the past years.

We've also opened the Call for Papers. We truly believe that we bring together the best audience and the best speakers. Being a part of this is a great experience, so make sure to get your talk proposal submitted.

Another part of the foss-north experience is the community day. The day before the actual conference, a large set of community groups arranges workshops, hackathons, dev sprints, and even mini-conferences. This year we've already confirmed the participation of KDE, FreeBSD, and "something embedded" arranged by Endian (last year they did a full-day workshop on the Zephyr Project).

If you want to be a part of the community day, don't hesitate to reach out to info@foss-north.se. We help with a venue, food, and promotion. All you need to have is a cause!

In addition to this we are, of course, on the lookout for sponsors. If you want to support us, or even take part in the conference with a booth, please join our Call for Sponsors. Make sure to tell your employer that they should sponsor; all sponsor packages include free tickets, so you can both participate in the event and help us make it possible.

Between all of this we're also working on the infrastructure. I'd like to extend a big thanks to Magnus Hagander from PostgreSQL, who is helping us migrate to their pgeu-system. This will give us a single system integrating the features we need: tickets, sponsors, scheduling, and accounting. No more Google Forms, Eventbrite, and manual coordination of systems. If you like CSS, HTML, and such, you're more than welcome to help; some pages still have rough edges.

Long story short: join us at foss-north 2020 - it will be fun! Take the opportunity to see Gothenburg at the end of March 2020.

14 Dec 2019 11:23am GMT

KDE Itinerary @ 36C3

I'll finally be attending the Chaos Communication Congress for the first time this year, after having failed to obtain a ticket in the past. This week I got the actual ticket document, and seeing it contain a UIC 918.3 barcode, I of course had to make KDE Itinerary support this ticket too :)

36C3 Ticket

The 36C3 ticket is a nice example of recently added features in KDE Itinerary, and also sufficiently special to show the flexibility of the extractor infrastructure. It's a PDF file with two barcodes: a QR code needed for entrance, and an Aztec code with a UIC 918.3 payload for local public transport to the venue.

The QR code is fairly useless for our purposes; it basically just contains a random number. That's good from a privacy point of view, but it provides nothing the extractor can work with. Not so the second one, which contains some very specific markers we can use for selecting the extractor, such as the unassigned UIC operator code "9997" and the passenger name "36C3".

So, using the following filter expression in the extractor metadata gives us single PDF pages with one 36C3 ticket each to process:

    {
        "filter": [ {
            "match": "uic:9997",
            "property": "reservationFor.provider.identifier",
            "type": "JsonLd"
        } ],
        "type": "Pdf"
    }

The extractor script needs to do a few unusual things though. First, the generic extraction assumes this to be a train ticket, as that's what UIC 918.3 barcodes are usually used for. We want this to be an event ticket though. So rather than the usual approach of augmenting the results from the generic extraction, we simply create an entirely new object and return that, replacing the existing results.
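As a rough illustration of this replace-instead-of-augment step, the idea can be sketched in plain JavaScript. The JSON-LD-style field names mirror the ones used elsewhere in this post, but the `extract` function and its exact shape are illustrative assumptions, not the real KItinerary extractor API:

```javascript
// Sketch of "replace instead of augment": the generic extraction hands
// the script a train reservation, and the script discards it entirely,
// returning a brand-new event reservation in its place.
function extract(genericResult) {
    // genericResult would be a TrainReservation here; we ignore it
    var res = {
        "@type": "EventReservation",
        reservationFor: {
            "@type": "Event",
            name: "36C3"
        },
        reservedTicket: {
            "@type": "Ticket"
        }
    };
    return res; // replacing, not augmenting, the generic result
}
```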

KMail's Itinerary plug-in detecting an attached 36C3 ticket.

The other special thing is that we have two ticket tokens (barcodes) here, both of which we need to preserve. That's where the recently added multi-ticket support comes in. For this we need to generate two otherwise identical event tickets with separate ticket tokens and ticket names. The slightly hacky part is finding the second barcode: due to its non-descriptive content, we need to do that manually, which requires iterating over all images on the same PDF page:

var images = pdf.pages[Context.pdfPageNumber].images;
for (var i = 0; i < images.length; ++i) {
    var code = Barcode.decodeQR(images[i]);
    if (code) {
        res.reservedTicket.ticketToken = "qrcode:" + code;
        break; // found the entrance QR code, stop searching
    }
}

The full extractor script can be found here.
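The two-ticket generation itself can also be sketched in plain JavaScript. The helper name and ticket fields below are illustrative assumptions rather than the actual KItinerary API, but they capture the idea of cloning one reservation into two tickets that differ only in name and token:

```javascript
// Sketch of the multi-ticket idea: clone one extracted reservation into
// two event tickets with separate ticket names and tokens, so both
// barcodes (entrance QR code and UIC 918.3 Aztec code) are preserved.
function makeMultiTicket(base, tickets) {
    return tickets.map(function (t) {
        // deep-copy so the two results do not share the same ticket object
        var res = JSON.parse(JSON.stringify(base));
        res.reservedTicket.name = t.name;
        res.reservedTicket.ticketToken = t.token;
        return res;
    });
}

var base = {
    "@type": "EventReservation",
    reservationFor: { "@type": "Event", name: "36C3" },
    reservedTicket: { "@type": "Ticket" }
};

// One ticket for venue entrance, one for local public transport.
var results = makeMultiTicket(base, [
    { name: "Entrance", token: "qrcode:1234567890" },
    { name: "Public Transport", token: "aztecbin:..." }
]);
```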

KDE Itinerary showing both the 36C3 entrance and public transport tickets in its multi-ticket selector.

At 36C3

If you are attending 36C3 and are interested in Free Software/Open Data travel/mobility/digital assistant stuff or related/adjacent topics, let's meet! I'm particularly interested in widening the set of people we collaborate with on this; bringing KDE, Wikidata, and Navitia together has already gotten us quite far, but I think there's more we can achieve :)

It looks like I'll also be presenting the work we did around KDE Itinerary on the WikipakaWG stage, but the exact date and time have yet to be determined.


Special thanks to the nice people who helped me to finally get my hands on a ticket for this event, really looking forward to this!

14 Dec 2019 10:30am GMT

12 Dec 2019


Meetings and Conferences

This week has seen some interesting (to me at least) Calamares development, because I was at a Blue Systems meeting in Germany and had the opportunity to sit with people from KDE neon and Nitrux. Meetings like this devolve into "14-hour working days interspersed with big meals", which is great for getting things done. Dinner is also about half shop talk.

What got done:

From there Aleix Pol and I sped off to Belgium for GNU Health CON. Aleix gave a talk on Kirigami and UIs for mobile devices. I might give a talk about Pine64 hardware (which I'm hardly qualified to do, but willing). Until then I'm running the KDE booth at the event, with a Plasma Mobile phone (not a PinePhone).

One of the things we did for this booth is slap together a presentation to run on the monitor at the booth. This is a miniature QML application that runs a slideshow; the slideshow is easy to extend by adding more Component blocks to the file. It's not quite the equivalent of reading in a Markdown file, but pretty close. At 8k of QML source (probably 80% of that is spaces for tidy indentation, too) it's handy to have around. We'll add it to the promo wiki once we're done.

Writing the slides was fun, I got to use the new emoji picker, and Aleix did all the heavy QML lifting. I really need to learn more of that.

The conference lasts through Sunday, and then it's back home to prepare one last Calamares release for 2019 (with the translation improvements listed above).

12 Dec 2019 11:00pm GMT

Latte bug fix release v0.9.5

Latte Dock v0.9.5 has been released containing important fixes and improvements!

Go get v0.9.5 from download.kde.org* or store.kde.org*

* archive has been signed with gpg key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E


How to Make Latte Panels and Desktop Icons Not Overlap

Since Latte started as a project three years ago, there has been a big annoyance for Plasma users: desktop icons and Plasma dialogs such as the Widgets Explorer can overlap in many cases. The reason this is not easily fixable is that there is currently no way to inform Plasma about the geometries of external docks and panels. An initial discussion about this can be found at https://phabricator.kde.org/T10172 . As I don't have the time to play with this, a workaround now exists that can make your life a bit easier if you find this annoying. You will need Latte >= 0.9.5 and a new plasma widget I created called Panel Transparency Button. A small demonstration follows:

- youtube presentation -


You can find Latte at Liberapay if you want to support it (Donate using Liberapay), or you can split your donation between my active projects in the KDE store.

12 Dec 2019 3:34pm GMT

KDE’s releases debranding

A new step in KDE's branding happened today, or rather debranding. The old dump of everything we made used to be called just "KDE". Then some projects wanted to release on their own timetable, so calling it "KDE" became less accurate. After a while our flagship Plasma project wanted to release on its own, and lots of other projects did their own releases too, but many still wanted that faff taken care of for them, so those projects got called "KDE Applications". That didn't quite fit either, because there were many plugins and libraries among them, and many applications from KDE which were not among them. So today we removed that brand too and just make releases from a release service. These are source tars that are not very interesting to end users, so they get a boring, factual release page.

And to keep our users informed, the Monthly Apps Update is now published directly on kde.org and covers both self-released and release-service releases.

And as our website enters the 21st century, we have updated the way the stories are published, so now anyone can edit or propose patches to them in Git, writing Markdown. So if you know of any new features or developments in our apps that will be released by this time in January, send us a patch.

12 Dec 2019 2:18pm GMT

Qt Creator 4.11 is released


We are happy to announce the release of Qt Creator 4.11.0! From the beta blog post:

12 Dec 2019 11:21am GMT

10 Dec 2019


Interview candidates with an Open Source background

I often say that there are two actions that define the line management role: one-on-ones and hiring people. This is especially true in growing organizations. If you nail these actions, you have a great chance to influence your colleagues and organization, which is, in my view, the ultimate goal for a line manager.

One of the common strategies to speed up the journey from being an Open Source contributor to becoming a Good Open Source citizen is to hire talent with a solid Open Source background. The process of hiring such talent is different from what most organizations and recruiters are used to, so much so that there are now companies specialized in recruiting these profiles.

One key part of the hiring process is the interview.

This article is another of those I have been writing over the last couple of years about management topics, based on my experience working in Open Source as a manager and consultant. More specifically, it is an attempt to describe some of the key points that hiring managers with little or no experience in hiring Open Source talent need to consider to increase their hit rate.

As usual, I would appreciate it if you shared your experience, criticisms, or missing points in the comments section or with me directly. I will add them to this article as updates.

Evaluate the candidate's work in advance

If you only have room in your hard disk for one takeaway, it should be this point.

Most of your candidate's work is public. Take the time to evaluate it, or have somebody else do it for you, before the interview. If the candidate has public talks, watch the videos or check the slides. If she has written articles, read them. If she has code available, study it, test it, install it.

Having the candidate's work in the open is a double-edged sword. It is an advantage for the hiring organization because you can evaluate many of the candidate's skills in detail in advance, which can significantly shorten the hiring process; the interview stage can be reduced to a single interview, for instance (I strongly recommend this, by the way).

At the same time, if the hiring manager does not take the time to evaluate the candidate's work, or even worse, if she acts during the interview as if that work were not public, trying to technically evaluate the candidate before or during the interview, failure is almost guaranteed.

I always mention how wrong it is to ask a music group with published records or videos of concerts on their website whether they know how to sing, or even worse, to ask them for an audition while hiring them for your birthday party. Watch their performances, skip the audition, and go directly to other relevant aspects. Their work is public; did you check it?

By the way, the world is significantly larger than GitHub, thank God. The candidate might have a strong Open Source background without having a great GitHub profile… or any profile at all.

Interview goal

The interview is not just an opportunity to evaluate whether the candidate fits in your organization; it is a chance to attract the professional and her colleagues. The candidate's work, processes, tooling… are public. Your organization's most likely are not. Who has more to tell?

As a result of this reality, the interview should look more like hiring a senior manager or expert than hiring an engineer.

In summary, the goal of the interview shifts: it should be balanced between what the organization and the candidate are each looking for. This shift represents a challenge for hiring managers with limited experience in hiring senior talent.

Candidate's future upstream participation

During the interview, be clear about the amount of time the candidate will have available to invest in upstream work, which is her current work.

Remember that there are plenty of developers who prefer to keep their work in the open as a personal activity. On the contrary, some others want to make a living out of their participation upstream. The best way to ensure a sustainable relationship over time is to be transparent about the expectations around the split between internal work and work done in the open, and in which projects.

If, by working at your organization, the developer will have to reduce her current involvement in the open, especially during specific periods of time, say so clearly. If you cannot guarantee that the technologies the candidate works with in the open will be part of the company's tool set, state it clearly. If your organization has plans to move towards Open Source and you are bringing in the candidate to speed up that journey, make sure the candidate understands the timelines and expectations.

No matter how experienced you are as a hiring manager, there is always room for a candidate to surprise you with her motivations for participating in the interview and potentially joining your organization. This is most likely true with professionals coming from an environment you might not be very familiar with.

Tooling, languages, technologies…

One of the great advantages of being an Open Source professional is that, because you use Open Source tooling, you can take it with you throughout your career, which makes you very efficient. You do not depend on any corporation paying for it, and you do not have to change it regularly due to commercial criteria. In other words, Open Source professionals have built their careers on specific tooling and associated practices that they have mastered and very often helped to polish over time. Some of them love to try new ones, and some love to focus their energy on other aspects, not on the tooling or associated processes themselves.

In some cases, this specific point is so important for the candidate that using those tools and associated practices represents a requirement to join your organization.

You will need to consider and potentially embrace this reality. If you cannot or do not want to use those technologies or tools, make the point during the interview, but…

Are you open to reconsidering such decisions if somebody proves there is something better? Is it something that can be reconsidered next year, when the current support contract for tool X expires? Do you have any specific department or project where you are willing to try new tools, processes, languages…? Are you open to admitting that you might have made a mistake in such a decision, or that those decisions will not stand the test of time?

Many Open Source professionals, for instance, consider that doing Open Source with Open Source tools is significantly better than using proprietary ones. It makes sense. At scale, adapting the tooling to the needs of the environment and integrating it with other tooling to improve the development experience is essential. Having full control of that evolution is another interesting point… and there are many other strategic and practical reasons.

Learn from the candidate about those arguments during the interview.

Very often, hiring managers see the candidate's position on this front as religious or philosophical. Instead, I would take it as an opportunity to improve and innovate within the organization. I suggest embracing the experience the candidate can bring as a starting point during the interview and taking the conversation from there, instead of questioning her criteria and defining this point as a no-go.

Recommendations and referrals

Open Source is about trust and networking based on the work the potential candidate does within one or several communities. In many cases, the candidate's recommendations and referrals come from peers in those communities.

If you are a hiring manager, take those recommendations as seriously as any traditional referral from a senior manager or a relevant figure in any big corporation. They are frequently equally valuable.

In Open Source, you are judged by your work and your influence, not by your position in any hierarchy of any organization.

Get help for the interview

If you have little or no experience with Open Source, bring somebody who does have it to the interview, in addition to the technical expert, especially if you are interviewing engineers. It does not matter if that person is not an expert in the candidate's area of expertise.

I am not a technical person, and when I join such interviews, the hiring manager often learns new things about the candidate because the candidate and I share a similar work culture. This is especially true when the candidate asks questions to evaluate whether the company is a good fit for her. In addition, it often plays in the company's favor to show the candidate that there is Open Source knowledge internally, and professionals who feel comfortable working there.

Assume the candidate knows about your organization already

Do not underestimate how much candidates with solid Open Source background know about your company before the interview, even if your organization is not an Open Source contributor.

These kinds of professionals might learn about your organization not just by interacting directly with your engineers or evaluating your products, but also by interacting with your suppliers and partners in the open, in some cases on a regular basis. They might know about the hardware you use, the versions of specific software you ship, the tools you use, your release strategies, your update policies, etc. They might also point at clear deficiencies and strengths of your products and services.

I suggest asking candidates during the interview what they know about your organization and why they want to join. It is also interesting to ask about their evaluation and perception of your organization as a whole, and of your products and services compared to others.

It is not only about getting information from the candidate but also about informed perceptions. Such a conversation might help hiring managers tune their description of what the organization does and avoid providing superfluous information during the interview. It also helps to understand whether the candidate is one of those who want to work for industry leaders, or one who will help you become one, which is something I am always interested to learn about.

Why do you need a CV?

Even today, I still come across former colleagues and great Open Source professionals who do not have a CV. I have helped several of them throughout the years to create one or update it.

For an Open Source professional, their work speaks for itself. It is a better presentation than any CV will ever be. It is the outdated culture, your organization's tradition, that should change and adapt to this "not so new" way of filtering candidates. It is simply better for all parties involved.

Many great Open Source professionals have a mediocre CV or no CV at all because they never needed one. For some, it is even a way to filter out organizations they do not want to work with: if you need my CV to filter me as a candidate or evaluate my work, I am not interested in working with you.

Instead of questioning the quality of their CV, take the opportunity to ask yourself why you or your organization needs a CV when the candidate's work is publicly available. It might be a great introspective exercise. Maybe you will end up needing only a bunch of links and her contact information instead of a formal CV.

Myth: Open Source developers are not good communicators

Open Source professionals, especially developers, are used to great exposure to other professionals worldwide. In general, they need to become good communicators to succeed within their communities. It is common sense.

The idea of Open Source developers as people incapable of communicating is a myth. Such a perception often depends on what you understand by communicating, and with whom, but especially on myopia from people who do not come from a technical background.

Evaluate the candidate's written communication skills. Check the emails they send to dissent with others or describe their position, read the blog posts they write summarizing their latest work, check their presentations and their slides… Check how they interact with their users and code maintainers… You will soon realize they regularly interact with profiles or groups of people your engineers do not, while overcoming greater limitations.

It is true in my experience, though, that in general many Open Source developers do not do great interviews. But that is mostly because they do very few of them. They do not go to 10 interviews with four different companies to get a job; they do not need to. And what is more important, throughout their careers, interviews are not used to evaluate them as engineers. Again, their work speaks for itself.

When talking to managers unfamiliar with Open Source about this point, I always ask them how many professionals they know who have done good interviews but are incapable of standing behind their opinions in a meeting with senior colleagues or customers. Or how many people in their teams excel at describing complex ideas in simple words, in a 10-minute presentation for instance. How many of them did a great interview?

Public exposure to peers and users in Open Source projects is an outstanding learning environment for developing certain communication skills. Be careful with this myth.

English level

In Open Source, most of the communication among developers is in written form (let's include code here). Most developers have a better level of written English than spoken English.

I always recommend being careful when evaluating the candidate's English skills only through the interview. To get a full perspective of the candidate's skills, check her written communications and articles. If videos of her talks are available, review them. You might perceive noticeable differences between her written and spoken English levels.

When doing an interview, it is very hard to be quick and provide sharp answers without thinking when you do not use English in regular conversations on a daily basis. I have experienced this myself.

My point here is that you might evaluate her English skills as poor based on the interview when, in many cases, she has solid written skills and so needs little time to develop the minimum skills required to speak English fluently.

Agile fans tend to underestimate how important it is to have a good written level of English compared to the spoken, conversational level. In Open Source (developing software at scale), the trend is exactly the opposite, for good reasons.

Remote Work

Many professionals prefer to live by the beach in a cheaper place than to commute for an hour to an office in a big city. It should not come as a surprise that the market is moving towards developing software in distributed environments.

The number of "remote first" and "remote friendly" companies is growing in number and size at a fast pace. Open Source is about highly distributed, asynchronous, high-latency environments. Some of the largest software engineering projects have grown and matured in such environments.

The candidate has probably made a career out of mastering work in such environments.

Instead of questioning the efficiency of such environments during the interview (a common practice among remote work detractors, like radical agilists), take the opportunity to learn about them. If you think that co-located, synchronous, low-latency environments are more efficient, ask how current tasks could be performed remotely, how ceremonies can be adapted, how testing can be done remotely. Learn from the candidate how to develop software at scale, and how she has acquired such experience and levels of efficiency in that environment.

I have worked in organizations where even testing was done remotely, in a heavily distributed environment, for instance. Technology and creativity have made Open Source the best solution for developing software at scale. Even if your organization is small, the candidate's skills might become an asset.

The network effect

Working in the open gives any professional the possibility of building a large network. The interview process might well be the first and only direct interaction such professionals have with your company. If the experience is not satisfactory, her network will know.

Would you recommend that a person you trust or respect apply to a company that did not provide you a good experience during the hiring process? And I am not talking about being hired. I have had good experiences with companies that chose a different candidate during the process. We are grownups; we can take rejection.

By ignoring the above points (or others I skipped), you are not just making it harder for your organization to hire Open Source talent. You are broadcasting your deficiencies through a loudspeaker.

It is not a coincidence that some corporations have such a hard time hiring good Open Source professionals while others are so successful. It is partly because of the network effect.

Do it right a couple of times, and more talent will come. On the contrary, fail badly, and you will pay the price for some time.

How to move the candidate out of her comfort zone

Part of any interview's goal is to get out of the candidate things she did not plan to show or surface.

People used to working in the open, with high levels of exposure, develop a great sense of correctness and politeness when expressing opinions. That does not mean they are not passionate about what they do or that they do not have "radical opinions". They are, and you want to know it.

Ask the candidate for her opinions about tools, technologies, and projects from your competitors, or about technologies that the project she participates in is struggling with. I believe these kinds of questions are a fair way to move the candidate out of her comfort zone and get at her "essence".

Just be aware of the trends and debates the candidate and/or the community she participates in have had about such topics. Do not make the mistake of making statements that are a no-go for the candidate simply because you did not prepare the interview well enough.

Avoid the typical myths around Open Source. Instead, ask, for instance, how they see their community project in five years, and which challenges and deficiencies they need to overcome.


Radical exposure to peers, upstream, and users; communicating with them in the open; working with public code; sharing knowledge with professionals from different industries and global organizations: these are candidate characteristics that cannot and should not be ignored during the hiring process, especially during the interview.

Instead of applying the same template that has worked fine for your organization in the past, hiring managers will need to evolve it, or even change it, in order to attract people with a solid Open Source profile. This is yet another of those transformations and culture changes that any organization will need to go through to fully embrace Open Source.

To me it is very simple: if you want to become a Good Open Source Citizen, you will need to hire people with a strong Open Source background to speed up the journey. Either you adapt your hiring processes, like the interviews, or you will most likely fail in that goal.

To adapt, hiring managers and experts need to develop new habits. That can only happen if they question their current ones. This evolution will take place faster, and with a smaller impact on their organizations, if they get help.

Sadly, I think in Open Source we have paid little attention to this challenge.

10 Dec 2019 12:08pm GMT

Qt for MCUs 1.0 is now available


We are out of Alpha and Beta! The first Qt for MCUs release is now available.

10 Dec 2019 11:09am GMT

Legislating is patch review

Patch review is a process by which newcomers and experts debate proposed changes to a codebase: a textual description of how a particular human-created system is to function. In KDE, we use Phabricator for this, but we're switching to GitLab soon. Both serve the same purpose: to provide a forum where proposed changes can be discussed, revised, and decided upon.

It occurred to me recently that this sounds a bit like the process of lawmaking. Politicians propose bills (patches) that amend their government's code of laws (codebase), which are passed through committees and hearings (the review process) and eventually get voted on (reviewed), and either pass (get merged), require revision (go around for updates), or fail (get abandoned).

Pictured: typical 18th-century patch review process

I'm reasonably confident that there's little overlap between politicians and software enthusiasts. In my home country of the USA for example, most of our federal government politicians are former lawyers, businesspeople, or educators. "Software engineer" is listed as a "more unusual" former profession.

This strikes me as a shame, since the process of transforming a proposal for improvement on a systemic scale into a permanent alteration of the rules that affects everyone is quite unfamiliar to lawyers, businesspeople, and educators, but quite natural to software people. We do it every day. Likewise, software people like us tend to have little experience in the lawmaking process. We act like we invented patch review, but our governments have been doing it for hundreds of years! The overlap got me thinking: perhaps there is something that each group can learn from the other.

Have a constitution

Governments write constitutions to make their foundational principles clear and obvious. That way, everybody knows which ideas are central to the society's identity, and which ones are off-limits.

Lesson for software engineers: Make your software's guiding principles explicit, not implicit. People often figure this out organically, but it's much easier if your software has a constitution-like document that clearly indicates which features are non-negotiable when it comes to proposing their implementation or removal.

Don't neglect trust

If you have a bad relationship with the people reviewing your patch, they will suspect your motives, nitpick your changes, and generally react with low enthusiasm. Even if your patch is a good one, the reviewers' opinion of it will be clouded by their judgment of you. Therefore, don't neglect your social relationships or act like a jerk, or else the whole process basically doesn't work.

Lesson for politicians: don't ignore or damage your social relationships with your colleagues or else your entire job is a big waste of time. Adopt a mindset that legislation is a collaboration rather than a majority-rules deathmatch or an opportunity to make speeches on a stage. Also, arrive with pure motives. If you're there to try to tilt the playing field towards your favored groups, the people who represent the opposite side will notice and oppose you at every turn, and you're likely to have a frustrating and unproductive career full of outrage-filled press conferences but not much real accomplishment.

Review in stages

In governments, bills often undergo review by multiple committees before they're presented to the full body for debate and voting. This is good, because it gives a chance for obvious mistakes to be corrected in advance of the final vote.

Lesson for software engineers: use a multi-step patch review process, with relevant experts in control at each step of the way. For example, the big-picture software architects should review a patch to make sure it conceptually makes sense in the first place; then backend programmers should dive into its technical implementation; the UI designers should go over its user interface, and so on.

Keep patches small

Large patches are hard to review and fill the reviewers with a sense of dread. They touch many things and therefore have more opportunities to change something in a way that a stakeholder will object to. They often get bogged down in process and conceptual arguments. For these reasons, it's best to keep patches small and focused, and split a large change into a series of individually-manageable patches that each depend on one another, known as a dependency chain.

Lesson for politicians: avoid 1,000 page mega-bills. If a bill needs to be enormous in order to work, there's probably a deeper conceptual issue with it that everyone senses.

Have an institutional memory

Records of how bills are moved along in the lawmaking process are kept meticulously. This preserves institutional memory, so that newcomers don't make the same mistakes that their older colleagues and forefathers already learned from.

Lesson for software engineers: Keep records of why decisions were made-and even more importantly, why they were reverted. This prevents the phenomenon of newcomers who propose the same changes and repeat the commit/regress/revert cycle.

Make reversion easy

When a patch causes regressions, it can be reverted. Oftentimes it's better to fix it, but if the fixes are too invasive or the regressions outnumber the benefits, it may be a better idea to revert the change and try again. Making reversion easy promotes a culture of innovation and experimentation. People won't be as worried about merging things, because if they cause problems, it's easy to undo the changes. Change becomes playful and fun, rather than consequential and scary.

Lesson for politicians: Don't make it too hard to repeal bad laws. When a newly-passed law causes problems in a society, it's tempting to try to amend it to fix the problems, and sometimes this works. But sometimes it just needs to be re-done from scratch, like a bad patch in software. Being willing to repeal laws that aren't working defuses tension. That said…

Don't rush

Bills and patches that get through their processes quickly are often problematic, riddled with unseen regressions and unanticipated consequences. This is much less common in governments, because the lawmaking process usually has deliberate safeguards put in place to ensure that a bill is not transformed into a law too quickly before there's been adequate time for debate.

Lesson for software engineers: Take your time so you don't push out buggy, regression-filled software. However…

Don't make your users live on the master branch

Rushing isn't such a huge deal as long as you have a QA process and discrete releases. These tools provide time for regressions to be fixed and rough edges to be smoothed out. When patches can be evaluated in a safe sandbox of sorts and subsequently tweaked before their effects are released to users, it's not so bad to move quickly. But you can't expose your users to the churn stirred up by a fast process; it needs to be contained internally.

Lesson for politicians: You don't need so much process surrounding lawmaking if you don't roll out all approved changes immediately. Before new bills take effect, let them simmer for a while in a "release branch" where they can undergo QA so that regressions can be found before they're inflicted on unsuspecting citizens (users)!

As software people, there are lessons we can take from our governments' successes (and more often these days it seems, their failures), because this aspect of our professions overlaps quite a bit. It also exposes an uncomfortable truth: changing the rules and behaviors of a system that affects everyone is inherently political. That's why we invented patch review processes: to make sure that important voices are heard, that the system doesn't become inhumane for people who depend on it, and that its overall trajectory is positive.

Personally I'm a lot more sanguine about the prospect of this in software than in government right now, and I think that's something that needs to change. The efficacy and positive societal impacts of our governments' lawmaking seems to be at a bit of an ebb at this moment in time. But there may come a point in time when our experience in patch review becomes useful on a larger stage, and benefits not only users of KDE software, but also the people of the world. We shouldn't shy away from politics. Our everyday experiences in KDE are in fact the perfect preparation! Far from being distant and scary, it's something we're engaging in-and succeeding at-every time we contribute to KDE.

10 Dec 2019 3:57am GMT

09 Dec 2019

feedPlanet KDE

A better Qt because of Open Source and KDE

KDE and Qt have a legal framework for protecting Open Source Qt, named KDE Free Qt Foundation. I am one of two KDE people on its board. To explain a bit what we are doing, I have written a document "How the KDE Free Qt Foundation strengthens Qt".

The background is the wish of The Qt Company to change some of the contract provisions. It is still a bit unclear which ideas exactly they are pursuing, but we will need to have a public consultation on them in any case.

My document is intended as a reference for the current status quo and the history of the foundation. It is quite lengthy, but I hope that some of you find the content interesting enough to read it nevertheless.


The development framework Qt is available both as Open Source and under paid license terms. Two decades ago, when Qt 2.0 was first released as Open Source, this was exceptional. Today, most popular development frameworks are Free/Open Source Software[1]. Without the dual licensing approach, Qt would not exist today as a popular high-quality framework.

There is another aspect of Qt licensing which is still very exceptional today, and which is not as well-known as it ought to be. The Open Source availability of Qt is legally protected through the by-laws and contracts of a foundation.

The KDE Free Qt Foundation was created in 1998 and guarantees the continued availability of Qt as Free/Open Source Software[2]. When it was set up, Qt was developed by Trolltech, its original company. The foundation supported Qt through the transitions first to Nokia and then to Digia and to The Qt Company.

Should The Qt Company ever attempt to close down Open Source Qt, the foundation is entitled to publish Qt under the BSD license. This notable legal guarantee strengthens Qt. It creates trust among developers, contributors and customers.

The KDE Free Qt Foundation is a cooperation between The Qt Company on the one hand and KDE on the other hand. KDE is one of the largest Free Software communities for general purpose end-user software, founded in 1996. In case of ties, KDE has an extra vote, ensuring that The Qt Company does not have a veto on decisions.

My in-depth presentation below provides an overview of the history of the Foundation and describes its importance for Qt today. It explains in detail why the existence of the Foundation has a positive influence on the long-term market success of Qt.


[1] I use the terms "Open Source" and "Free Software" interchangeably here. Both have a long history, and the exact differences between them do not matter for the purposes of this text.

[2] From the statutes of the KDE Free Qt Foundation:
"The purpose of the Foundation is to secure the availability and practicability of the Qt toolkit for developing free software."

09 Dec 2019 8:27pm GMT

Interview with teteotolis

Could you tell us something about yourself?

Hello! My name is teteotolis. I am a graduate of Athens School of Fine Arts (Greece) and I am a professional illustrator/comic artist/visual artist.

I live in Greece, I eat gyro with souvlaki and I say Opa! sometimes.

Do you paint professionally, as a hobby artist, or both?

I am a professional, but I really enjoy drawing and painting for myself too. So both?

What genre(s) do you work in?

Mainly illustrations and comics with fantasy and folklore themes.

Whose work inspires you most - who are your role models as an artist?

First some of my favorite old master painters: Caravaggio, Michelangelo, Leonardo da Vinci, Rembrandt, John William Waterhouse, Alphonse Mucha.

Some of my favorite modern day artists: Xavier Houssin, Kevinhogart, Fatemeh Haghnejad, Anninosart, Domnamanolarou, Nefeli_ekati, Kienan Lafferty, Samuelyouart, Ahmed Aldoori, Wlop and David Revoy.

My role models huh? I never thought about it… I admire a lot of people and their ideals, as well as philosophers, but I don't think that I have a role model…

How and when did you get to try digital painting for the first time?

In my second year of art school, by myself. I had a potato drawing tablet back then, with no pressure sensitivity!

I did some research, got my first actual tablet and downloaded an actual "painting program"!

What makes you choose digital over traditional painting?

Reason 1: So I worked with traditional media, oils, acrylics, charcoal, pigments (powder), for all the years (5+2) at my art school. Did you know that most of these things are highly toxic and poisonous? Yeah, it turns out that a lot of colors are made by processing heavy metals like cadmium or titanium, and others are made from actual poisons like hydrocyanide… Fun huh?
I don't need this on my skin or lungs.

Reason 2: Digital art is instant, convenient, productive and shareable.

Reason 3: It's fun.

On the other hand, though, I still use traditional media from time to time. Traditional art is more "somatic" than digital art (which is more technical): the experience is different. The textures, the feel of touch, the smell of paper, the VIBRANCY of the colors are on a different level.

The accessible technology we have today in monitors, tablets and digital printing still has a lot of years ahead of it before it can reproduce on a monitor or on paper what traditional media does with a simple brushstroke.

Here's an example: Have you ever seen true yellow on a monitor? Or gold? Or true turquoise?

How did you find out about Krita?

Back in April of 2016 I watched a video on youtube, David Revoy painting a little witch "cooking" something (I love that guy).

What was your first impression?

"Woah! So this French guy uses this program to paint! It looks so natural! How!? I gotta try this!… OK, I'm here to stay." After that I made my first work on Krita and most of my webcomic "emery".

What do you love about Krita?

Krita is the best way to transition from traditional art to digital. It's simple, clean, powerful and it has EVERYTHING you'll need for digital illustration. It's the number 1 program that I recommend to my students and my go-to program to use for painterly digital artworks.

What do you think needs improvement in Krita? Is there anything that really annoys you?

There were some issues in the past, Krita 3 was a nightmare for me and for Wacom Cintiq Companion 2 users in general. (Small layer thumbnails, slow brush rendering, the palette didn't respond at all, and the transition from pen to eraser was taking 5 seconds to register.) But you swiftly managed to resolve all the issues with the next updates!

What sets Krita apart from the other tools that you use?

It's powerful, innovative, it's made for digital art (not photo editing), it has one of the best brush engines and… it's free!

If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

Circe the enchantress.

I liked the final result a lot. It was inspired by Mucha and I've won a big art contest thanks to her!

What techniques and brushes did you use in it?

Chiaroscuro, a lot of textures and natural looking brushes…some made by me.

Where can people see more of your work?

You can find my work here, here aaand here! Come say hi!

Anything else you'd like to share?

I have a webcomic (95% worked in Krita) called "emery", take a look! (https://tapas.io/series/emery)

09 Dec 2019 1:33pm GMT

Linux App Summit 2019!


I attended the first Linux App Summit cohosted by KDE and GNOME! The conference was held in Barcelona, Spain. The conference was organised by the local team, Barcelona Free Software.

The conference hosted quality talks, all relevant to the subject of the Linux App Ecosystem. People from the entire spectrum of the Linux Desktop came to talk at the conference, right from the devs of packaging software like Snap and Flatpak to people involved in core Plasma and GTK. LAS is THE place to be if you want to see where the Linux App Ecosystem is heading.

One of my favourite talks was by Tobias Bernard and Jordan Petridis on "There is no 'Linux' platform", describing the differences within the "Linux" platform, which are so immense that we effectively have to call each variant a separate 'platform' in itself.

Another talk that stood out was the Day Two keynote by Frank Karlitschek: "We all suck" ;) The talk didn't mince words in saying that the Linux App Ecosystem has a lot to catch up with in terms of organising technology, people, and packaging for us to really compete with the proprietary systems, and laid out a few, I wouldn't say far-fetched but uneasy, solutions (one of them being to merge both KDE and GNOME to form a 'super' organisation for all desktop needs (?) )

And the panel was a-w-e-s-o-m-e! Great insights by great folks on important topics like what the future holds for us and more diversity in communities.



There was a walking tour on Day 2 and Barcelona is, truly, a historic and a beautiful city. I had Paul with me along the tour and we talked a lot about Spain, Free Software and food!


I thank the KDE e.V. for sponsoring my visit to LAS. I'm looking forward to seeing more people come to attend this event and make the Linux App Ecosystem a success! Also, kudos to the organising team and all the volunteers.

09 Dec 2019 12:00am GMT

Akademy 2019

At this year's Akademy I had great moments with new and already known people. Akademy gives me a lot of energy for hopefully the rest of the year. I really enjoyed the day trip to the lake. It was a calm and beautiful environment, and it helped me to calm down again. Together with Leiner, Florian and Valorie we sat down to discuss issues for newcomers attending Akademy for the first time while having an amazing lunch. It is often hard to remember how difficult it can be to attend Akademy for the first time without knowing lots of people. The outcome of this discussion will feed back to the community after some more cleanup of our notes. Hopefully we can make the next Akademy even better for newcomers!

My highlights from the first two days of great talks are Kirogi and "Developers Italia". I really enjoyed seeing that Open Source reaches more and more domains; now you can even control your drone with Open Source software, named Kirogi. The software itself already looks quite usable and I'm looking forward to the features we will see there in the future...

"Developers Italia" was an eye opener, in how governments can change the laws so administrations must invest in Open Source. In Italy, administrations are forced to search for an existing solution in Open Source and then use this solution. If the software does not work for them they can pay developers to implement their needed features, but still the code will be owned by the administration and they need to publish the code afterwards under an Open Source license. I'm very interested to see how this will develop in future, because at the moment I still have the bad feeling that some big companies may have the ability and also the desire to destroy this revolutionary idea, with the result that only some big companies will get all the big grants, and the result will be bloated unusable Open Source software. But none the less, let's give the Italy administrations a warm welcome and give them a hand to become good Open Source citizens.

I also enjoyed the talk by Albert about the status of fuzzing KDE software. Albert explained that the first Frameworks are covered by fuzzing and presented the issues the fuzzer found. The first days and weeks spat out a lot of interesting issues, but nowadays the fuzzer takes a long time to find new ones. So it is time to prepare the next set of code to be fuzzed. I talked with Albert about what would be the most valuable parts of KDEPIM to cover with fuzzing. The first set is KMime, KContacts and KCalendarCore, as they handle input without any user interaction.

As Qt6 is planned for next year, KDE needs to plan KDE Frameworks 6. In a BoF we discussed the timetable and also some things to do to prepare. Within the same day a Phabricator board was set up and the first review request was accepted. The first big task is now to get rid of KDE4Support and, where possible, remove deprecated Qt5 API.

Within the KDEPIM BoF we discussed what parts of KDEPIM are ready to become a Framework. Volker has done a really good job in getting KContacts and KCalendarCore into Frameworks. Then we looked at what external applications depend on KDEPIM and why they depend on our stuff. We can see clearly that most external applications want to use an address book. The solution we favor is that external applications use kpeople, and kpeople has a plugin system where Akonadi can provide a plugin. Bhushan and Nico are willing to implement this. It is great that we now also get a stronger connection to Plasma Mobile, as they are interested in a PIM solution for the phone too. We all know that the current stack with Akonadi is not ready to be pushed to the phone, but we can make sure that as much code as possible is reused on the phone. We named the two parts PIM dinosaurs (the desktop apps) and PIM mobile. The most important part is that we are one team.

As there was also some time in between the BoFs, I spent it making MemoryHole (encrypted mail headers) support ready for KMail. Together with some time after Akademy, we now have MemoryHole support implemented in the MailViewer. The next step, implementing the sending part, is still in development. I also took the opportunity to talk with Jonathan and Daniel about getting AppArmor support for the Akonadi server. The state before Akademy was that Neon and Debian had each implemented an AppArmor profile, but they were not the same, so we sat down together and started to look into it. This unified profile is now ready and will be shipped with the next release, 19.12.0.

The last topic I want to talk about is the privacy goal. This goal is surely not finished and we want to continue. At this point we are slowly starting to really work as a strong team of people. And we have a hard task, because privacy is difficult, which makes it hard to pinpoint big issues we can improve. Privacy is more about the small details, which makes it hard to write a guide for developers. To make it visible that we are not dead, we plan a bimonthly blog post series showing the world what we accomplished within those two months. We also tried to identify the KDE applications that are in the scope of the privacy goal.

I'm not able to write about everything that happened during Akademy, as it is simply too much. Talking and discussing things, or just getting status updates on people's lives, is so rich and gives me a lot of energy for the year ahead until the next Akademy!

09 Dec 2019 12:00am GMT

08 Dec 2019

feedPlanet KDE

Contributing to KDE is easier than you think – Porting websites to Markdown

Hello all!

This will be a new series of blog posts explaining different ways to contribute to KDE in an easy-to-digest manner. I plan for this series to run in parallel with my keyboard shortcuts analysis so that content can (hopefully) be published every week. I was also feeling a bit bad about the fact that this blog is available over planet.kde.org (a feed for blog posts made by KDE contributors that also shows a bit of their personal lives and projects), but my other series was focusing more on other DEs, despite also being a project to improve KDE.

The purpose of this series originated from how I feel about asking users to contribute back to KDE. I firmly believe that showing users how contributing is easier than they think is more effective than simply calling them out and directing them to the correct resources; especially if, like me, said user suffers from anxiety or does not believe they are up to the task, in spite of their desire to help back.

It is true that I had the initiative to contact Nate Graham and Carl Schwan through Reddit, but it is also true that, had they not shown me how contributing back can be done in several small, feasible ways too, I would likely not have started contributing back.

Out of respect, and due to the current need for help with updating the KDE websites, my first post on this subject will document how to help Carl Schwan port older websites to Markdown, despite there being easier tasks than that. Currently, and to my knowledge, Carl Schwan and Adrián Chaves Fernandez are the only two main KDE websites contributors, with help and mentorship from other KDE contributors such as Jonathan Riddell and, of course, the whole Promo team, which handles websites as well. This is quite a low number of contributors for such a huge amount of websites to be updated, you see; that's why your help would be much appreciated!

If you'd like to skip the introduction and go straight to the documentation, click here. This task is not difficult, but you should know beforehand what HTML/PHP tags are, as well as basic Markdown, although this documentation is comprehensive enough that you should be able to learn on the go.

The new website structure

Observe the difference between the older and newer layouts for two KDE Frameworks announcements, for instance:

Figure 1 - Old website layout, KDE Frameworks 5.52. Figure 2 - New website layout, KDE Frameworks 5.53.

It is easy to see the reason for KDE to port their websites: most older websites date from the KDE3 or KDE4 age. Nostalgia aside, anyone who has been browsing the web lately can see that Figure 1 shows an old-fashioned design.

In addition, several other pages have outdated information that could/should be updated.

The new website layout is based on Ken Vermette's Jekyll theme, Aether. You can see more information about this on Carl's blog. In addition, since I use a global dark theme for my system, namely Breeze Dark, the prefers-color-scheme support that Carl implemented recently was automatically set to dark mode, which is really neat. If for any reason you want to see how it looks with the opposite theme to the one you're currently using, this can be overridden: on Firefox you can go to about:config and create a new integer preference named ui.systemUsesDarkTheme set to 1 or 0, and on Chrome you can press F12 to open the developer console, click the hamburger menu in the lower section, select the Rendering tab and change "Emulate CSS media feature prefers-color-scheme" to what you wish.

The current procedure for porting websites involves ruby, Jekyll, HTML, PHP and Markdown, as detailed below.

The procedure - Preparing your environment

The project I recently helped port to Markdown was KMyMoney, whose code and commits can be seen here. It hasn't been fully ported yet because of complicated code that I cannot even grasp properly, which Carl will change himself. I am not that knowledgeable about HTML, PHP and whatnot, you see; I learned the bare minimum years ago. This does not hinder me from contributing, however.

Initially I had no idea that webpages could be made with Markdown, so I asked Carl what the webpage would end up looking like once ported, and he pointed me to an already ported website, namely konsole.kde.org, where you can see a clean example right here.

It's nice and proper, ain't that right?

For setting the actual environment, on Ubuntu-based systems such as mine (KDE Neon Unstable), you only need to sudo apt install ruby ruby2.5-dev git g++.

The packages ruby, ruby2.5-dev and g++ will be needed for compiling the website (yeah, I know, I had no idea compiling websites was a thing!) and checking how it looks with Markdown as well as for installing Jekyll, whereas git serves to download the webpage itself.

The next step is then to simply download the webpage with git. In this case, Carl created a new branch called jekyll to include the new layout theme without affecting the current website, so you only need to do git clone git://anongit.kde.org/websites/kmymoney-org.git --branch jekyll. For those who are not very acquainted with git: git is the program that downloads repositories, clone is the download command itself, git://anongit.kde.org/websites/kmymoney-org.git is the repository itself and --branch jekyll asks for the jekyll branch Carl made.

The next step is this one: gem install bundler jekyll --user-install. gem is Ruby's package manager, much akin to cpan, pip and others. With this command, bundler and Jekyll, the static site generator used to build the website, will be installed locally on your machine.

You should then move to the new kmymoney-org folder that should now exist in your home folder, so cd kmymoney-org.

The next command, bundle install --path vendor/bundle, will check if a file named Gemfile exists in your current folder, and if so, it will install all proper dependencies for the website to run properly, including KDE's Aether Jekyll theme configuration.
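For the curious, a minimal Gemfile looks something like this. This is only a sketch: the real kmymoney-org Gemfile pins the exact Jekyll version and lists KDE's Aether theme and any plugins the site needs.

```ruby
# Gemfile — declares the gems the site depends on; `bundle install` reads this
source "https://rubygems.org"

gem "jekyll"
# KDE's Aether theme and any Jekyll plugins the site uses would be listed here
```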

The last command is bundle exec jekyll serve. This effectively compiles the website, making it available locally on your browser. After running it, you should get a message like this:

Server address:
Server running… press ctrl-c to stop.

Which means the website should now be reachable by opening your browser and going to the server address mentioned (or localhost:4000, which is easier to type).

Work itself - Porting PHP to Markdown

The first file I managed to port was download.md; however, the first to be pushed by Carl was appimage.md, which would replace appimage.php. First, open it and "Save as…" appimage.md so that you have a Markdown copy. Let's take a look at it:

$page_title = "the BEST Personal Finance Manager for FREE Users, full stop.";
include ( "header.inc" );

<h1>How to install and use the AppImage version?</h1>

<p>For Linux users, who want to use a newer version than is available in their distro repository, we offer an AppImage version which reflects the current stable version including the latest fixes. It is build on a daily basis straight from the source.</p>
<p>Here's how you install and run it:</p>

<ul style="width:60%;">
<li>Download the file from <a href="https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/">https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/</a> to any location you like.</li>
<li>Change into the directory where you downloaded the file.</li>
<li>Make the file executable with <b>chmod +x <i>name-of-downloaded-file</i></b>.</li>
<li>Execute the file with <b>./<i>name-of-downloaded-file</i></b>.</li>
</ul>

For the very adventurous Linux user, there is an AppImage with the latest development version at <a href="https://binary-factory.kde.org/job/KMyMoney_Nightly_Appimage_Build/">https://binary-factory.kde.org/job/KMyMoney_Nightly_Appimage_Build/</a>.


Since the commands for preparation might not work with the current repository (which is up-to-date), you can simply copy all of this code into a file and name it appimage.php for training if you wish.

Let's go through each step individually, and it should be easier for us to port it. In the header section:

$page_title = "the BEST Personal Finance Manager for FREE Users, full stop.";
include ( "header.inc" );

The only thing we need to keep is "the BEST Personal Finance Manager for FREE Users, full stop.", so we can ignore most of it and replace it like so:

---
title: "the BEST Personal Finance Manager for FREE Users, full stop."
layout: page
---

This will allow the Jekyll theme to be applied on the header of the website automatically, so there's no need to include an extra file for the header like the PHP code does (include ( "header.inc" )).

If you save your file and go to localhost:4000/appimage, you should now be able to see what your Markdown page looks like! The command we ran before, bundle exec jekyll serve, updates the webpage automatically as you change its content. It's pretty handy for verifying whether you did something wrong.

The next part is very simple:

<h1>How to install and use the AppImage version?</h1>

Since <h1> and </h1> are the tags used for Headings/Headers, that is, the text style used for titles, we search for the Markdown equivalent. A quick Google or DDG search shows that <h1> can be done with # and a space before a title, so:

# How to install and use the AppImage version?

The next step is even easier:

<p>For Linux users, who want to use a newer version than is available in their distro repository, we offer an AppImage version which reflects the current stable version including the latest fixes. It is build on a daily basis straight from the source.</p>

Here, <p> stands for Paragraph, so just normal text. In Markdown, there's no marker for that, you just need to remove the <p> and </p> tags.

(In fact, you are free to improve the text as you port it. I just noticed while writing this post that build is a typo; it should be built. I reported this to Carl just now.)

For Linux users, who want to use a newer version than is available in their distro repository, we offer an AppImage version which reflects the current stable version including the latest fixes. It is built on a daily basis straight from the source.

The next part may look scary at first, but it's really not.

<ul style="width:60%;">
<li>Download the file from <a href="https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/">https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/</a> to any location you like.</li>
<li>Change into the directory where you downloaded the file.</li>
<li>Make the file executable with <b>chmod +x <i>name-of-downloaded-file</i></b>.</li>
<li>Execute the file with <b>./<i>name-of-downloaded-file</i></b>.</li>
</ul>

The tag <ul> stands for Unordered List; that is, it creates a bulleted list, so anything that goes inside it will be bulleted automatically. The tag <li> stands for List Item, and it holds the content of each item in the list. See how it looks:

Figure 3 - Unordered List.

The equivalent in Markdown is a simple * followed by a space and the content of each list item. We don't need to care about style="width:60%;", since the Jekyll theme takes care of styling the list.

The tag <a href="link here">Sample Text</a> creates hypertext, that is, a link to another website, which in Figure 3 is rendered in blue. There are several Markdown equivalents for this; the one I prefer reverses the order: [Sample Text](link here). It is just a matter of preference; I like how it looks.

This line is a bit more complicated:

Make the file executable with <b>chmod +x <i>name-of-downloaded-file</i></b>.

Any text enclosed within <b> and </b> becomes bold, and any text enclosed within <i> and </i> becomes italic in HTML/PHP. So, in this line, chmod +x is bold because it is inside <b></b>, and name-of-downloaded-file is both bold and italic, since it is inside <i></i>, which is itself inside <b></b>.

In Markdown, enclosing text between * renders it in italics, ** in bold, and *** in both italics and bold. Since in this line I wanted to emphasize chmod +x and name-of-downloaded-file to set them apart from the surrounding text, I decided to make chmod +x both bold and italic and name-of-downloaded-file only italic. Therefore, I enclosed the former in *** and the latter in *, like so:

***chmod +x*** *name-of-downloaded-file*

So the result is this:

* Download the file from [https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/](https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/) to any location you like.
* Change into the directory where you downloaded the file.
* Make the file executable with ***chmod +x*** *name-of-downloaded-file*.
* Execute the file with ***./name-of-downloaded-file***.
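To tie the rules together, here is a tiny Python sketch of my own (it is not part of the KDE repositories or the porting workflow) that mechanizes the handful of conversions we just did by hand. It only handles the tags covered in this post, so it is no substitute for porting a real page carefully:

```python
import re

def html_to_markdown(html):
    """Convert the few HTML tags discussed in this post to Markdown."""
    md = html
    # <h1>Title</h1>  ->  # Title
    md = re.sub(r"<h1>(.*?)</h1>", r"# \1", md)
    # <a href="link">text</a>  ->  [text](link)
    md = re.sub(r'<a href="(.*?)">(.*?)</a>', r"[\2](\1)", md)
    # <b>text</b>  ->  **text**, then <i>text</i>  ->  *text*
    md = re.sub(r"<b>(.*?)</b>", r"**\1**", md)
    md = re.sub(r"<i>(.*?)</i>", r"*\1*", md)
    # <li>item</li>  ->  * item
    md = re.sub(r"<li>(.*?)</li>", r"* \1", md)
    # Drop <p>, </p> and <ul ...>, </ul> wrappers entirely
    md = re.sub(r"</?(?:p|ul)\b[^>]*>", "", md)
    return md.strip()
```

Note that a purely mechanical conversion keeps the original nesting, so the chmod line comes out as **chmod +x *name-of-downloaded-file*** rather than the ***chmod +x*** *name-of-downloaded-file* styling I chose above; either renders fine.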

The last part of the file, the footer include, is the simplest: you can just remove it. It is not needed in Markdown, because the Jekyll theme already takes care of the footer. You know, the part that includes adorable icons of Kiki and Konqi, and the rest of the page below them!

At last, the result is this:

---
title: "the BEST Personal Finance Manager for FREE Users, full stop."
layout: page
---

# How to install and use the AppImage version?
For Linux users, who want to use a newer version than is available in their distro repository, we offer an AppImage version which reflects the current stable version including the latest fixes. It is built on a daily basis straight from the source.

Here's how you install and run it:

* Download the file from [https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/](https://binary-factory.kde.org/job/KMyMoney_Stable_Appimage_Build/) to any location you like.
* Change into the directory where you downloaded the file.
* Make the file executable with ***chmod +x*** *name-of-downloaded-file*.
* Execute the file with ***./name-of-downloaded-file***.

For the very adventurous Linux user, there is an AppImage with the latest development version at [https://binary-factory.kde.org/job/KMyMoney_Nightly_Appimage_Build/](https://binary-factory.kde.org/job/KMyMoney_Nightly_Appimage_Build/).

Don't forget to check that your file was saved with the .md extension, of course!

And that's it! It's not that complicated, is it? It was just fairly extensive since I wanted to explain each step properly for those who are not very acquainted with HTML/PHP and Markdown.

When your new Markdown webpage is ready, send it to Carl so he can push it to the repository.

The process of porting pages like this one is fairly easy, I'd say. There will be several HTML and PHP files that are too complicated for newcomers, but that is not an issue: more experienced contributors like Carl and Adrián can fix those instead. It really helps them if other contributors take care of these small tasks, since there's A LOT of content to be ported.

I will likely create another blog post about porting websites to Markdown in the future, targeting some more complex cases and explaining how to edit the header files directly through _config.yml! Nothing too complicated, of course. But that will have to wait for now.

If you're interested in helping now that you know it's relatively easy, you can take a look at the Promo team wiki page and the Get Involved wiki page, and contact us initially through the KDE Welcome Matrix room, or directly through Telegram, Matrix, or IRC! There's now also a Telegram group dedicated to KDE Websites. You can also join our Mailing List.

08 Dec 2019 8:39pm GMT

Advent of Code 2019

My work does not involve that much coding any more. I probably spend more time doing email, attending meetings, and preparing presentations than anything else these days. Still, my fingers itch if I don't get to write some code now and then.

This has resulted in small apps such as Mattemonster, where I pushed myself to get it into a presentable state so that I could publish it to Google Play. Anyone with kids starting with maths should try the app - my son loves it!

It also results in me doing the Advent of Code for a third time in a row. It is a nice exercise in problem solving, basic data structures, and algorithms - something I have far too few excuses to practice these days. I'm still frustrated with day 15 from last year. I also remember day 16 fondly.

This year I considered doing the AoC in Rust, to learn the language, but I ended up using Python to save time instead.

08 Dec 2019 12:23pm GMT