22 Aug 2017

Fedora People

Casper: systemd units with Docker

Once again the systemd-Docker pairing shows us how effective it is... formidably so. I was able to stand up a service in a few minutes without headaches or unexpected complications; admittedly, the DNS zone and reverse proxy parts were already in place, but still. I would sum up the story in four steps.


The first reflex to have when you want to stand up a new service is to search the official Docker registry for a decent image, and to check its Dockerfile to make sure it doesn't bundle any aberrations. I was looking for an image of the Searx service, and I came across a rare gem.


The Docker image is based on Alpine Linux 3.6, it has no volumes and therefore no persistent data, and it exposes only a single port; it doesn't get any simpler. As a bonus, it doesn't ship the build dependencies: it just compiles the program without doing anything twisted before or after. The run.sh script apparently makes a few adjustments to the program's configuration, notably generating an authentication key, which is regenerated on every start. So be it.

Victory \o/

All that remains is to make it concrete in your terminal...

# docker pull wonderfall/searx

The systemd unit will have the following features:

  1. Create the container and launch the process inside it
  2. The container uses OpenDNS's DNS server (works around a bug with NetworkManager and dnsmasq)
  3. The container listens on port 8089 of the localhost interface (important)
  4. The base URL is passed in as an environment variable
  5. The reload signal is forwarded to the confined process
  6. On stop, kill the process inside its container and leave the container behind
  7. A new container with a new name will be created on the next restart

# cat /etc/systemd/system/searx-casper-site.service
[Unit]
Description=Searx search engine

[Service]
# OpenDNS resolver and container port 8888 filled in here as assumptions
ExecStart=/usr/bin/docker run -i --dns 208.67.222.222 -p 127.0.0.1:8089:8888 \
    -e BASE_URL=https://search.casperlefantom.net wonderfall/searx
ExecReload=/usr/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target


I insist on the fact that your containers must listen only on the localhost interface, for security reasons. If the container listens on the Ethernet interface, its listening port will be reachable from anywhere on the Internet, and firewalld won't be able to help you. I have two reverse proxies in front; I would be rather annoyed if they could be bypassed >-)

# systemctl daemon-reload
# systemctl enable searx-casper-site.service
# systemctl start searx-casper-site.service

The program's logs are picked up automatically by journald. Removing containers that no longer have an active process is left to a manual cleanup operation. Unfortunately I haven't found another solution; disks that are too slow are quite problematic and cause filesystem errors during use.
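That manual cleanup could itself be scripted; as a sketch (the unit name and file path are my own, not from the post), a oneshot service triggered by hand or by a timer could sweep up exited containers:

```ini
# /etc/systemd/system/docker-cleanup.service (hypothetical)
[Unit]
Description=Remove exited Docker containers

[Service]
Type=oneshot
# List containers whose status is "exited" and remove them
ExecStart=/bin/sh -c '/usr/bin/docker ps -aq --filter status=exited | xargs -r /usr/bin/docker rm'
```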

22 Aug 2017 4:17am GMT

Sarup Banskota: Gaming Willpower

Watching my daily lifestyle evolve over the last two years, I have recently developed an amateur interest in human habits and willpower.

Habits are routines we go through without thinking much, because we're used to doing them successfully for a relatively long time. Willpower is the fuel that we can use to drive involuntary routines to completion. Habit is what drives us towards another cup of sugar drink against our meal plan's wishes; willpower is what helps prevent it.

Habits are formed through a repetitive anticipation trigger → routine → satisfaction cycle. A Starbucks builds the anticipation of good coffee, and we're programmed to walk towards it and grab a cuppa to win some satisfaction.

Strong willpower is often needed for routines whose results and success rates are not clear or instant. It is easier to form the habit of using a particular toothpaste every day, because we anticipate the cool-fresh feeling within minutes. It is difficult to form the habit of doing an intensive workout everyday at 6pm, because the anticipation is of pain and the result (weight loss, sexy legs, abs) is far away, making it less attractive and harder to visualise.

Stanford University psychologist Kelly McGonigal wrote a book on this topic. I read parts of it, and the key takeaway for me was that, just like physical power, willpower peaks and dips over the day: it gets depleted by use and replenished by rest and small victories.

Naturally, I wanted to take advantage of willpower peaks to get difficult things done. I also wanted to discover ways to boost my willpower when it was dropping. Through some reading and subsequent experimentation, what I found to work well for me is to list down which activities in my day consume more willpower than others, and to organise them around observed peaks. Ironically, willpower is also needed to follow this organised plan, because there is usually inertia that prevents one from frequently changing what they're currently doing.

Here are some example activities (classifications can vary from person to person):

So a good plan could start as follows:

  1. Wake up (only if you know you've slept enough) - this usually consumes substantial willpower. To make up, we need to achieve a small victory now: maybe lay out clothes for office, make the bed, get the laundry started.
  2. Follow it up with a few habit activities - brush teeth, breakfast (kept within easy reach). This period also allows time for forming a list of the unique high willpower activities for the day (leftover difficult work from yesterday etc).
  3. Time for a high willpower activity! Get cracking on JIRA-615 😉 Hopefully it works out and serves as a small victory. Small victories will usually elevate willpower levels again.
  4. In the event of a high willpower activity not working out, that's when we have to be careful - small failures can be pretty dangerous for the mood. Therefore, a suitable thing to do now is a chore that doesn't involve decision making, e.g. making a known bill payment or preparing a pre-decided lunch (more on this later). As we already know by now, a brainless chore will provide a small victory, and elevate willpower levels.
  5. Recently, I'm trying to get better at keeping an inventory of brainless chores. When I experience a small failure and willpower is low, I pick one of the chores and strike it off to unlock a small victory.

Making weekly decisions in bulk during a high willpower period is helpful. For example, recently I've been trying to plan in advance what meals I'm going to prepare through the week, and to shop for ingredients with a defined shopping list at a time when I'm seeking a small victory. This saves me a few food decisions during the work week, keeps me well fed, and I get a free weekly brainless chore to exchange for a small victory.

Call me crazy, but now I also go one level further by distributing breakfast and dinner ingredients in the home refrigerator and lunch ingredients in the office one. This allows for easy access when I need them, which means low willpower is needed to prepare meals, which in turn means I avoid skipping meals. This one habit has allowed me to not skip a single meal in the last week (usually I skip at least breakfast or lunch, or both).

If you're feeling like giving the willpower gaming a try, here are the two key takeaways from this post:

  1. Plan your day around your personal willpower peaks for maximum productivity. When willpower is high, do high willpower tasks. When willpower is low, aim for brainless tasks that lead to small victories.
  2. Move towards making habits. Habits follow an anticipation trigger → routine → satisfaction cycle. By making triggers easily visible, and satisfaction better defined, you convert high willpower activities to lower willpower ones. Through practice, you can make the routine brainless - voila, you just made a habit.

22 Aug 2017 12:00am GMT

21 Aug 2017


Adam Young: Customizing the KubeVirt Manifests

My cloud may not look like your cloud. The contract between the application deployment and the Kubernetes installation is a set of manifest files that guide Kubernetes in selecting, naming, and exposing resources. In order to make the generation of the Manifests sane in KubeVirt, we've provided a little bit of build system support.

The manifest files are templatized in a jinja style. I say style, because the actual template string replacement is done using simple bash scripting. Regardless of the mechanism, it should not be hard for a developer to understand what happens. I'll assume that you have your source code checked out in $GOPATH/src/kubevirt.io/kubevirt/
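As a minimal illustration of that bash-based substitution (this is a sketch, not the actual hack/build-manifests.sh, and the values are made up):

```shell
# Example values; the real ones come from hack/config-default.sh
docker_prefix="kubevirt"
docker_tag="latest"

# A jinja-style placeholder line as found in the *.yaml.in templates
template='image: {{ docker_prefix }}/virt-api:{{ docker_tag }}'

# Substitute each {{ key }} with its value
result=$(echo "$template" | sed -e "s|{{ docker_prefix }}|$docker_prefix|g" \
                                -e "s|{{ docker_tag }}|$docker_tag|g")
echo "$result"
```

Running this prints `image: kubevirt/virt-api:latest`, matching what you see in the generated manifests below.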

The template files exist in the manifests subdirectory. Mine looks like this:

haproxy.yaml.in             squid.yaml.in            virt-manifest.yaml.in
iscsi-demo-target.yaml.in   virt-api.yaml.in         vm-resource.yaml.in
libvirt.yaml.in             virt-controller.yaml.in
migration-resource.yaml.in  virt-handler.yaml.in

The simplest way to generate a set of actual manifest files is to run make manifests

make manifests
$ ls -l manifests/*yaml
-rw-rw-r--. 1 ayoung ayoung  672 Aug 21 10:17 manifests/haproxy.yaml
-rw-rw-r--. 1 ayoung ayoung 2384 Aug 21 10:17 manifests/iscsi-demo-target.yaml
-rw-rw-r--. 1 ayoung ayoung 1707 Aug 21 10:17 manifests/libvirt.yaml
-rw-rw-r--. 1 ayoung ayoung  256 Aug 21 10:17 manifests/migration-resource.yaml
-rw-rw-r--. 1 ayoung ayoung  709 Aug 21 10:17 manifests/squid.yaml
-rw-rw-r--. 1 ayoung ayoung  832 Aug 21 10:17 manifests/virt-api.yaml
-rw-rw-r--. 1 ayoung ayoung  987 Aug 21 10:17 manifests/virt-controller.yaml
-rw-rw-r--. 1 ayoung ayoung  954 Aug 21 10:17 manifests/virt-handler.yaml
-rw-rw-r--. 1 ayoung ayoung 1650 Aug 21 10:17 manifests/virt-manifest.yaml
-rw-rw-r--. 1 ayoung ayoung  228 Aug 21 10:17 manifests/vm-resource.yaml

Looking at the difference between, say, the virt-api template and the final yaml file:

$ diff -u manifests/virt-api.yaml.in manifests/virt-api.yaml
--- manifests/virt-api.yaml.in  2017-07-20 13:29:00.532916101 -0400
+++ manifests/virt-api.yaml     2017-08-21 10:17:10.533038861 -0400
@@ -7,7 +7,7 @@
     - port: 8183
       targetPort: virt-api
   externalIPs :
-    - "{{ master_ip }}"
+    - ""
     app: virt-api
@@ -23,14 +23,14 @@
       - name: virt-api
-        image: {{ docker_prefix }}/virt-api:{{ docker_tag }}
+        image: kubevirt/virt-api:latest
         imagePullPolicy: IfNotPresent
             - "/virt-api"
             - "--port"
             - "8183"
             - "--spice-proxy"
-            - "{{ master_ip }}:3128"
+            - ""
           - containerPort: 8183
             name: "virt-api"
@@ -38,4 +38,4 @@
         runAsNonRoot: true
-        kubernetes.io/hostname: {{ primary_node_name }}
+        kubernetes.io/hostname: master

make manifests, it turns out, just calls a bash script, ./hack/build-manifests.sh. This script uses two files to determine the values to use for template string substitution. First, the defaults: hack/config-default.sh. This is where master_ip gets its value. This file also gives priority to the $DOCKER_TAG environment variable. However, if you need to customize values further, you can create and manage them in the file hack/config-local.sh. The goal is that any of the keys from the -default file that are specified in hack/config-local.sh will use the value from the latter file. The set of keys with their defaults (as of this writing) that you can customize are:

binaries="cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api cmd/virtctl cmd/virt-manifest"
docker_images="cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api cmd/virt-manifest images/haproxy images/iscsi-demo-target-tgtd images/vm-killer images/libvirt-kubevirt images/spice-proxy cmd/virt-migrator cmd/registry-disk-v1alpha images/cirros-registry-disk-demo"
optional_docker_images="cmd/registry-disk-v1alpha images/fedora-atomic-registry-disk-demo"
manifest_templates="`ls manifests/*.in`"

Not all of these are for manifest files. The docker_images key selects the set of Docker images to build, in a command called from a different section of the Makefile. The network_provider is used in the Vagrant setup, and so on. However, most of the values are used in the manifest files. So, if I want to set a custom master IP address, I would have a hack/config-local.sh file overriding that key; re-running make manifests then regenerates the manifests:
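As a sketch, such an override file (with entirely made-up values) might look like:

```shell
# Hypothetical hack/config-local.sh -- any key set here wins over
# the corresponding value in hack/config-default.sh
master_ip=192.168.200.2
docker_prefix=registry.example.com/kubevirt
docker_tag=devel
```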

$  diff -u manifests/virt-api.yaml.in manifests/virt-api.yaml
--- manifests/virt-api.yaml.in  2017-07-20 13:29:00.532916101 -0400
+++ manifests/virt-api.yaml     2017-08-21 10:42:28.434742371 -0400
@@ -7,7 +7,7 @@
     - port: 8183
       targetPort: virt-api
   externalIPs :
-    - "{{ master_ip }}"
+    - ""
     app: virt-api
@@ -23,14 +23,14 @@
       - name: virt-api
-        image: {{ docker_prefix }}/virt-api:{{ docker_tag }}
+        image: kubevirt/virt-api:latest
         imagePullPolicy: IfNotPresent
             - "/virt-api"
             - "--port"
             - "8183"
             - "--spice-proxy"
-            - "{{ master_ip }}:3128"
+            - ""
           - containerPort: 8183
             name: "virt-api"
@@ -38,4 +38,4 @@
         runAsNonRoot: true
-        kubernetes.io/hostname: {{ primary_node_name }}
+        kubernetes.io/hostname: master

21 Aug 2017 5:02pm GMT

Justin M. Forbes: Do you have a laptop that isn't fully supported yet?

Sometimes it is a lot easier to debug some of these hardware support issues in person as opposed to over IRC or bugzilla. If you have a laptop with hardware that isn't working quite right, and happen to be heading to flock, bring it with you. I will be in the Kernel regression and perf testing session to help debug some of these. If you can't make that session, feel free to find me any time during the conference. If you don't have Fedora installed on these laptops, I will have USB keys with me to boot a live image for debugging purposes.

21 Aug 2017 5:01pm GMT

Fedora Magazine: Edit images with GNU Parallel and ImageMagick

Imagine you need to make changes to thousands or millions of images. You might write a simple script or batch process to handle the conversion automatically with ImageMagick. Everything is going fine, until you realize this process will take more time than expected.

After rethinking the process, you realize this task is taking so long because the serial method processes one image at a time. With that in mind, you want to modify your task to work in parallel. How can you do this without reinventing the wheel? The answer is simple: use GNU Parallel and the ImageMagick utility suite.

About GNU Parallel and ImageMagick

The GNU Parallel program can be used to execute jobs faster. If you use xargs or tee, you'll find parallel easy to use. It's written to have the same options as xargs. If you write loops in the shell, you'll find parallel can often replace most of the loops and finish the work faster, by running several jobs in parallel.
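Since parallel is written to have the same options as xargs, it is worth noting that plain xargs can also run jobs concurrently via its -P flag; a quick illustration (sorting the output, because completion order is not deterministic):

```shell
# Run four echo jobs with up to 4 in flight at once; sort for stable output
out=$(printf '%s\n' a b c d | xargs -P 4 -I {} echo "job {}" | sort)
echo "$out"
```

GNU parallel adds richer replacement strings and line-buffered output on top of this basic model.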

The ImageMagick suite of tools offers many ways to change or manipulate images. It can deal with lots of popular formats, such as JPEG, PNG, GIF, and more.

The mogrify command is part of this suite. You can use it to resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more.

Using parallel with mogrify

These packages are available in the Fedora repositories. To install, use the sudo command with dnf:

sudo dnf install ImageMagick parallel

Before you start running the commands below, be aware the mogrify command overwrites the original image file. If you want to keep the original image, use the convert command (also part of ImageMagick) to write to a different image file. Or copy your originals to a new location before you mogrify them.

Try this one-line script to resize all your JPEG images to half their original size:

cd ~/Pictures; find . -type f -iname "*.jpg" | parallel mogrify -resize 50% {}

If you want to keep the originals, use convert instead, appending -new to each output filename:

cd ~/Pictures; find . -type f -iname "*.jpg" | parallel convert -resize 50% {} {.}-new.jpg

Resize all your JPEG images to a maximum dimension of 960×600:

cd ~/Pictures; find . -type f -iname "*.jpg" | parallel mogrify -resize 960x600 {}

Convert all your JPEG images to PNG format:

cd ~/Pictures; find . -type f -iname "*.jpg" | parallel mogrify -format png {}
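The curly-brace tokens in these commands are GNU parallel replacement strings. This plain-shell sketch (no parallel required) shows what each one expands to for a sample path:

```shell
f="./vacation/beach.jpg"
noext="${f%.*}"    # what parallel's {.} yields: extension removed
base="${f##*/}"    # what parallel's {/} yields: basename only
echo "{}  -> $f"       # {} is the argument as-is
echo "{.} -> $noext"
echo "{/} -> $base"
```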

For more information about the mogrify command and examples of usage, refer to this link. Enjoy!

Photo by John Salzarulo on Unsplash.

21 Aug 2017 8:00am GMT

Julita Inca Chiroque: Recap: Workshop of GNOME on Fedora at CONECIT 2017

CONECIT 2017 (held at UNAS in Tingo Maria, Peru) included several workshops in this edition, and one of them was about Linux, with GNOME and Fedora. I must thank the organizers and the volunteers who helped me before the workshop. Special thanks to Jhon Fitzgerald for his help during the installation of Fedora 26 and the updating of packages and programs.

The workshop gathered students mostly from Ica, Pucallpa, Cañete and Tingo Maria. We started by showing and explaining some foundations of GNOME and Fedora: I talked about the history of the projects, interfaces, applications, GSoC programs, channels of communication such as IRC, Bugzilla, and GNOME Builder. During the workshop I had the pleasure of being helped by the local Lima team of GNOME + Fedora. Thanks Solanch Ccasa, Lizbeth Lucar, Toto Cabezas and Leyla Marcelo.

We had only two hours, and the low bandwidth only allowed us to show the Newcomers guide and to download and install the Builder package. After that I gave prizes to the participants who first finished the instructions I gave on each topic mentioned. As you can see in the pictures, I am glad that the number of women in Linux workshops is increasing. We also shared a delicious cake for the 20th birthday of GNOME.

Thanks to the GNOME Foundation and Fedora LATAM for the financial support that let us spread some knowledge about these projects. Thanks to CONECIT for trusting in our work, and especially to one of the organizers of this great event: thanks so much, Fatima Rouilon!


21 Aug 2017 4:45am GMT

20 Aug 2017


veon: Just finished, almost done.

flock-almost done

The last revision of the slide for the workshop has been completed.

What do I talk about?

Oh yes, right,

It is with great pleasure that I announce my first involvement with Flock 2017 in Hyannis, Massachusetts, also as a speaker.

I will come from Italy with my mentor and websites coordinator, Robert Mayr (robyduck), along with the person who initially inspired me to actively participate in the Fedora Project, Gabriele Trombini (mailga).

Together with robyduck I will be presenting a Fedora Websites workshop: an overview of the principal features of the Fedora websites, plus hands-on work on real issue tickets. Attendees will learn how the Fedora websites are made, with which tools, and how they can contribute.

What do I expect from Flock? Definitely a unique experience: meeting many of the developers and contributors of the Fedora Project. It will be a full immersion in ideas and experiences.

The Fedora Project has already given me so much and I hope I can learn a lot more.

A dream come true thanks to the Fedora Project.

20 Aug 2017 8:00pm GMT

Levente Kurusa: Paper review: Shadow Kernels

Shadow Kernels: A General Mechanism For Kernel Specialization in Existing Operating Systems

Application selectable kernel specializations by Chick et al.


Chick et al. start their paper by noting that existing operating systems share one single kernel .text section between all running tasks, and that this fact contradicts recent research, which has shown time and again that profile-guided optimizations are beneficial. Their solution involves remapping the running kernel's page tables on context switches and exposing to user space the ability to choose which "shadow kernel" to bind a process to. The authors have implemented their prototype using the Xen hypervisor and argue that it can thus be extended to any operating system that runs on Xen.


The authors argue that in traditional monolithic operating systems, system calls are fast because they don't require swapping page tables and flushing the TLB's caches. However, the disadvantage of such a system is that per-process optimization of the kernel is impossible. To fix the discrepancy between the relevant research on profile-guided optimizations and the apparent lack of adoption, they introduce "shadow kernels", a per-process optimization mechanism for the kernel's .text section.


The authors of this paper highlight three benefits of "shadow kernels" that have motivated them.

Firstly, there is the already mentioned recent research into profile-guided optimization. One of the unsolved issues of such optimization is that it must be based on a representative workload. They argue that shadow kernels allow applications executing on the same machine to each run with their own kernel, optimized with profile-guided optimization specific to that program. The problem of a representative workload is thus solved, because you presumably know the profile of your own program and no longer need to care about other processes running on the same machine.

Secondly, scoping probes. It is well known that Linux has multiple instrumentation primitives, for instance Kprobes and DTrace. The authors argue that when only one process wants to be instrumented, every other process in the system is still impacted by the overhead of installing the primitive. In contemporary operating systems it is simply impossible to restrict the scope of a probe to a single process or a group of processes. Shadow kernels again present a solution here, by replacing the pages of the affected process' kernel .text region.

Finally, the third motivating factor is the overall optimization of the kernel and its fast paths. They argue that while security checks live in the kernel, there is a strong case for trusted processes, which do not necessarily need the protections that are in place; in those cases the additional checks are a bottleneck to their performance. With shadow kernels, it is possible to remove security checks from the address space of one process while leaving them intact in all other processes.


The most important parts of the shadow kernel design can be nicely summed up in the authors' own words: An application can spawn a new shadow kernel through a call to a kernel module. This creates a copy-on-write version of the currently running kernel, which is mapped into the memory of the process that created it. As a process registers probes, the specialization mechanism makes modifications to the kernel's instruction stream. Due to the use of copy-on-write, every page that is modified is then physically copied, leaving the original kernel text untouched. Modified functions are replaced using standard mechanisms: either rewriting the entire block, if the "replacee" is shorter or the same length, or using an unconditional jump that is easy to branch-predict.

Figure: Overview of the design of shadow kernels

The above figure gives us a little overview on the actual architectural details of this novel technique.

One of the more interesting problems with this approach is dealing with kernel code that is not bound to a single process (think kworkers, interrupts, and schedulers). The authors mention that it is difficult to simply remap the pages, because other processes may want to augment the same page in a different way. The solution they propose is giving up isolation and using the code of a "union" shadow kernel that contains all of the probes.


Probably one of the most fascinating things I've read in this paper is the fact that their entire implementation is 250 lines of code, implemented entirely as a Linux kernel module. Much of the implementation is thus specific to the Linux kernel, and I don't think describing it here would be of much value; anyone interested can read the paper and find more detail about how the implementation adheres to the design outlined above.


To further motivate the work, the authors show that probing the most frequently called kernel function across all CPUs reduces single-thread performance by 30%, worsening to 50% if you probe the top three functions. They tested this setup by monitoring the performance of memcached while installing probes in an unrelated process. From this result, it is clear that some technique to solve this is worthwhile.

20 Aug 2017 12:00pm GMT

19 Aug 2017


Robert Mayr: Flock 2017 – I’m waiting for you, Cape Cod!

I am very happy I was able to organize my family and holidays to attend Flock again, this will be my third edition after 2013 and 2015, where I had a great experience and made a lot of friends, so I am sure this year will be even better ;)
The flight itself will already be very nice, because this year I will travel with Gabriele Trombini (mailga) and a Flock first-timer, Andrea Masala (veon). Cape Cod is a really nice venue, and although I will be very busy during the conference, I hope we will have a couple of hours for some sightseeing.
I will be co-speaker in a session I have normally given myself over the last years, but I am happy Andrea will handle it this year. He helped out a lot during the last two releases and I hope he will do even more in the near future. Our workshop will be rather interesting, because we will put our hands on real tickets, look at how to fix them, and also answer questions about how we handle, develop, or debug the websites we are managing.
My talk, given with Gabriele, is about the Mindshare initiative, a Council objective for 2017 which aims to retool the outreach teams. As you can probably already tell, this will affect not only ambassadors but all outreach teams in the Fedora world. If you are interested in knowing more, or want to give feedback on the plans we have, then come to my talk; I will be happy to continue discussions even after the talk, maybe in front of a cold beer :D
Other sessions will see me directly involved, for example the Council session, but I will also attend the Ambassadors workshop session. Not only because it is directly related to the Mindshare talk, but because, as the current FAmSCo chair, I am very interested in it.

See you all there, and thanks to Fedora for making this possible.

19 Aug 2017 7:55pm GMT

Daniel Vrátil: KDE PIM in Randa 2017

Randa Meetings is an annual meeting of KDE developers in a small village in the Swiss Alps. It is the most productive event I have ever attended (since there's nothing much else to do but hack from morning until night and eat Mario's chocolate :-)) and it's very focused - this year's main topic is making KDE more accessible.

Several KDE PIM developers will be present as well - and while we will certainly want to hear others' input regarding the accessibility of Kontact, our main goal in Randa will be to port away from KDateTime (the KDE4 way of handling date and time in software) to QDateTime (the Qt way of handling date and time). This does not sound very interesting, but it's a very important step for us, as afterward we will finally be free of all legacy KDE4 code. It is no simple task, but we are confident we can finish the port during the hackfest. If everything goes smoothly, we might even have time for some more cool improvements and fixes in Kontact ;-)

I will also close the KMail User Survey right before the Randa meetings so that we can go over the results and analyze them. So, if you haven't answered the KMail User Survey yet, please do so now and help spread the word! There are still 3 more weeks left to collect as many answers as possible. After Randa, I will be posting a series of blog posts regarding results of the survey.

And finally, please support the Randa Meetings by contributing to our fundraiser - the hackfest can only happen thanks to your support!

Konqi can't wait to go to Randa again!

You can read reports from my previous adventures in Randa Meetings in 2014 and 2015 here:

"Hacking my way through Randa" - Daniel Vrátil's blog: http://www.dvratil.cz/2014/08/hacking-my-way-through-randa/

"KDE PIM in Randa" - Daniel Vrátil's blog: http://www.dvratil.cz/2015/08/kde-pim-in-randa/

19 Aug 2017 1:06pm GMT

18 Aug 2017


Matthias Clasen: Post-GUADEC distractions

Like everybody else, I had a great time at GUADEC this year.

One of the things that made me happy is that I could convince Behdad to come, and we had a chance to finally wrap up a story that has been going on for much too long: Support for color Emoji in the GTK+ stack and in GNOME.

Behdad has been involved in the standardization process around the various formats for color glyphs in fonts since the very beginning. In 2013, he posted some prototype work for color glyph support in cairo.

This was clearly not meant for inclusion; he was looking for assistance turning it into a mergeable patch. Unfortunately, nobody picked this up until I gave it a try in 2016. But my patch was not quite right, and things stalled again.

We finally picked it up this year. I produced a better cairo patch, which we reviewed, fixed and merged during the unconference days at GUADEC. Behdad also wrote and merged the necessary changes for fontconfig, so we can have an "emoji" font family, and made pango automatically choose that font when it finds Emoji.

After GUADEC, I worked on the input side in GTK+. As a first result, it is now possible to use Control-Shift-e to select Emoji by name or code.

Video: https://blogs.gnome.org/mclasen/files/2017/08/c-s-e.webm

This is a bit of an easter egg though, and only covers a few Emoji like ❤. The full list of supported names is here.

A more prominent way to enter Emoji is clearly needed, so I set out to implement the design we have for an Emoji chooser. The result looks like this:

As you can see, it supports variation selectors for skin tones, and lets you search by name. The clickable icon has to be enabled with a show-emoji-icon property on GtkEntry, but there is a context menu item that brings up the Emoji chooser, regardless.

I am reasonably happy with it, and it will be available both in GTK+ 3.92 and in GTK+ 3.22.19. We are bending the api stability rules a little bit here, to allow the new property for enabling the icon.

Working on this dialog gave me plenty of opportunity to play with Emoji in GTK+ entries, and it became apparent that some things were not quite right. Some Emoji just did not appear sometimes. This took me quite a while to debug, since I was hunting for a rendering issue, when in the end it turned out to be insufficient support for variation selectors in pango.

Another issue that turned up was that pango sometimes placed the text caret in the middle of an Emoji, and Backspace deleted Emoji piecemeal, one character at a time, instead of all at once. This required fixes in pango's implementation of the Unicode segmentation rules (TR29). Thankfully, Peng Wu had already done much of the work for this; I just fixed the remaining corner cases to handle all Emoji correctly, including skin tone variations and flags.

So, what's still missing? I'm thinking of adding optional support for completion of Emoji names like :grin: directly in the entry, like this:

<video class="wp-video-shortcode" controls="controls" height="450" id="video-1879-2" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2017/08/emoji-completion.webm?_=2" type="video/webm">https://blogs.gnome.org/mclasen/files/2017/08/emoji-completion.webm</video>

But this code still needs some refinement before it is ready to land. It also overlaps a bit with traditional input method functionality, and I am still pondering the best way to resolve that.
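The core of the :name: completion idea (as a sketch, not GTK+'s actual implementation) is a lookup from names to Emoji; the tiny table below is illustrative, where the real mapping would come from the Unicode name data GTK+ ships.

```python
import re

# Hypothetical, tiny name table — illustrative only; a real
# implementation would use the full Unicode Emoji name list.
EMOJI_NAMES = {"grin": "\U0001F601", "heart": "\u2764"}

def expand_emoji(text):
    """Replace :name: sequences with the matching Emoji, if known."""
    return re.sub(r":(\w+):",
                  lambda m: EMOJI_NAMES.get(m.group(1), m.group(0)),
                  text)

assert expand_emoji("I :grin: at you") == "I \U0001F601 at you"
# Unknown names are left untouched:
assert expand_emoji("no :match: here") == "no :match: here"
```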

To try out color Emoji, you can either wait for GNOME 3.26, which will be released in September, or you can get:

It was fun to work on this, I hope you enjoy using it! ❤

18 Aug 2017 9:25pm GMT

Fedora Badges: New badge: FrOSCon 2017 Attendee !

FrOSCon 2017 Attendee: You visited the Fedora booth at FrOSCon 2017!

18 Aug 2017 7:55pm GMT

Richard Hughes: Shipping PKCS7 signed metadata and firmware

Over the last few days I've merged in the PKCS7 support into fwupd as an optional feature. I've done this for a few reasons:

Did you know GPGME is a library based around screen scraping the output of the gpg2 binary? When you perform an action using the libgpgme APIs you're literally injecting a string into a pipe and waiting for it to return. You can't even use libgcrypt (the thing that gpg2 uses) directly as it's way too low level and doesn't have any sane abstractions or helpers to read or write packaged data. I don't want to learn LISP S-Expressions (yes, really) and manually deal with packing data just to do vanilla X509 crypto.

Although the LVFS instance only signs files and metadata with GPG at the moment I've added the missing bits into python-gnutls so it could become possible in the future. If this is accepted then I think it would be fine to support both GPG and PKCS7 on the server.

One of the temptations for X509 signing would be to get a certificate from an existing CA and then sign the firmware with that. From my point of view that would be bad, as any firmware signed by any certificate in my system trust store would be marked as valid, when really all I want to do is check for a specific certificate (or a few) that I know will be providing certified working firmware. Although I could achieve this to some degree with certificate pinning, it's not so easy if there is a hierarchical trust relationship or anything more complicated than a simple 1:1 relationship.

So that this is possible, I've created an LVFS CA certificate, and also a server certificate for the specific instance I'm running on OpenShift. I've signed the instance certificate with the CA certificate and am creating detached signatures with an embedded (signed-by-the-CA) server certificate. This seems to work well, and means we can issue other certificates (or CRLs) if the server ever moves or the trust is compromised in some way.

So, tl;dr: (should have been at the top of this page…) if you see a /etc/pki/fwupd/LVFS-CA.pem appear on your system in the next release you can relax. Comments, especially from crypto experts welcome. Thanks!

18 Aug 2017 4:28pm GMT

Bodhi: Bodhi 2.10.0 released

Compatibility changes

This release of Bodhi has a few changes that are technically backward incompatible in some senses, but it was determined that each of these changes is justified without raising Bodhi's major version, often because the features either did not work at all or were unused. Justifications for each are given inline.


Bug fixes

Development improvements

Release contributors

The following developers contributed to Bodhi 2.10.0:

18 Aug 2017 2:49pm GMT

Ben Williams: F26-20170815 Updated ISOs released

We, the Fedora Respins-SIG, are happy to announce new F26-20170815 updated Lives (with kernel 4.12.5-300).
This is the first set of updated ISOs for Fedora 26.

With this release we include F26-MD-20170815, a multi-desktop ISO in support of FOSSCON (a free and open source software conference held annually in Philadelphia, PA).

With F26 we are still using Livemedia-creator to build the updated lives.

To build your own please look at https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

This new build of F26 updated Lives will save you about 600 MB of updates after install.

As always, the ISOs can be found at http://tinyurl.com/Live-respins2

18 Aug 2017 2:43pm GMT

Tong Hui: Report for COSCUP 2017

Early this month, as a GNOME Foundation member, I participated in the 12th COSCUP (Conference for Open Source Coders, Users & Promoters). Since 2006, COSCUP has made a significant contribution to promoting free and open source software in Taiwan. Over this dozen years of FOSS promotion, Taiwan's contributor base has grown faster than that of any other Asian country, so I wanted to learn what makes FOSS in Taiwan so successful, and also to advocate GNOME at the conference.

Thousands of participants joined COSCUP 2017, with more than 80 talks and workshops by hundreds of free and open source community contributors and promoters.

As a GNOME Foundation member, together with Bin Li, I took on the task of promoting GNOME and collaborating with the local Free Desktop community at this COSCUP.

I also gave a short talk together with Mandy Wang at this COSCUP. We talked about how to recruit my girlfriend into FOSS and 'train' her to become a GNOME contributor.

<figure class="wp-caption aligncenter" id="attachment_1329" style="width: 840px"><figcaption class="wp-caption-text">My talk with Mandy Wang (Photo by Vagabond, CC BY-SA 2.0)</figcaption></figure>

China-Taiwan contributors Meet-up

At the BoF (Birds of a Feather) session at this COSCUP, Mandy and I from mainland China, together with Franklin Weng (KDE-TW), zerng07, and freedomknight from Taiwan, who work mostly on localization of GNOME and KDE, held a local free desktop meet-up that night.

First we reviewed what we had done in past years, and shared the difficulties we had met and how we solved them. Then we discussed what we should and need to do to promote the free desktop in China and Taiwan.

By chatting with the Taiwanese contributors I learned a great deal, which will help us do more than before.

<figure class="wp-caption aligncenter" id="attachment_1328" style="width: 840px"><figcaption class="wp-caption-text">With some staff of COSCUP 2017 (Photo by Vagabond CC BY-SA 2.0)</figcaption></figure>

Finally, thanks to the hundreds of volunteers working at COSCUP who made this event wonderful and awesome!

18 Aug 2017 10:02am GMT