16 Jul 2019

Fedora People

Fedora Community Blog: Application service categories and community handoff

The Community Platform Engineering (CPE) team recently wrote about our face-to-face meeting, where we developed a team mission statement and a framework for making our workload more manageable. Sharper focus will let us progress higher-priority work for multiple stakeholders and move the needle on more initiatives more efficiently than we do today.

During the F2F we walked through how to gracefully remove ourselves from applications that do not fit our mission statement. The next couple of months will be a transition phase, as we want to ensure continuity and cause minimal disruption to the community. To assist in that strategy, we analysed our applications and came up with four categories to which they could belong.

Application service categories

1. We maintain it, we run it

This refers to apps that are in our mission and that we need to both actively maintain and host. CPE will be responsible for all development and lifecycle work on these apps, but we do welcome contributors. This is business as usual for CPE, and it has a predictable cost from a planning and maintenance perspective.

2. We don't maintain it, we run it

This represents apps that are in our infrastructure but whose overall maintenance we are not responsible for. We provide power and ping at a server level and will attempt to restart apps that have encountered an issue. We are happy to host them, but their maintenance, including development of new features and bug fixes, is no longer in our day-to-day remit. This represents light work for us, as ownership of the actual applications resides outside of CPE, with our responsibility limited to lifecycle management of the app.

3. We don't maintain it, we don't run it

This represents an application that we need to move into a mode where somebody other than CPE owns it. This means some work on CPE's side to ensure continuity of service and to ensure that an owner is found. Our Community OpenShift instance will be offered to host services here. Apps that fall into this category have mostly been in maintenance mode on the CPE side, but they still occupy "head space". We want them to live and evolve exclusively outside of CPE, on a hosting environment that we can provide as a service. Here, we will provide the means to host an application and will fully support the Community PaaS, but any app maintenance or lifecycle events will be in the hands of the people running the app, not the CPE team.

These are apps for which we are a main contributor and which drain time and effort. In turn, this causes us difficulty in planning wider initiatives because of the unpredictable nature of the requests. Category 3 apps are where our ongoing work is more historical than strategic.

Winding down apps

Category 3 apps ultimately do not fit within CPE's mission statement, and our intention here is to have a maintenance wind-down period. That period will be decided on an app-by-app basis, with a typical wind-down being on the order of 1-6 months. In exceptional circumstances we may extend this to a maximum of 12 months. That time frame will be decided in consultation with the next maintainer and the community at large to allow for continuity. For apps that find themselves here, we are providing a community service in the form of Community OpenShift ("Communishift") that could become a home for those apps. However, the CPE team won't maintain a Service Level Expectation (SLE) for availability or fixes. Our SLE is a best effort to keep services and hardware on the air during working hours while being creative with staff schedules and time zones. We have this documented here for further information. Ideally these apps would have owners outside the team to whom requests could be referred, but they would not be a CPE responsibility.

We are working on formalising the process of winding down an application by creating a Standard Operating Procedure (SOP). At a high level, that should include a project plan derived from consultation with the community. That may require work from the CPE team to get the application to a maintainable state. That work could be on documentation, training, development of critical fixes/features, or help with porting it to another location. Ultimately, the time spent on that kind of work is a fraction of the longer-term maintenance cost. Our intention is to run all of the apps through the Fedora Council first, in case the Council prefers a specific alternative to a particular service or app.

4. We turn it off

This represents applications that are no longer used or have been superseded by another application. This may also include applications that were not picked up by other members of the community. Turning an app off does not equate to hard removal, and if an owner can be found, or a case made for why CPE should own it, we can revisit the decision.

Initial app analysis

To help us identify that path, we evaluated a first round of apps at our F2F.

Category 1

For completeness, we are highlighting one example of a Category 1 application that we will always aim to maintain and keep on the air. Bodhi is one such example: it is one of the core services used to build and deliver Fedora and was developed specifically around the needs of the Fedora project. This makes it one of a kind; there is no application out there that could be leveraged to replace it, and any attempt to replace it with something else would have repercussions for the entire build process and likely the entire community.

Category 2

Wiki x 2 (this may become Category 3 after further analysis) - CPE maintains two wiki instances, one for Fedora and one for CentOS. Both are used by the communities in ways that currently make them impossible to remove. In Fedora's case the wiki is also used by QA (Quality Assurance), making it an integral part of the Fedora release process and thus not something that can be handed to the community to maintain.

Category 3

Overall, the trend for these tools will be to move them to a steady-state of no more fixes/enhancements. The community will be welcome to maintain, and/or locate a replacement service that satisfies their requirements. Replacements can be considered by the Council for funding as appropriate.

Mailman/HyperKitty/Postorius - Maintaining this stack has cost the equivalent of an entire developer's time over the long term. However, we recognize that projects need mailing lists for discussion and collaboration. No further features will be added here, and depending on community needs an outside mailing-list service could be contracted.

Elections - This application has been in maintenance mode for some time now. We recently invested some time in it to port it to python3, use a newer authentication protocol (OpenID Connect) and move it to OpenShift, while integrating Ben Cotton's work to add support for badges to elections. We believe elections is in a technical state that is compatible with a low-maintenance model for a community member who would like to take it over. As a matter of fact, we have already found said community member in the person of Ben Cotton (thank you Ben!).

Fedocal - This application has been in maintenance mode for some time. It has been ported to python3 (but hasn't had a release with python3 support yet). There is work in progress to port it to OpenID Connect and have it run in OpenShift. It still needs to be ported to fedora-messaging.

Nuancier - This application has been in maintenance mode as well. It has been ported to python3 but needs to be ported to OpenID Connect, fedora-messaging and moved to OpenShift.

Badges - This application has been in maintenance mode for a while now. The work to port it to python3 has started, but it still needs to be ported to OpenID Connect and fedora-messaging, and moved to OpenShift. We invested some time recently to identify the application's highest pain points, log them in the issue tracker as user stories, and start prioritizing them. We cannot, however, commit to fixing them.

For Fedocal and Nuancier, we are thinking of holding virtual hackfests on Fridays for as long as there is work to do on them, and advertising this to the community to try to spark interest in these applications, in the hope that we find someone interested enough (and, after these hackfests, knowledgeable enough) to take over their maintenance.

Category 4

Pastebin - fpaste.org is a well-known and well-used service in Fedora, but it has been a pain point for the CPE team for a few years. The available pastebin software is most often unmaintained, and finding and deploying a replacement is full-time work for a few weeks. Finally, this type of service also comes with high legal costs, as we are often asked to remove content from it, despite the limited time that content remains available. CentOS also runs a pastebin service; it has the same long-term costs, and a similar conversation will need to happen there.

Apps.fp.o - This is the landing page available at https://apps.fedoraproject.org/. Its content has not been kept up to date and it needs an overall redesign. We may be open to handing it over to a community member, but we do not believe the gain is worth the time investment in finding that person.

Ipsilon - Ipsilon is our identity provider. It supports multiple authentication protocols (OpenID 2.0, OpenID Connect, SAML 2.0, …) and multiple backends (FAS, LDAP/FreeIPA, htpasswd, system accounts…). While it was originally shipped as a tech preview in RHEL, it no longer is, and the team working on it has been refocused on other projects. We would like to move all our applications to OpenID Connect or SAML 2.0 (instead of OpenID 2.0 with custom extensions) and replace FAS with an IPA-based solution, which in turn would allow us to replace Ipsilon with a better-maintained solution, likely Red Hat Single Sign-On. These dependencies make this a long-term effort. We will need to announce to the community that this means shutting down the public OpenID 2.0 endpoints, so any community services using that protocol will need to move to OpenID Connect as well.

Over the coming weeks we will set up our process to begin the formal wind-down of the items listed above that are in Category 3 or 4, and will share that process and plan with the Fedora Council.

The post Application service categories and community handoff appeared first on Fedora Community Blog.

16 Jul 2019 6:34am GMT

Open Source Security Podcast: Episode 154 - Chat with the authors of the book "The Fifth Domain"

Josh and Kurt talk to the authors of a new book The Fifth Domain. Dick Clarke and Rob Knake join us to discuss the book, cybersecurity, US policy, how we got where we are today and what the future holds for cybersecurity.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/10497236/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

Comment on Twitter with the #osspodcast hashtag

16 Jul 2019 12:10am GMT

15 Jul 2019


Charles-Antoine Couret: Results of the Fedora elections 06/19

As I reported recently, Fedora organized elections to partially renew the seats of its FESCo, Mindshare, and Council bodies.

As always, the ballot is a range vote. Each voter can give every candidate a number of points, up to a maximum equal to the number of candidates, with a minimum of 0. This makes it possible to show approval of one candidate and disapproval of another without ambiguity, and nothing prevents giving two candidates the same score.

The results for the Council (with a single candidate) are:

  # votes |  name
 176          Till Maas (till)

For reference, the maximum possible score was 184 * 1 votes, i.e. 184.

The results for FESCo (only the first four are elected) are:

  # votes |  name
 695     Stephen Gallagher (sgallagh)
 687     Igor Gnatenko (ignatenkobrain)
 615     Aleksandra Fedorova (bookwar)
 569     Petr Šabata (psabata)
 525     Jeremy Cline
 444     Fabio Valentini (decathorpe)

For reference, the maximum possible score was 205 * 6, i.e. 1230.

The results for Mindshare (only the first is elected) are:

  # votes |  name
 221     Sumantro Mukherjee (sumantrom)
 172     Luis Bazan (lbazan)

For reference, the maximum possible score was 178 * 2, i.e. 356.

We can note that, overall, the number of voters was similar across ballots, around 175-200, which is slightly fewer than last time (200-250 on average). The scores are also rather spread out.

Congratulations to the participants and to those elected, and all the best for the Fedora project.

15 Jul 2019 6:13pm GMT

Gwyn Ciesla: Duplicity 0.8.01

Duplicity 0.8.01 is now in rawhide. The big change here is that it now uses Python 3. I've tested it in my own environment, both on its own and with deja-dup, and both work.

Please test and file bugs. I expect there will be more, but with Python 2 reaching EOL soon, it's important to move everything we can to Python 3.



15 Jul 2019 3:10pm GMT

William Brown: CPU atomics and orderings explained

CPU atomics and orderings explained

Sometimes the question comes up about how CPU memory orderings work, and what they do. I hope this post explains it in a really accessible way.

Short Version - I wanna code!

Summary - The memory model you commonly see is from C++ and it defines:

  • Relaxed
  • Acquire
  • Release
  • Acquire/Release (sometimes AcqRel)
  • SeqCst

These are memory orderings - every operation is "atomic", so it will work correctly, but these rules define how the memory and code around the atomic are influenced.

If in doubt - use SeqCst - it's the strongest guarantee and prevents all re-ordering of operations and will do the right thing.

The summary is:

  • Relaxed - no ordering guarantees, just execute the atomic as is.
  • Acquire - all code after this atomic, will be executed after the atomic.
  • Release - all code before this atomic, will be executed before the atomic.
  • Acquire/Release - both Acquire and Release - ie code stays before and after.
  • SeqCst - a stronger consistency guarantee than Acquire/Release.

Long Version … let's begin …

So why do we have memory and operation orderings at all? Let's look at some code to explain:

let mut x = 0;
let mut y = 0;
x = x + 3;
y = y + 7;
x = x + 4;
x = y + x;

Really trivial example - now to us as a human, we read this and see a set of operations that are linear by time. That means, they execute from top to bottom, in order.

However, this is not how computers work. First, compilers will optimise your code, and optimisation means re-ordering of the operations to achieve better results. A compiler may optimise this to:

let mut x = 0;
let mut y = 0;
// Note removal of the x + 3 and x + 4, folded to a single operation.
x = x + 7;
y = y + 7;
x = y + x;

Now there is a second element. Your CPU presents the illusion of running as a linear system, but it's actually an asynchronous, out-of-order task execution engine. That means a CPU will reorder your instructions, and may even run them concurrently and asynchronously.

For example, your CPU will have both x + 7 and y + 7 in the pipeline, even though neither operation has completed - they are effectively running at the "same time" (concurrently).

When you write a single-threaded program, you generally won't notice this behaviour. This is because a lot of smart people write compilers and CPUs to give the illusion of linear ordering, even though both are operating very differently.

Now we want to write a multithreaded application. Suddenly this is the challenge:

We write a concurrent program, in a linear language, executed on a concurrent asynchronous machine.

This means the challenge is the translation between our mind (thinking about the concurrent problem), the program (which we have to express as a linear set of operations), and the CPU it runs on (an async concurrent device).

Phew. How do computers even work in this scenario?!

Why are CPUs async?

CPUs have to be async to be fast - remember Spectre and Meltdown? These are attacks based on measuring the side effects of a CPU's asynchronous behaviour. While computers are "fast", these attacks will always be possible, because making a CPU synchronous is slow - and asynchronous behaviour will always have measurable side effects. Every modern CPU's performance is an illusion of async black magic.

A large portion of the async behaviour comes from the interaction of the CPU, cache, and memory.

In order to provide the "illusion" of a coherent synchronous memory interface, there is no separation of your program's cache and memory. When the CPU wants to access "memory", the CPU cache is utilised transparently and will handle the request; only on a cache miss will we retrieve the values from RAM.

(Aside: in almost all cases more CPU cache, not higher frequency, will make your system perform better, because a cache miss means your task stalls waiting on RAM. Ohh no!)

CPU -> Cache -> RAM

When you have multiple CPUs, each CPU has its own L1 cache:

CPU1 -> L1 Cache -> |              |
CPU2 -> L1 Cache -> | Shared L2/L3 | -> RAM
CPU3 -> L1 Cache -> |              |
CPU4 -> L1 Cache -> |              |

Ahhh! Suddenly we can see where problems can occur - each CPU has an L1 cache, which is transparent to memory but unique to the CPU. This means each CPU can change the same piece of memory in its L1 cache without the other CPUs knowing. To help explain, let's look at a demo.

CPU just trash my variables fam

We'll assume we now have two threads - my code is in rust again, and there is a good reason for the unsafes - this code really is unsafe!

// assume global x: usize = 0; y: usize = 0;

THREAD 1                        THREAD 2

if unsafe { *x == 1 } {          unsafe {
    unsafe { *y += 1 }              *y = 10;
}                                   *x = 1;
                                 }

At the end of execution, what state will X and Y be in? The answer is "it depends":

  • What order did the threads run?
  • The state of the L1 cache of each CPU
  • The possible interleavings of the operations.
  • Compiler re-ordering

In the end the result of x will always be 1 - because x is only mutated in one thread, the caches will "eventually" (explained soon) become consistent.

The real question is y. y could be:

  • 10
  • 11
  • 1

10 - This can occur because in thread 2, x = 1 is re-ordered above y = 10, causing thread 1's "y += 1" to execute, followed by thread 2 assigning 10 directly to y. It can also occur because the check for x == 1 happens first, so y += 1 is skipped, then thread 2 runs, causing y = 10. Two ways to achieve the same result!

11 - This occurs in the "normal" execution path - all things considered it's a miracle :)

1 - This is the most complex one - The y = 10 in thread 2 is applied, but the result is never sent to THREAD 1's cache, so x = 1 occurs and is made available to THREAD 1 (yes, this is possible to have different values made available to each cpu …). Then thread 1 executes y (0) += 1, which is then sent back trampling the value of y = 10 from thread 2.

If you want to know more about this and many other horrors of CPU execution, Paul McKenney is an expert in this field and has many talks at LCA and elsewhere on the topic. He can be found on Twitter and is super helpful if you have questions.

So how does a CPU work at all?

Obviously your system (likely a multicore system) works today - so it must be possible to write correct concurrent software. Caches are kept in sync via a protocol called MESI. This is a state machine describing the states of memory and cache, and how they can be synchronised. The states are:

  • Modified
  • Exclusive
  • Shared
  • Invalid

What's interesting about MESI is that each cache line maintains its own state machine of the memory addresses - it's not a global state machine. To coordinate, CPUs asynchronously message each other.

A CPU can be messaged via IPC (Inter-Processor-Communication) to say that another CPU wants to "claim" exclusive ownership of a memory address, or to indicate that it has changed the content of a memory address and you should discard your version. It's important to understand these messages are asynchronous. When a CPU modifies an address it does not immediately send the invalidation message to all other CPUs - and when a CPU receives the invalidation request it does not immediately act upon that message.

If CPUs did "synchronously" act on all these messages, they would spend so much time handling IPC traffic that they would never get anything done!

As a result, it must be possible to indicate to a CPU that it's time to send or acknowledge these invalidations in the cache line. This is where barriers, or the memory orderings come in.

  • Relaxed - no messages are sent or acknowledged.
  • Release - flush all pending invalidations to be sent to other CPUs.
  • Acquire - acknowledge and process all invalidation requests in my queue.
  • Acquire/Release - flush all outgoing invalidations, and process my incoming queue.
  • SeqCst - as AcqRel, but with some other guarantees around ordering that are beyond this discussion.

Understanding a Mutex

With this knowledge in place, we are finally in a position to understand the operations of a Mutex.

// Assume mutex: Mutex<usize> = Mutex::new(0);

THREAD 1                            THREAD 2

{                                   {
    let guard = mutex.lock()            let guard = mutex.lock()
    *guard += 1;                        println!("{}", *guard)
}                                   }

We know very clearly that this will print 1 or 0 - it's safe, no weird behaviours. Let's explain this case though:


    let guard = mutex.lock()
    // Acquire here!
    // All invalidation handled, guard is 0.
    // Compiler is told "all following code must stay after .lock()".
    *guard += 1;
    // content of usize is changed, invalid req is queue
// Release here!
// Guard goes out of scope, invalidation reqs sent to all CPU's
// Compiler told all preceding code must stay above this point.

            THREAD 2

                let guard = mutex.lock()
                // Acquire here!
                // All invalidations handled - previous cache of usize discarded
                // and read from THREAD 1 cache into S state.
                // Compiler is told "all following code must stay after .lock()".
            // Release here!
            // Guard goes out of scope, no invalidations sent due to
            // no modifications.
            // Compiler told all preceding code must stay above this point.

And there we have it! How barriers allow us to define an ordering in code and a CPU, to ensure our caches and compiler outputs are correct and consistent.

Benefits of Rust

A nice benefit of Rust, now that we know these MESI states, is that we can see the best way to run a system is to minimise the number of invalidations being sent and acknowledged, as these always cost CPU time. Rust values are always mutable or immutable, and these map almost directly to the E and S states of MESI. A mutable value is always exclusive to a single cache line, with no contention - and immutable values can be placed into the Shared state, allowing each CPU to maintain a cache copy for higher performance.

This is one of the reasons for Rust's amazing concurrency story: the memory in your program maps to cache states very clearly.

It's also why it's unsafe to mutate a pointer between two threads (a global) - because the caches of the two CPUs won't be coherent; you may not cause a crash, but one thread's work will absolutely be lost!

Finally, it's important to see that this is why using the correct concurrency primitives matter - it can highly influence your cache behaviour in your program and how that affects cache line contention and performance.

For comments and more, please feel free to email me!

Shameless Plug

I'm the author and maintainer of Conc Read - a concurrently readable data structure library for Rust. Check it out on crates.io!


What every programmer should know about memory (pdf)

Rust-nomicon - memory ordering

15 Jul 2019 2:00pm GMT

14 Jul 2019


Lennart Poettering: ASG! 2019 CfP Re-Opened!

<large>The All Systems Go! 2019 Call for Participation Re-Opened for ONE DAY!</large>

Due to popular request we have re-opened the Call for Participation (CFP) for All Systems Go! 2019 for one day. It will close again TODAY, on 15 July 2019, at midnight Central European Summer Time! If you missed the deadline so far, we'd like to invite you to submit your proposals for consideration to the CFP submission site quickly! (And yes, this is the last extension; there are not going to be any more.)

ASG image

All Systems Go! is everybody's favourite low-level Userspace Linux conference, taking place in Berlin, Germany in September 20-22, 2019.

For more information please visit our conference website!

14 Jul 2019 10:00pm GMT

Luya Tshimbalanga: HP, Linux and ACPI

The majority of HP hardware running Linux, and even Microsoft, reported an issue related to non-standard-compliant ACPI. The notable message below repeats at least three times during boot:

4.876549] ACPI BIOS Error (bug): AE_AML_BUFFER_LIMIT, Field [D128] at bit offset/length 128/1024 exceeds size of target Buffer (160 bits) (20190215/dsopcode-198)
[ 4.876555] ACPI Error: Aborting method \HWMC due to previous error (AE_AML_BUFFER_LIMIT) (20190215/psparse-529)
[ 4.876562] ACPI Error: Aborting method \_SB.WMID.WMAA due to previous error (AE_AML_BUFFER_LIMIT) (20190215/psparse-529)

The bug has been known for years, and the Linux kernel team is unable to fix it without help from the vendor, i.e. HP. Here is a compilation of reports:

The good news is that some of these errors seem harmless. Unfortunately, they expose the quirks-based approach vendors take to support Microsoft Windows, which is bad practice. In one case, this approach led to an issue even on the officially supported operating system on HP hardware.

Ideally, HP would provide a BIOS fix for the affected hardware and officially support the Linux ecosystem, much like its printing department does. Joining the Linux Vendor Firmware Service would be a good start; so far Dell is the leader in that department. American Megatrends Inc, the company developing the BIOS/UEFI used by HP, has made the process easier, so it is a matter of fully enabling the support.

14 Jul 2019 5:35pm GMT

13 Jul 2019


Mark J. Wielaard: bzip2 1.0.8

We are happy to announce the release of bzip2 1.0.8.

This is a fixup release because the CVE-2019-12900 fix in bzip2 1.0.7 was too strict and might have prevented decompression of some files that earlier bzip2 versions could decompress. And it contains a few more patches from various distros and forks.

bzip2 1.0.8 contains the following fixes:

Patches by Joshua Watt, Mark Wielaard, Phil Ross, Vincent Lefevre, Led and Kristýna Streitová.

This release also finalizes the move of bzip2 to a community maintained project at https://sourceware.org/bzip2/

Thanks to Bhargava Shastry bzip2 is now also part of oss-fuzz to catch fuzzing issues early and (hopefully not) often.

13 Jul 2019 7:38pm GMT

12 Jul 2019


Fedora Infrastructure Status: All systems go

Service 'Pagure' now has status: good: Everything seems to be working.

12 Jul 2019 10:20pm GMT

Fedora Infrastructure Status: There are scheduled downtimes in progress

Service 'Pagure' now has status: scheduled: scheduled outage: https://pagure.io/fedora-infrastructure/issue/7980

12 Jul 2019 8:59pm GMT

Fedora Community Blog: FPgM report: 2019-28

Fedora Program Manager weekly report on Fedora Project development and progress

Here's your report of what has happened in Fedora Program Management this week. I am on PTO the week of 15 July, so there will be no FPgM report or FPgM office hours next week.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


Upcoming meetings

Fedora 31




Submitted to FESCo

Approved by FESCo

The post FPgM report: 2019-28 appeared first on Fedora Community Blog.

12 Jul 2019 8:56pm GMT

Matthias Clasen: Settings, in a sandbox world

GNOME applications (and others) are commonly using the GSettings API for storing their application settings.

GSettings has many nice aspects:

And it has different backends, so it can be adapted to work transparently in many situations. One example for where this comes in handy is when we use a memory backend to avoid persisting any settings while running tests.

The GSettings backend that is typically used for normal operation is the DConf one.


DConf features include profiles, a stack of databases, a facility for locking down keys so they are not writable, and a single-writer design with a central service.

The DConf design is flexible and enterprisey - we have taken advantage of this when we created fleet commander to centrally manage application and desktop settings for large deployments.

But it is not a great fit for sandboxing, where we want to isolate applications from each other and from the host system. In DConf, all settings are stored in a single database, and apps are free to read and write any keys, not just their own - plenty of potential for mischief and accidents.

Most of the apps that are available as flatpaks today are poking a 'DConf hole' into their sandbox to allow the GSettings code to keep talking to the dconf daemon on the session bus, and mmap the dconf database.

Here is how the DConf hole looks in the flatpak metadata file:


[Context]
filesystems=xdg-run/dconf;~/.config/dconf:ro

[Session Bus Policy]
ca.desrt.dconf=talk


Ideally, we want sandboxed apps to only have access to their own settings, and maybe readonly access to a limited set of shared settings (for things like the current font, or accessibility settings). It would also be nice if uninstalling a sandboxed app did not leave traces behind, like leftover settings in some central database.

It might be possible to retrofit some of this into DConf. But when we looked, it did not seem easy, and would require reconsidering some of the central aspects of the DConf design. Instead of going down that road, we decided to take advantage of another GSettings backend that already exists, and stores settings in a keyfile.

Unsurprisingly, it is called the keyfile backend.


The keyfile backend was originally created to facilitate the migration from GConf to GSettings, and has been a bit neglected, but we've given it some love and attention, and it can now function as the default GSettings backend inside sandboxes.

It provides many of the isolation aspects we want: Apps can only read and write their own settings, and the settings are in a single file, in the same place as all the application data:

~/.var/app/$APP_ID/config/glib-2.0/settings/keyfile
One of the things we added to the keyfile backend is support for locks and overrides, so that fleet commander can keep working for apps that are in flatpaks.

For shared desktop-wide settings, there is a companion Settings portal, which provides readonly access to some global settings. It is used transparently by GTK and Qt for toolkit-level settings.

What does all this mean for flatpak apps?

If your application is not yet available as a flatpak, and you want to provide one, you don't have to do anything in particular. Things will just work. Don't poke a hole in your sandbox for DConf, and GSettings will use the keyfile backend without any extra work on your part.

If your flatpak is currently shipping with a DConf hole, you can keep doing that for now. When you are ready for it, you should close the DConf hole and add a migrate-path key to the metadata, which tells flatpak to migrate your existing DConf settings over to the keyfile.

Note that this is a one-time migration; it will only happen if the keyfile does not exist. The existing settings will be left in the DConf database, so if you need to do the migration again for whatever reason, you can simply remove the keyfile.

This is how the migrate-path key looks in the metadata file (with an example settings path):

[X-DConf]
migrate-path=/org/example/app/
Closing the DConf hole is what makes GSettings use the keyfile backend, and the migrate-path key tells flatpak to migrate settings from DConf - you need both parts for a seamless transition.

There were some recent fixes to the keyfile backend code, so you want to make sure that the runtime has GLib 2.60.6, for best results.

Happy flatpaking!

Update: One of the most recent fixes in the keyfile backend was to correct under what circumstances GSettings will choose it as the default backend. If you have problems where the wrong backend is chosen, as a short-term workaround, you can override the choice with the GSETTINGS_BACKEND environment variable.

Update 2: To add the migrate-path setting with flatpak-builder, use the following option (with an example settings path):

--metadata=X-DConf=migrate-path=/org/example/app/
12 Jul 2019 6:19pm GMT

Richard Hughes: GNOME Software in Fedora will no longer support snapd

In my slightly infamous email to fedora-devel I stated that I would turn off the snapd support in the gnome-software package for Fedora 31. A lot of people agreed with the technical reasons, but failed to understand the bigger picture and asked me to explain myself.

I wanted to tell a little, fictional, story:

In 2012 the ISO institute started working on a cross-vendor petrol reference vehicle to reduce the amount of R&D different companies had to do to build and sell a modern, and safe, saloon car.

Almost immediately, Mercedes joins ISO, and starts selling the ISO car. Fiat joins in 2013, Peugeot in 2014 and General Motors finally joins in 2015 and adds support for Diesel engines. BMW, who had been trying to maintain the previous chassis they designed on their own (sold as "BMW Kar Koncept"), finally adopts the ISO car also in 2015. BMW versions of the ISO car use BMW-specific transmission oil as it doesn't trust oil from the ISO consortium.

Mercedes looks to the future, and adds high-voltage battery support to the ISO reference car also in 2015, adding the required additional wiring and regenerative braking support. All the other members of the consortium can use their own high voltage batteries, or use the reference battery. The battery can be charged with electricity from any provider.

In 2016 BMW stops marketing the "ISO Car" like all the other vendors, and instead starts calling it "BMW Car". At about the same time BMW adds support for hydrogen engines to the reference vehicle. All the other vendors can ship the ISO car with a hydrogen engine, but all the hydrogen must be purchased from a BMW-certified dealer. If any vendor other than BMW uses the hydrogen engines, they can't use the BMW-specific heat shield which protects the fuel tank from exploding in the event of a collision.

In 2017 Mercedes adds traction control and power steering to the ISO reference car. It is enabled almost immediately and used by nearly all the vendors with no royalties and many customer lives are saved.

In 2018 BMW decides that actually producing vendor-specific oil for its cars is quite a lot of extra work, and tells all customers existing transmission oil has to be thrown away, but now all customers can get free oil from the ISO consortium. The ISO consortium distributes a lot more oil, but also has to deal with a lot more customer queries about transmission failures.

In 2019 BMW builds a special cut-down ISO car, but physically removes all the petrol and electric functionality from the frame. It is rebranded as "Kar by BMW". It then sends a private note to the chair of the ISO consortium that it's not going to be using ISO car in 2020, and that it's designing a completely new "Kar" that only supports hydrogen engines and does not have traction control or seatbelts. The explanation given was that BMW wanted a vehicle that was tailored specifically for hydrogen engines. Any BMW customers using petrol or electricity in their car must switch to hydrogen by 2020.

The BMW engineers that used to work on ISO Car have been shifted to work on Kar, although they have committed to also work on ISO Car if it's not too much extra work. BMW still wants to be officially part of the consortium and to be able to sell the ISO Car as an extra vehicle to the customer that provides all the engine types (as some customers don't like hydrogen engines), but doesn't want to be seen to support anything other than a hydrogen-based future. It's also unclear whether the extra vehicle sold to customers would be the "ISO Car" or the "BMW Car".

One ISO consortium member asks whether they should remove hydrogen engine support from the ISO car as they feel BMW is not playing fair. Another consortium member thinks that the extra functionality could just be disabled by default and any unused functionality should certainly be removed. All members of the consortium feel like BMW has pushed them too far. Mercedes stops selling the hydrogen ISO Car model, stating it's not safe without the heat shield, and because BMW isn't going to be supporting the ISO Car in 2020.

12 Jul 2019 12:51pm GMT

Fedora Magazine: What is Silverblue?

Fedora Silverblue is becoming more and more popular inside and outside the Fedora world. So based on feedback from the community, here are answers to some interesting questions about the project. If you have any other Silverblue-related questions, please leave them in the comments section and we will try to answer them in a future article.

What is Silverblue?

Silverblue is a codename for the new generation of the desktop operating system, previously known as Atomic Workstation. The operating system is delivered in images that are created by utilizing the rpm-ostree project. The main benefits of the system are speed, security, atomic updates and immutability.

What does "Silverblue" actually mean?

"Team Silverblue", or "Silverblue" for short, doesn't have any hidden meaning. It was chosen after roughly two months, when the project, previously known as Atomic Workstation, was rebranded. Over 150 words or word combinations were reviewed in the process. In the end Silverblue was chosen because it had an available domain as well as available social network accounts. One could think of it as a new take on Fedora's blue branding, and it can be used in phrases like "Go, Team Silverblue!" or "Want to join the team and improve Silverblue?".

What is ostree?

OSTree or libostree is a project that combines a "git-like" model for committing and downloading bootable filesystem trees, together with a layer to deploy them and manage the bootloader configuration. OSTree is used by rpm-ostree, a hybrid package/image based system that Silverblue uses. It atomically replicates a base OS and allows the user to "layer" traditional RPM packages on top of the base OS if needed.
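The "git-like" model boils down to content-addressed storage: file contents are stored once, keyed by their checksum, and a commit is essentially a mapping from paths to checksums, so unchanged files are shared between OS versions. A toy sketch of that idea (purely illustrative — the real libostree does far more, with metadata, hardlink checkouts, GPG signing, and so on):

```python
# Toy content-addressed object store, sketching the idea behind OSTree.
import hashlib

objects = {}  # checksum -> content (the object store)

def store(content: bytes) -> str:
    """Store content keyed by its checksum; identical content is stored once."""
    checksum = hashlib.sha256(content).hexdigest()
    objects[checksum] = content
    return checksum

# Two "commits" (filesystem trees) sharing an unchanged config file:
commit1 = {"/usr/bin/app": store(b"v1 binary"), "/etc/conf": store(b"config")}
commit2 = {"/usr/bin/app": store(b"v2 binary"), "/etc/conf": store(b"config")}

# Four path entries, but only three unique objects: the shared config
# file is deduplicated in the object store.
print(len(objects))  # prints: 3
```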

Why use Silverblue?

Because it allows you to concentrate on your work and not on the operating system you're running. It's more robust as the updates of the system are atomic. The only thing you need to do is to restart into the new image. Also, if there's anything wrong with the currently booted image, you can easily reboot/rollback to the previous working one, if available. If it isn't, you can download and boot any other image that was generated in the past, using the ostree command.

Another advantage is the possibility of an easy switch between branches (or, in an old context, Fedora releases). You can easily try the Rawhide or updates-testing branch and then return to the one that contains the current stable release. Also, you should consider Silverblue if you want to try something new and unusual.

What are the benefits of an immutable OS?

Having the root filesystem mounted read-only by default increases resilience against accidental damage as well as some types of malicious attack. The primary tool to upgrade or change the root filesystem is rpm-ostree.

Another benefit is robustness. It's nearly impossible for a regular user to get the OS into a state where it doesn't boot or doesn't work properly after accidentally or unintentionally removing a system library. Try to think about these kinds of experiences from your past, and imagine how Silverblue could have helped you there.

How does one manage applications and packages in Silverblue?

For graphical user interface applications, Flatpak is recommended, if the application is available as a flatpak. Users can choose between Flatpaks from Fedora, built from Fedora packages in Fedora-owned infrastructure, or from Flathub, which currently has a wider offering. Users can install them easily through GNOME Software, which already supports Fedora Silverblue.

One of the first things users find out is that there is no dnf preinstalled in the OS. The main reason is that it wouldn't work on Silverblue - part of its functionality has been replaced by the rpm-ostree command. Users can overlay traditional packages by using rpm-ostree install PACKAGE, but this should only be done when there is no other way. That's because every time new system images are pulled from the repository, the system image must be rebuilt to accommodate the layered packages, as well as any packages that were removed from the base OS or replaced with a different version.

Fedora Silverblue comes with the default set of GUI applications that are part of the base OS. The team is working on porting them to Flatpaks so they can be distributed that way. As a benefit, the base OS will become smaller and easier to maintain and test, and users can modify their default installation more easily. If you want to look at how it's done or help, take a look at the official documentation.

What is Toolbox?

Toolbox is a project to make containers easily consumable for regular users. It does that by using podman's rootless containers. Toolbox lets you easily and quickly create a container with a regular Fedora installation that you can play with or develop on, separated from your OS.

Is there any Silverblue roadmap?

Formally there isn't any, as we're focusing on problems we discover during our testing and from community feedback. We're currently using Fedora's Taiga to do our planning.

What's the release life cycle of the Silverblue?

It's the same as regular Fedora Workstation. A new release comes every 6 months and is supported for 13 months. The team plans to release updates for the OS bi-weekly (or longer) instead of daily as they currently do. That way the updates can be more thoroughly tested by QA and community volunteers before they are sent to the rest of the users.

What is the future of the immutable OS?

From our point of view the future of the desktop involves the immutable OS. It's safest for the user, and Android, ChromeOS, and the latest macOS, Catalina, all use this method under the hood. For the Linux desktop there are still problems with some third-party software that expects to write to the OS. HP printer drivers are a good example.

Another issue is how parts of the system are distributed and installed. Fonts are a good example. Currently in Fedora they're distributed in RPM packages. If you want to use them, you have to overlay them and then restart to the newly created image that contains them.

What is the future of standard Workstation?

There is a possibility that Silverblue will replace the regular Workstation. But there's still a long way to go for Silverblue to provide the same functionality and user experience as the Workstation. In the meantime, both desktop offerings will be delivered at the same time.

How does Atomic Workstation or Fedora CoreOS relate to any of this?

Atomic Workstation was the name of the project before it was renamed to Fedora Silverblue.

Fedora CoreOS is a different, but similar project. It shares some fundamental technologies with Silverblue, such as rpm-ostree, toolbox and others. Nevertheless, CoreOS is a more minimal, container-focused and automatically updating OS.

12 Jul 2019 8:00am GMT

Porfirio A. Páiz - porfiriopaiz: repos

Software Repositories

Having solved the problems of getting connected to the Internet and launching a terminal, you might now want to install the software you use.

The software has to come from somewhere; on Fedora these sources are called software repositories. Below I detail the ones I enable on all my Fedora installs, apart from the official ones that come preinstalled and enabled by default.

Open a terminal and enable some of these.


RPM Fusion is a repository of add-on packages for Fedora and EL+EPEL maintained by a group of volunteers. RPM Fusion is not a standalone repository, but an extension of Fedora. RPM Fusion distributes packages that have been deemed unacceptable to Fedora.

More about RPMFusion on its official website: https://rpmfusion.org/FAQ

su -c 'dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'

Fedora Workstation Repositories

From the Fedora wiki page corresponding to Fedora Workstation Repositories:

The Fedora community strongly promotes free and open source resources. The Fedora Workstation, in its out of the box configuration, therefore, only includes free and open source software. To make the Fedora Workstation more usable, we've made it possible to easily install a curated set of third party (external) sources that supply software not included in Fedora via an additional package.

Read more at: https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories

Please note that this will only install the *.repo files; it will not enable the provided repos:

su -c 'dnf install fedora-workstation-repositories'

Fedora Rawhide's Repositories

Rawhide is the name given to the current development version of Fedora. It consists of a package repository called "rawhide" and contains the latest build of all Fedora packages updated on a daily basis. Each day, an attempt is made to create a full set of 'deliverables' (installation images and so on), and all that compose successfully are included in the Rawhide tree for that day.

It is possible to install its repository files and enable them temporarily for just a single transaction - say, to install or upgrade a single package and its dependencies, perhaps to try a new version that is not yet available in any of the stable, maintained versions of Fedora.

This is useful when a bug has been fixed in Rawhide but the fix has not yet landed in the stable branch of Fedora, and you cannot wait for it.

Again, this will just install the *.repo file under /etc/yum.repos.d/; it will not enable the repository. Later we will see how to manage these repositories, including disabling and enabling them for just one transaction.

More on Rawhide on its wiki page: https://fedoraproject.org/wiki/Releases/Rawhide

su -c 'dnf install fedora-repos-rawhide'


Copr is an easy-to-use automatic build system providing a package repository as its output.

Here are some of the repos I rely on for some packages:


Remarkable is a free, fully featured Markdown editor.

su -c 'dnf -y copr enable neteler/remarkable'


Gajim is a Jabber client written in PyGTK. It currently provides support for the OMEMO encryption method, which I use. This repo provides tools and dependencies not available in the official Fedora repo.

su -c 'dnf -y copr enable philfry/gajim'


QGIS is a user friendly Open Source Geographic Information System.

su -c 'dnf -y copr enable dani/qgis'


This provides the .NET CLI tools and runtime for Fedora.

su -c 'dnf copr enable dotnet-sig/dotnet'


A few weeks ago I decided to give VSCodium, a fork of VSCode, a try. Here is how to enable its repo on Fedora.

First, import its GPG key so you can verify the packages retrieved from the repo:

su -c 'rpm --import https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg'

Now create the vscodium.repo file:

su -c "tee -a /etc/yum.repos.d/vscodium.repo << 'EOF'
[gitlab.com_paulcarroty_vscodium_repo]
name=gitlab.com_paulcarroty_vscodium_repo
baseurl=https://paulcarroty.gitlab.io/vscodium-deb-rpm-repo/repos/rpms/
enabled=1
gpgcheck=1
gpgkey=https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg
EOF"
Now check that all the repos have been successfully installed, and that some of them are enabled, by refreshing the dnf metadata.

su -c 'dnf check-update'

That's all. In the next post we will see how to enable some of these repos, how to temporarily disable and enable others for just a single transaction, how to install or upgrade certain packages from a specific repo, and many other repo administration tasks.

12 Jul 2019 5:32am GMT

Fedora Magazine: Firefox 68 available now in Fedora

Earlier this week, Mozilla released version 68 of the Firefox web browser. Firefox is the default web browser in Fedora, and this update is now available in the official Fedora repositories.

This Firefox release provides a range of bug fixes and enhancements.

Updating Firefox in Fedora

Firefox 68 has already been pushed to the stable Fedora repositories. The security fix will be applied to your system with your next update. You can also update the firefox package only by running the following command:

$ sudo dnf update --refresh firefox

This command requires you to have sudo set up on your system. Additionally, note that not every Fedora mirror syncs at the same rate. Community sites graciously donate space and bandwidth to these mirrors to carry Fedora content. You may need to try again later if your selected mirror is still awaiting the latest update.

12 Jul 2019 1:33am GMT