12 Nov 2018

feedFedora People

Richard Hughes: More fun with libxmlb

A few days ago I cut the 0.1.4 release of libxmlb, which is significant because it includes the last three features I needed in gnome-software to achieve the same search results as appstream-glib.

The first is something most users of database libraries will be familiar with: Bound variables. The idea is you prepare a query which is parsed into opcodes, and then at a later time you assign one of the ? opcode values to an actual integer or string. This is much faster as you do not have to re-parse the predicate, and also means you avoid failing in incomprehensible ways if the user searches for nonsense like ]@attr. Borrowing from SQL, the syntax should be familiar:

g_autoptr(XbQuery) query = xb_query_new (silo, "components/component/id[text()=?]/..", &error);
xb_query_bind_str (query, 0, "gimp.desktop", &error);

The second feature makes the caller jump through some hoops, but hoops that make things faster: indexed queries. As might be apparent to some, libxmlb stores all the text in a big deduplicated string table after the tree structure is defined. That means if you do <component component="component">component</component> then we store that string only once! When we actually set up an object to check a specific node for a predicate (for instance, text()='fubar') we do strcmp("fubar", "component") internally, which in most cases is very fast…

Unless you do it 10 million times…

Using indexed strings tells the XbMachine processing the predicate to first check if fubar exists in the string table, and if it doesn't, the predicate can't possibly match and is skipped. If it does exist, we know the integer position in the string table, and so when we compare the strings we can just check two uint32_t values, which is quite a lot faster, especially on ARM for some reason. In the case of fwupd, it is searching for a specific GUID when returning hardware results. Using an indexed query takes the per-device query time from 3.17ms to about 0.33ms - which if you have a large number of connected updatable devices makes a big difference to the user experience. As building and using the index isn't free and requires extra code, it is probably only useful in a handful of cases. In case you do need this feature, this is the code you would use:

xb_silo_query_build_index (silo, "component/id", NULL, &error); // the cdata
xb_silo_query_build_index (silo, "component", "type", &error); // the @type attr
g_autoptr(XbNode) n = xb_silo_query_first (silo, "component/id[text()=$'test.firmware']", &error);

The indexed lookup is denoted by $'' rather than the normal pair of single quotes. If there is something more standard to denote this kind of thing, please let me know and I'll switch to that instead.
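As an aside, the indexed-string idea can be sketched in a few lines of plain Python (an illustration only, not libxmlb's internals): every string is interned into a deduplicated table, a needle missing from the table can never match, and equality checks become integer comparisons.

```python
# Sketch of the indexed-string idea: strings are interned into a
# deduplicated table, so a predicate comparison becomes an integer
# comparison, and a needle absent from the table can never match.
table = {}

def intern(s):
    """Return the stable integer position of s in the string table."""
    return table.setdefault(s, len(table))

doc_ids = [intern(x) for x in ["gimp.desktop", "inkscape.desktop", "gimp.desktop"]]

# an indexed lookup: the needle was never interned, so the predicate
# cannot possibly match and the query short-circuits
needle = table.get("test.firmware")
assert needle is None

# otherwise we compare two integers instead of calling strcmp()
assert table.get("gimp.desktop") == doc_ids[0] == doc_ids[2]
```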

The third feature is stemming, which means you can search for "gaming mouse" and still get results that mention games, game and Gaming. This is also how you can search for words like Kongreßstraße which matches kongressstrasse. In an ideal world stemming would be computationally free, but when we are comparing millions of records each call to libstemmer adds up. Adding the stem() XPath operator took a few minutes, but making it usable took up a whole weekend.

The query we wanted to run would be of the form id[text()~=stem(?)], but the stem() would be called millions of times on the very same string for each comparison. To fix this, and to make other XPath operators faster, I added an opcode rewriting optimisation pass to the XbMachine parser. This means if you call lower-case(text())==lower-case('GIMP.DESKTOP') we only call the UTF-8 strlower function N+1 times, rather than 2N times. For lower-case() the performance increase is slight, but for stem() it actually makes the feature usable in gnome-software. The opcode rewriting optimisation pass is kinda dumb in how it works ("let's try all combinations!"), but works with all of the registered methods, and makes all existing queries faster for almost free.
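The N+1 versus 2N effect can be sketched in plain Python (an illustration of the idea, not libxmlb's machinery): the rewriting pass evaluates the constant operand once per query instead of once per record.

```python
# Language-neutral sketch of the opcode-rewriting win: hoisting the
# constant operand out of the loop turns 2N calls into N+1.
calls = 0

def lower(s):
    global calls
    calls += 1
    return s.lower()

records = ["GIMP.desktop", "Inkscape.desktop"] * 500

# naive evaluation: lower-case() runs on both operands for every record (2N)
calls = 0
naive = [r for r in records if lower(r) == lower("GIMP.DESKTOP")]
assert calls == 2 * len(records)

# after rewriting: the constant side is folded once up front (N+1)
calls = 0
folded = lower("GIMP.DESKTOP")
rewritten = [r for r in records if lower(r) == folded]
assert calls == len(records) + 1
assert naive == rewritten
```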

One common question I've had is if libxmlb is supposed to obsolete appstream-glib, and the answer is "it depends". If you're creating or building AppStream metadata, or performing any AppStream-specific validation, then stick to the appstream-glib or appstream-builder libraries. If you just want to read AppStream metadata you can use either, but if you can stomach a binary blob of rewritten metadata stored somewhere, libxmlb is going to be a couple of orders of magnitude faster and use a ton less memory.

If you're thinking of using libxmlb in your project, send me an email and I'm happy to add more documentation where required. At the moment libxmlb does everything I need for fwupd and gnome-software, and so apart from bugfixes I think it's basically "done", which should make my manager somewhat happier. Comments welcome.

12 Nov 2018 3:49pm GMT

Dan Walsh: Container Labeling

An issue was recently raised on libpod, the github repo for Podman.

"container_t isn't allowed to access container_var_lib_t"

Container policy is defined in the container-selinux package. By default containers run with the SELinux type "container_t", whether the container is launched by just about any container engine: podman, cri-o, docker, buildah, moby. Most people who use SELinux with containers run by other runtimes, like runc or systemd-nspawn, use it as well.

By default container_t is allowed to read/execute content labeled under /usr, and to read generically labeled content in the host's /etc directory (etc_t).

The default label for content in /var/lib/docker and /var/lib/containers is container_var_lib_t. This is not accessible by containers (container_t), whether they are running under podman, cri-o, docker, buildah and so on. We specifically do not want containers to be able to read this content, because content that uses block devices like devicemapper and btrfs (I believe) is labeled container_var_lib_t when the containers are not running.

For overlay content we need to allow containers to read/execute the content, so we use the type container_share_t for it. container_t is allowed to read/execute container_share_t files, but not write/modify them.

Content under /var/lib/containers/overlay* and /var/lib/docker/overlay* is labeled container_share_t by default.

$ grep overlay /etc/selinux/targeted/contexts/files/file_contexts
/var/lib/docker/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/docker/overlay2(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/overlay2(/.*)? system_u:object_r:container_share_t:s0
/var/lib/docker-latest/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/docker-latest/overlay2(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/storage/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/storage/overlay2(/.*)? system_u:object_r:container_share_t:s0

The label container_file_t is the only type that is writeable by containers. container_file_t is used when the overlay mount is created for the upper directory of an image. It is also used for content mounted from devicemapper and btrfs.

If you volume mount a directory into a container and add :z or :Z, the container engine relabels the content under the volume to container_file_t.

Failure to read/write/execute content labeled container_var_lib_t is expected.

When I see this type of AVC, I expect that it involves either a volume mounted in from /var/lib/containers or /var/lib/docker, or mislabeled content under an overlay directory like /var/lib/containers/storage/overlay.


To solve these, I usually recommend running

restorecon -R -v /var/lib/containers
restorecon -R -v /var/lib/docker

Or, if it is a volume mount, to use the :z or :Z option.
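For example (a hypothetical command; the host path and image are placeholders), the :Z suffix on a podman volume mount asks the engine to relabel the content as container_file_t before the container starts:

```shell
$ podman run -v /srv/mydata:/data:Z fedora ls /data
```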

12 Nov 2018 2:01pm GMT

Kiwi TCMS: Kiwi TCMS 6.2.1

We're happy to announce Kiwi TCMS version 6.2.1! This is a small release that includes some improvements and bug-fixes. You can explore everything at https://demo.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  24338088bf46    956.8 MB
kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

Changes since Kiwi TCMS 6.2


Bug fixes

  • Fix InvalidQuery, field TestCase.default_tester cannot be both deferred and traversed using select_related at the same time. References Issue #346


  • Pylint fixes (Ivaylo Ivanov)
  • Remove JavaScript and Python functions in favor of existing JSON-RPC
  • Remove vendored-in js/lib/jquery.dataTables.js which is now replaced by the npm package datatables.net (required by Patternfly)



  • https://demo.kiwitcms.org is using a new SSL certificate with serial number 46:78:80:EA:80:A4:FC:65:17:E4:59:EC:1D:C2:27:47
  • Version 6.2.1 has been published to PyPI to facilitate people who want to deploy Kiwi TCMS on Heroku. Important: the PyPI package is provided as a convenience for those who know what they are doing. Valid bugs and issues will be dealt with accordingly. As we do not deploy from a PyPI tarball we ask you to provide all the necessary details when reporting issues! If you have no idea what all of this means then stick to the official Docker images!

How to upgrade

If you are using Kiwi TCMS as a Docker container then:

cd Kiwi/
git pull
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Don't forget to backup before upgrade!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:<next_upgrade_version>
edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

12 Nov 2018 12:45pm GMT

Fedora Magazine: Model the brain with the NEST simulator on Fedora

The latest version of the NEST simulator is now available in Fedora as part of the NeuroFedora initiative. NEST is a standard tool used by computational neuroscientists to build large-scale computer models of the brain that are needed to investigate, among other things, how the brain processes information.

The NEST Eco-system

NEST offers a wide range of ready-to-use models, excellent documentation, and is supported by a thriving Open Source development community.

It provides a simple Python interface which makes it really easy to use.
In addition, it is designed so it can be run on both laptops and super computing clusters. That way it can be used to make models that range from a few neurons to those that include millions of neurons. For reference, the human brain contains 86 billion neurons on average!

Such large simulations are parallelized using the Message Passing Interface (MPI), and NEST must be built separately to support it.
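As an illustration (a hypothetical invocation; the script name and process count are placeholders), an MPI-enabled NEST simulation is typically launched through mpirun once the matching variant is installed and the MPI module is loaded:

```shell
$ mpirun -np 4 python3 my_simulation.py
```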

Install NEST

To make things easier for users, we provide several variants of NEST. For example, to install a version that doesn't use MPI for use on a workstation/laptop, one can use:

$ sudo dnf install nest python3-nest

Install NEST with MPI support

Fedora includes two implementations of MPI: MPICH and OpenMPI, and NEST has been built for both. For the MPICH version, one simply installs the mpich variants:

$ sudo dnf install nest-mpich python3-nest-mpich

For OpenMPI, the commands are similar:

$ sudo dnf install nest-openmpi python3-nest-openmpi

Finally the following command loads the MPI environment modules in order to activate the correct NEST variant:

$ module load mpi/mpich-x86_64  # mpi/openmpi-x86_64 for openmpi

Next, NEST uses some environment variables, which can be set up by sourcing the nest_vars.sh file:

$ which nest_vars.sh
$ source /usr/lib64/mpich/bin/nest_vars.sh

Using NEST

After the installation and configuration of NEST, you can start using it inside a Python shell.

$ ipython3
In [1]: import nest
[INFO] [2018.10.16 12:27:43 /builddir/build/BUILD/nest-simulator-2.16.0-mpich/nestkernel/rng_manager.cpp:238 @ Network::create_rngs_] : Creating default RNGs
[INFO] [2018.10.16 12:27:43 /builddir/build/BUILD/nest-simulator-2.16.0-mpich/nestkernel/rng_manager.cpp:284 @ Network::create_grng_] : Creating new default global RNG

Oct 16 12:27:43 SLIStartup [Error]:
NEST_DOC_DIR is not usable:

Oct 16 12:27:43 SLIStartup [Error]:
Directory '/usr/lib64/mpich/share/doc/nest' does not exist.

Oct 16 12:27:43 SLIStartup [Error]:
I'm using the default: /usr/lib64/mpich/share/doc/nest

-- N E S T --
Copyright (C) 2004 The NEST Initiative

Version: v2.16.0
Built: Oct 5 2018 20:22:17

This program is provided AS IS and comes with
NO WARRANTY. See the file LICENSE for details.

Problems or suggestions?
Visit http://www.nest-simulator.org

Type 'nest.help()' to find out more about NEST.

In [2]: nest.version()
Out[2]: 'NEST 2.16.0'

NEST documentation is provided in the nest-doc package, and we also provide a README.fedora file in all NEST packages (nest, nest-mpich, nest-openmpi) with detailed instructions on using the different variants. The same file can also be found here.

If you run into issues, find a bug, or just want to chat, you can find the NeuroFedora SIG here.

12 Nov 2018 8:00am GMT

Michael Catanzaro: The GNOME (and WebKitGTK+) Networking Stack

WebKit currently has four network backends:

One guess which of those we're going to be talking about in this post. Yeah, of course, libsoup! If you're not familiar with libsoup, it's the GNOME HTTP library. Why is it called libsoup? Because before it was an HTTP library, it was a SOAP library. And apparently somebody thought that when Mexican people say "soap," it often sounds like "soup," and also thought that this was somehow both funny and a good basis for naming a software library. You can't make this stuff up.

Anyway, libsoup is built on top of GIO's sockets APIs. Did you know that GIO has Object wrappers for BSD sockets? Well it does. If you fancy lower-level APIs, create a GSocket and have a field day with it. Want something a bit more convenient? Use GSocketClient to create a GSocketConnection connected to a GNetworkAddress. Pretty straightforward. Everything parallels normal BSD sockets, but the API is nice and modern and GObject, and that's really all there is to know about it. So when you point WebKitGTK+ at an HTTP address, libsoup is using those APIs behind the scenes to handle connection establishment. (We're glossing over details like "actually implementing HTTP" here. Trust me, libsoup does that too.)

Things get more fun when you want to load an HTTPS address, since we have to add TLS to the picture, and we can't have TLS code in GIO or GLib due to this little thing called "copyright law." See, there are basically three major libraries used to implement TLS on Linux, and they all have problems:

So naturally GLib uses NSS to avoid the license issues of OpenSSL and GnuTLS, right?

Haha no, it uses a dynamically-loadable extension point system to allow you to pick your choice of OpenSSL or GnuTLS! (Support for NSS was started but never finished.) This is OK because embedded systems vendors don't use GPL applications and have no problems with OpenSSL, while desktop Linux users don't produce tivoized embedded systems and have no problems with LGPLv3. So if you're using desktop Linux and point WebKitGTK+ at an HTTPS address, then GLib is going to load a GIO extension point called glib-networking, which implements all of GIO's TLS APIs - notably GTlsConnection and GTlsCertificate - using GnuTLS. But if you're building an embedded system, you simply don't build or install glib-networking, and instead build a different GIO extension point called glib-openssl, and libsoup will create GTlsConnection and GTlsCertificate objects based on OpenSSL instead. Nice! And if you're Centricular and you're building GStreamer for Windows, you can use yet another GIO extension point, glib-schannel, for your native Windows TLS goodness, all hidden behind GTlsConnection so that GStreamer (or whatever application you're writing) doesn't have to know about SChannel or OpenSSL or GnuTLS or any of that sad complexity.

Now you know why the TLS extension point system exists in GIO. Software licenses! And you should not be surprised to learn that direct use of any of these crypto libraries is banned in libsoup and WebKit: we have to cater to both embedded system developers and to GPL-licensed applications. All TLS library use is hidden behind the GTlsConnection API, which is really quite nice to use because it inherits from GIOStream. You ask for a TLS connection, have it handed to you, and then read and write to it without having to deal with any of the crypto details.

As a recap, the layering here is: WebKit -> libsoup -> GIO (GLib) -> glib-networking (or glib-openssl or glib-schannel).

So when Epiphany fails to load a webpage, and you're looking at a TLS-related error, glib-networking is probably to blame. If it's an HTTP-related error, the fault most likely lies in libsoup. Same for any other GNOME applications that are having connectivity troubles: they all use the same network stack. And there you have it!

P.S. The glib-openssl maintainers are helping merge glib-openssl into glib-networking, such that glib-networking will offer a choice of GnuTLS or OpenSSL, obsoleting glib-openssl. This is still a work in progress. glib-schannel will be next!

P.P.S. libcurl also gives you multiple choices of TLS backend, but makes you choose at build time, whereas with GIO extension points it's actually possible to choose at runtime from the selection of installed extension points. The libcurl approach is fine in theory, but creates some weird problems, e.g. different backends with different bugs are used on different distributions. On Fedora, it used to use NSS, but now uses OpenSSL, which is fine for Fedora, but would be a license problem elsewhere. Debian actually builds several different backends and gives you a choice, unlike everywhere else. I digress.

12 Nov 2018 4:51am GMT

Open Source Security Podcast: Episode 122 - What will Apple's T2 chip mean for the rest of us?

Josh and Kurt talk about Apple's new T2 security chip. It's not open source but we expect it to change the security landscape in the coming years.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/7523042/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

Comment on Twitter with the #osspodcast hashtag

12 Nov 2018 4:01am GMT

Nikos Roussos: Bypassing Censorship

free speech conditions

This post is an attempt to document technical methods and tools for circumventing the censorship imposed by various state committees, which Greek internet providers are obliged to enforce. About two years ago I wrote a post on the danger of creating a censorship committee in the name of protecting intellectual property rights. What I stressed back then:

What the law describes is the creation of a committee with absolute power to order providers to block a website from the Greek internet, without any court decision having preceded it.

What has happened since is that the law was eventually passed, with minimal modifications to the contentious points. And what modifications there were were arguably for the worse, since the committee that was finally established has fewer members, drawn only from representatives of organisations (ΟΠΙ, ΕΕΤΤ, ΑΠΔΠΧ), instead of the five-member body with judicial participation originally foreseen. This committee (ΕΔΠΠΙ) practically follows in the footsteps of the gaming committee (ΕΕΠΠ), which also operates as a censorship committee in a different field.

The censorship trade

An interesting detail which, as far as I remember, was also missing from the draft law under public consultation, is that requests to the committee cost money. The rate is 372€ per domain. But, as the price list below shows, they offer better rates if a request includes more domains.

edppi pricing

I may disagree with how this committee operates and with the substance of what it does, but it is telling that it is clearly not aimed at individual independent artists, who probably cannot afford to spend the resources involved. These amounts are pocket change for collective management organisations, but at the same time prohibitive for most independent artists. It is quite clear for whom, and by whom, this piece of legislation was designed.


A few days ago the committee published its first 3 decisions. The decisions were taken following a request by the Society for the Protection of Audiovisual Works (ΕΠΟΕ), a collective rights protection organisation which includes some well-known names from the film industry (Odeon, Seven, Feelgood, Greek Film Centre). The third decision, which also attracted the most publicity, lists quite a few domain names (38) and orders internet providers to cut off access to them within 48 hours.

Technical Circumvention

The motivation for this post was to briefly mention some simple technical workarounds.

I have said before that although the issue is essentially political, that does not mean we should not use the technological means at our disposal alongside any political action.


The simplest way to bypass website-blocking mechanisms, without making many settings or changes on our computer, is to use the Tor Browser. It is a modified Firefox with which we can browse the web normally, over the Tor network. This way, although we obviously still use parts of our provider's infrastructure, we bypass those parts (e.g. DNS) that are used to block specific traffic.

As a bonus, we gain all the benefits of the anonymous browsing the Tor network offers. If you want to learn more about it, and given how much "dark web" mythology circulates in the Greek media, watch kargig's talk.

tor browser


Given that the censorship orders target specific domains, the way providers block them is via DNS. Without getting into too much technical detail, DNS is how our browser or computer learns which IP address corresponds to which domain, so that we can visit it.

The router our provider gave us at home comes preconfigured with the provider's DNS servers, so we rely on the provider to give us that information. It is therefore very simple for it to give us wrong information in order to prevent us from reaching a website.

The technical workaround is simply to change the DNS servers we use and pick ones that do not censor. There are many ways to do this, depending on what suits us best.


The simplest way is to do it in our browser. Firefox, from its latest version (63) onward, lets us use a relatively new technology (DNS over HTTPS) that encrypts DNS traffic. Since it is still being tested, we have to enable it manually, in 4 simple steps:

  1. Type the following url in the address bar: about:config
  2. After clicking the confirmation button, a list of all Firefox configuration parameters opens. In the search field type: trr.
  3. Find the option network.trr.mode, double-click its value (which is probably 0) and change it to 2.
  4. Confirm that the option network.trr.uri contains the relevant Mozilla url: https://mozilla.cloudflare-dns.com/dns-query.

This way Firefox will use the corresponding Cloudflare service, but you can change the url to use any other provider you trust and know supports this technology. More technical details in the relevant Mozilla blog post.
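For those who prefer a file-based setup, the same two preferences can also be put in Firefox's user.js (a sketch; the values are the ones given above, and mode 2 means DNS over HTTPS with fallback to regular DNS):

```javascript
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```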

doh firefox


One more solution is to change DNS servers directly on your router. Most newer routers support this. It is convenient when we want to use alternative DNS providers on many devices in our home.

If you have never logged into your router's administration interface, this is a good opportunity to learn how, and to change the default passwords while you are at it. To log in, type your router's IP address into your browser, which usually is or . It will ask for your login credentials. If you haven't changed them, they are printed on the bottom of the router.

Once logged in, look for the relevant setting, usually under the LAN settings. For example, the default option on my router is called "ISP DNS" and lives in the "DHCP Server" section. If we change that setting, we can enter the IP addresses of some third-party service, e.g. Cloudflare ( or OpenNIC. Personally I would avoid providers that generally do not respect their users' privacy in their services and products, though these are the usual suggestions in various forums (Google, OpenDNS). The choice is a personal one, as it involves a level of trust.

router dns

Operating system

You can also change DNS provider at the operating system level. The settings vary depending on the kind and version of operating system you use, but the most direct way is usually through the settings of the wireless or wired connection you are using.

os dns

Freedom of speech

Freedom of speech on the internet is just as important as freedom of speech in our offline lives. The "digital world" is just as real as the physical one. When you live in a country that deploys censorship mechanisms on the internet, you live in an oppressive society. There is no separation between the two.

What is at stake in this case is very simple. Which right is more important: freedom of speech, or the financial interest of an artist? Given that we are talking about committees that decide without any judicial or other accountability, we can see how the political establishment answers that question.

The issue of copyright and of artists making a living is quite important, but it will not be solved sloppily, by adopting authoritarian methods with committees that operate outside any process of interpreting and enforcing the law. Methods which, as shown above, are not even particularly effective.

Learn how you can use technology to bypass censorship and/or preserve your anonymity. Meetups and workshops on these topics take place regularly. If you participate in a community or organisation, we would be happy to organise a workshop there too.

Technical appendix

The blocking of the sites listed in the decision has already begun. After all, providers had only 48 hours to comply.

As an example, a Cosmote DNS server:

dig +short @ gamatotv.me

If you are wondering what that returned IP is:

dig +short -x

So the committee also has the technical ability, should it wish, to log the IP addresses of the users who try to visit these websites.

Let's try asking Cloudflare:

dig +short @ gamatotv.me

*Comments and reactions on Mastodon, Diaspora, Twitter.

12 Nov 2018 2:21am GMT

10 Nov 2018

feedFedora People

Ankur Sinha "FranciscoD": NeuroFedora update: week 45

NeuroFedora logo!

NeuroFedora logo by Terezahl from the Fedora Design Team

In week 45:

All new packages must go through Fedora's QA (testing) process before being made available to end users in the repositories. You can help test these packages following the instructions here.

A lot of the software we worked on this week was related to neuro-imaging, and fortunately, a lot of it was Python based which is usually quite simple to build. The coming week, though, I intend to work on NEURON. Unfortunately, NEURON isn't the easiest to build:

There is a lot of software available in NeuroFedora already. You can see the complete list here on Fedora SCM. Software that is currently being worked on is listed on our Pagure project instance. If you use software that is not on our list, please suggest it to us using the suggestion form.

Feedback is always welcome. You can get in touch with us here.

The Fedora community: enabling Open Science

While the NeuroFedora SIG is actively working on these packages, it would not be possible without our friends in the Fedora community that have helped with the various stages of the package maintenance pipeline.

We're grateful to the various upstreams that we're bothering with issues, and everyone in the Fedora community (including people I may have missed) for enabling us to further Open Science via Fedora.

10 Nov 2018 11:20am GMT

09 Nov 2018

feedFedora People

Fedora Community Blog: FPgM report: 2018-45

Fedora Program Manager weekly report on Fedora Project development and progress

Here's your report of what has happened in Fedora Program Management this week.

I've set up weekly office hours in #fedora-meeting-1 (note a change in channel). Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.


Help wanted

Upcoming meetings

Fedora 29 Status

Fedora 30 Status

Fedora 30 includes a Change that will cause ambiguous python shebangs to error. A list of failing builds is available on Taskotron.

Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.



Submitted to FESCo

The post FPgM report: 2018-45 appeared first on Fedora Community Blog.

09 Nov 2018 9:41pm GMT

Tomas Tomecek: Road to ansible-bender 0.2.0

I'm pleased to announce that ansible-bender is now available in version 0.2.0.

I would like to share a story with you how I used ansible-bender to release the 0.2.0 version.

In our team at Red Hat, we developed a bot to aid us with releasing our upstream projects: release-bot. The bot is able to release to Github, PyPI and Fedora. All you need to do is to create a new Github issue and that's it - the bot would take care of the rest.

Naturally, what I wanted to do was to deploy the bot so that it would release ansible-bender. @kosciCZ, our intern, did a very good job on how people are meant to use and deploy release-bot - using s2i. You just create a new git repo, put your configuration in it and do an s2i build to get an image with your custom release-bot. Sadly I'm terribly OCD and I need to build everything from scratch myself: so I went ahead and built a container image with release-bot on my own.

Did I start by writing a dockerfile?

Nope! I wrote a playbook and built the image using ab itself.

The playbook, take 1

- name: this playbook is meant to populate a container image
  hosts: all
  vars:
    bot_installation: 'git+https://github.com/user-cont/release-bot.git'
    # bot_installation: 'release-bot'
  tasks:
  - name: install required packages
    dnf:
      name: ['python3-pip', 'git', 'python3-twine', 'python3-pyyaml', 'twine']
      state: present
  - name: install release bot
    pip:
      name: '{{ bot_installation }}'
      state: present
  - name: ensure release-bot is installed
    command: release-bot --help
  - name: copy config file
    copy:
      src: conf.yaml
      dest: /conf.yaml
  - name: copy pypirc
    copy:
      src: pypirc
      dest: ~/.pypirc
  - name: copy github app cert
    copy:
      src: bot-certificate.pem
      dest: /bot-certificate.pem
Let's talk about the playbook briefly:

The playbook, take 2

It actually took me some time to arrive at the playbook you see above. In this release of ansible-bender I added a few features which help you with the development of new playbooks:

  1. You can stop loading from cache in any task.

  2. You can disable creation of new layers in any task.

  3. If the build fails, the image is committed for your later inspection.

So how do these help?

The first feature allows you to load expensive actions from cache (such as package installations) and at the same time keep executing tasks even when a task's content is the same, such as cloning git repositories. That's exactly what I was doing: I kept changing the content of a git branch while the task itself stayed the same. When I faced this situation with dockerfiles, I kept adding no-op operations to the RUN instructions which I wanted to re-execute. With ansible-bender, you only need to add the tag no-cache.

When you disable caching for a part of your playbook, you usually don't need layering any more, right? This speeds up the build a bit, since ab doesn't need to commit and create new containers for every task. Just add the tag stop-layering to a task and it's done.

The third feature is self-explanatory. If you perform an expensive action and it fails, it may be desirable to hop inside the container and see what went wrong: performing the action again sounds like a waste of time and resources.

Time to improve our playbook then.

- name: this playbook is meant to populate a container image
  hosts: all
  vars:
    bot_installation: 'git+https://github.com/user-cont/release-bot.git'
    # bot_installation: 'release-bot'
  tasks:
  - name: install required packages
    dnf:
      name: ['python3-pip', 'git', 'python3-twine', 'python3-pyyaml', 'twine']
      state: present
  - name: install release bot
    pip:
      name: '{{ bot_installation }}'
      state: present
    tags:
    - 'no-cache'
  - name: ensure release-bot is installed
    command: release-bot --help
  - name: copy config file
    copy:
      src: conf.yaml
      dest: /conf.yaml
    tags: [stop-layering]
  - name: copy pypirc
    copy:
      src: pypirc
      dest: ~/.pypirc
  - name: copy github app cert
    copy:
      src: bot-certificate.pem
      dest: /bot-certificate.pem

Let's use it to build a container image using ansible-bender:

$ ab build ./build_recipe.yml registry.fedoraproject.org/fedora:29 release-bot-ab

PLAY [this playbook is meant to populate a container image] *****************************************

TASK [Gathering Facts] ******************************************************************************
ok: [release-bot-ab-20181110-132800551894-cont]

TASK [install required packages] ********************************************************************
changed: [release-bot-ab-20181110-132800551894-cont]
caching the task result in an image 'release-bot-ab-20182910-132941'

TASK [install release bot] **************************************************************************
detected tag 'no-cache': no cache loading from now
changed: [release-bot-ab-20181110-132800551894-cont]
caching the task result in an image 'release-bot-ab-20183010-133011'

TASK [ensure release-bot is installed] **************************************************************
changed: [release-bot-ab-20181110-132800551894-cont]
caching the task result in an image 'release-bot-ab-20183010-133040'

TASK [copy config file] *****************************************************************************
changed: [release-bot-ab-20181110-132800551894-cont]
detected tag 'stop-layering', tasks won't be cached nor layered any more

TASK [copy pypirc] **********************************************************************************
changed: [release-bot-ab-20181110-132800551894-cont]

TASK [copy github app cert] *************************************************************************
changed: [release-bot-ab-20181110-132800551894-cont]

PLAY RECAP ******************************************************************************************
release-bot-ab-20181110-132800551894-cont : ok=7    changed=6    unreachable=0    failed=0

Getting image source signatures
Skipping fetch of repeat blob sha256:482c4d1bda84fe2df0cb5efb4c85192268ed46b79052c9e03ce2091e1ea010bf
Skipping fetch of repeat blob sha256:07432c540f7e1b8b975fe0d7c2168023df2ac89ab3b4bfa495c9e42984000e6e
Copying blob sha256:5d5032c00c1a125d71c03448b67f208bbe8f0530b9da5667b3221961277f4af5

 0 B / 1.74 KiB [--------------------------------------------------------------]
 1.74 KiB / 1.74 KiB [======================================================] 0s
Copying config sha256:021f4df16157f5d98774e197f37b2174015f7af98fa25305d9a99f44ea6f5673

 0 B / 672 B [-----------------------------------------------------------------]
 672 B / 672 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
Image 'release-bot-ab' was built successfully \o/

And when we rerun the build, the package installation is loaded from cache, while the task that installs the bot is actually performed again:

$ ab build ./build_recipe.yml registry.fedoraproject.org/fedora:29 release-bot-ab

PLAY [this playbook is meant to populate a container image] *****************************************

TASK [Gathering Facts] ******************************************************************************
ok: [release-bot-ab-20181110-134828482424-cont]

TASK [install required packages] ********************************************************************
loaded from cache: 'eaff56b73bc767db67362e67560d0ffe0546afabea9223ae2172811baf314d58'
skipping: [release-bot-ab-20181110-134828482424-cont]

TASK [install release bot] **************************************************************************
detected tag 'no-cache': no cache loading from now
changed: [release-bot-ab-20181110-134828482424-cont]
caching the task result in an image 'release-bot-ab-20184810-134847'

TASK [ensure release-bot is installed] **************************************************************
changed: [release-bot-ab-20181110-134828482424-cont]
caching the task result in an image 'release-bot-ab-20184810-134856'

TASK [copy config file] *****************************************************************************
changed: [release-bot-ab-20181110-134828482424-cont]
detected tag 'stop-layering', tasks won't be cached nor layered any more



With those two Ansible tags, developing the playbook was very efficient.

Can't wait to use ab more for my other projects in the future.

At the same time, ab is very fresh and probably not bug-free (I use it and there is a solid test suite, but I can't catch everything). While I was writing this blog post, I discovered two bugs which I have just fixed, and I am about to roll out 0.2.1.

So if you run into some strange behavior, please report it.

Happy hacking!

09 Nov 2018 5:03pm GMT

RHEL Developer: Why you should care about RISC-V

If you haven't heard about the RISC-V (pronounced "risk five") processor, it's an open-source (open-hardware, open-design) processor core created at the University of California, Berkeley. It exists in 32-bit, 64-bit, and 128-bit variants, although only 32- and 64-bit designs exist in practice. The news is full of stories about major hardware manufacturers (Western Digital, NVIDIA) looking at or choosing RISC-V cores for their products.

But why should you care? You can't just go to the local electronics boutique and buy a RISC-V laptop or blade server. RISC-V commodity hardware is either scarce or expensive. It's all still early in its lifespan and development, not yet ready for enterprise tasks. Yet it's still something that the average professional should be aware of, for a number of reasons.

By now everyone has heard about the Meltdown and Spectre issues, and related "bugs" users have been finding in Intel and AMD processors. This blog is not about how hard CPU design is - it's hard. Even harder than you realize. The fear created by these bugs was not that there was a problem in the design, but that users of these chips had no insight into how these "black boxes" worked, no way to review code that was outside their control, and no way to audit these processors for other security issues. We're at the mercy of the manufacturer to assure us there are no more bugs left (ha!).

The advantage of an open core here is that a company can audit the internal workings of a processor, at least in theory. If a bug is found by one chip manufacturer using a RISC-V core, the fix can be shared with other manufacturers. And certainly, if there are bugs to be exploited, the black hats and white hats will be able to find them (and fix them) much faster and sooner.

And what if you do want to try a RISC-V system today? Support for 64-bit RISC-V cores with common extensions (MAFD - multiply/divide, atomics, float, and double - aka the 'G' set) was added to the GNU C Library (glibc) in version 2.27, which means (for example) Fedora 28 contains RISC-V support. Bootable images are available, which run in a qemu emulator (standard in Fedora) or on real hardware (such as the SiFive HiFive Unleashed board).
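As a sketch of what "run in a qemu emulator" can look like, the invocation below boots a 64-bit RISC-V "virt" machine; every file name here is a placeholder, and the exact kernel/bootloader and disk image names depend on which Fedora RISC-V build you download:

```shell
# Boot a RISC-V 64-bit guest under qemu (file names are hypothetical):
qemu-system-riscv64 \
    -nographic -machine virt -smp 4 -m 2G \
    -kernel bbl \
    -append "console=ttyS0 ro root=/dev/vda" \
    -device virtio-blk-device,drive=hd0 \
    -drive file=stage4-disk.img,format=raw,id=hd0 \
    -device virtio-net-device,netdev=usernet \
    -netdev user,id=usernet
```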

A team of volunteers (of which I am one) is currently working on building the latest Fedora packages for RISC-V on a large number of emulators and a small number of hardware systems, such as this one (mine):

HiFive1 Board

An early access RISC-V development system. Upper right is the HiFive board. Bottom is a VC707 board which provides a PCIe bridge. Middle left is a PCIe riser board. At the top is a commodity PCIe SSD card. Connections on the right: USB serial console, ethernet, power. Additional mess is optional, and at the discretion of the desk owner.

But are there downsides to choosing an open core? Well, there are considerations that anyone should be aware of when choosing any core. Here are a few:

So, like all things engineering… YMMV.

In summary… any time something new comes around, in this case a new processor core and a new way of thinking about the intellectual property therein, it offers users more choices about where they want to put their efforts, resources, and risks. For me, having (and supporting) a new architecture gives me an opportunity to hone my skills as well as revisit old decisions about how platforms can be used portably.



The post Why you should care about RISC-V appeared first on RHD Blog.

09 Nov 2018 12:00pm GMT

Fedora Community Blog: FAW 2018 Day 5: “Encouraging crazy ideas”

Fedora Appreciation Week (FAW)

Today is Day 5 of Fedora Appreciation Week and the final day of published Contributor Stories. To help celebrate the Fedora Project, our fifteen-year anniversary, and the community of people that make Fedora what it is, the Community Operations team collected Contributor Stories from the community to feature here every day of Appreciation Week.

Have someone you want to thank? Do you want to share your appreciation with Fedora? See how you can celebrate 15 years of Fedora and participate in Fedora Appreciation Week over on the Fedora Magazine.

Some new stories came in during this week. Today, there are five stories to read from four people: Bhavin (bhavin192), Giannis Konstantinidis (giannisk), Eduard Lucena (x3mboy) and Dhanesh Sabane (dhanesh95).

Inspires and Motivates Contributors

Contributor Story #22 from Giannis Konstantinidis (giannisk)

I first met Christos about nine years ago at an IT exhibition. Christos helped me get started with contributing to the Fedora Project. More importantly, he has inspired and motivated me to further advance my contributions to free open-source projects. I have had the pleasure of collaborating with him during several Fedora and Mozilla events. Thank you, Christos!

Giannis Konstantinidis - contributor-stories#22


Contributor Story #21 from Bhavin (bhavin192)

It all started with installing Fedora 24 on my laptop. When I saw it during a workshop, I was really impressed by the interface and after that, it has been Fedora everywhere.

As me, akshay196 and few of my classmates were only users of Fedora at our university, we started to promote it among other students. It is fun to organize release parties and help others to install Fedora on their laptops. Later I started contributing to Transtats (tool to help g11n related tasks of package maintainers easy) with help of suanand. I learned a lot of new things while doing that.

I started participating in various test days. I learned a few things related to RPM packaging from pnemade and kushal.

Thanks to sinnykumari, I felt really welcomed when I attended CoreOS and Silverblue APAC meetings. Hopefully, I will contribute to those projects in some way.

cverna helped me to get started in picking up issues from Container SIG. As containers are something I really like, this is super fun for me 😄

Bhavin - contributor-stories#21

Receiving a big responsibility

Contributor Story #13 from Eduard Lucena (x3mboy)

"I was looking for something to do inside the community on 2016, I wasn't a developer or a designer, my work was on the telecom industry, so I just start looking the teams and joining IRC rooms. At some point I found the Marketing team, the work was to design strategies, that sounds really good because it wasn't technical. I join the first meeting and with a little of fear I raise my hand and say: "I think I can do that". Then Justin without any hesitation wrote:

#action x3mboy will send the email to the teams asking for the Talking Points.

I was totally afraid, it was my first interaction in the community and I was sending emails to the working teams that make this awesome distro to work. But his confidence in assign this task to me in my first meeting teaches me something awesome: You gain trust by working, the opportunity is there, just take it.

After a while, Justin start to be too busy with other teams and personal stuff and I just start to lead the marketing team. After that, we became personal friends, and I'm really proud of it.

Thanks so much for that first opportunity and thanks for being my friend."

Eduard Lucena - contributor-stories#13

Encouraging crazy ideas

Contributor Story #14 from Eduard Lucena (x3mboy)

"As part of the Marketing team we share some ideas and I came up with the idea of doing a podcast. It was a target that was a little explored in the past, but not with the Fedora Brand officially. I met Brian in person at Perú doing some Ambassadors stuff and I ask Brian: "Hey I have this idea of making a Podcast, nothing too big like the Command Line Heroes from Red Hat, but something to reach that public and introduce the Fedora Community with some interviews", and his answer was so incredible: "What do you need?"

I started to think, well, I have no experience with this, but probably a hosting, and after I pay myself for this service and reach my first interview, Brian ask me: "Why are you doing this alone? Let's allocate some budget, pay for the accounts you need, maybe getting you a Microphone, also a good idea is to have subtitles or something written." His support was awesome and it makes the Podcast so successful.

Thanks Brian for encouraging us to work, you're an inspiration for us."

Eduard Lucena - contributor-stories#14

Help will always be given at CommOps to those who ask for it.

Contributor Story #5 from Dhanesh Sabane (dhanesh95)

"I was in the third year of my CS graduation course when I started exploring Open Source communities and Fedora in particular. As someone who was completely new to almost everything, it was tough to get started. After creating my FAS ID back in September 2015, I remained a spectator for the next few months all the while trying to find some ground. At the end of my semester, April 2016, I decided to take action and joined the CommOps group - my first step towards being a Fedora contributor. I dropped by at one of the meetings and that's when I met Justin and Remy. Both of them helped me understand the way CommOps works and I was fortunate enough to get assigned a task at my first meeting itself. (Thanks Justin and Remy for trusting a complete newbie 😊 ) As it was my first task, Remy was always around to help me and answer all of my newbie questions.

Over the next couple of years, I worked on quite a few things in CommOps and a couple of tasks in Marketing, all thanks to Justin. He also helped me with an IRC bouncer which I very much rely on for all my communication related to Fedora. He always has this positive energy around him and his zeal to always do more for the community really inspires me. We, in CommOps, always joke around about Justin having minions to help out with all his tasks because he is always ahead of everyone in the team. He is the kind of person who'll always stay true to the "Friends" foundation. The kind of person you can look up to and feel free to ask for any kind of help.

If I were to wish for something, I'd wish to get as much knowledge as Remy and the same enthusiasm as Justin's. Thank you Remy and Justin for everything!"

Dhanesh Sabane - contributor-stories#5

<figure class="wp-block-image">A spontaneous Fedora Project meet-up in Pune, India. Left to right: Justin W. Flory, Amita Sharma, Sumantro Mukherjee, Dhanesh Sabane<figcaption>A spontaneous Fedora Project meet-up in Pune, India. Left to right: Justin W. Flory, Amita Sharma, Sumantro Mukherjee, Dhanesh Sabane</figcaption></figure>

The post FAW 2018 Day 5: "Encouraging crazy ideas" appeared first on Fedora Community Blog.

09 Nov 2018 8:15am GMT

Fedora Magazine: How Do You Appreciate Fedora?

This week is the first annual Fedora Appreciation Week. As an extension of the How Do You Fedora? series, this article presents how past interviewees appreciate Fedora. The Fedora Project defines four common values that it encourages all contributors and community members to uphold. Those values are known as the Four Foundations. One such value, Friends, represents the vibrant community of contributors and users from across the world, all working towards the same goal: advancing free software.

Like any community, the Fedora community evolves over time. Each contributor's story is a little different. That diversity is what makes the Fedora community so strong. Kernel contributor Justin Forbes puts it succinctly:

Fedora is the community. So much of what Fedora is now came as a direct result of community effort.

Fedora is successful today because of the many contributors, both past and present, who have put their time and effort into the project. Here are some of their stories, and how they appreciate others in the community.

You can click on any of the story headers to see our original interviews with these notable people.

Maria Leandro's story

Fedora has been a huge part of my personal and professional life, so choosing a top moment would leave several fantastic stories behind. I do remember that first time I went to a Flock and meet personally people that I had been interacting, learning from and teaching for almost 5 years. For people like us who spend most of our time behind a screen, having that personal meeting can be life changing. That particular moment is not about the goals or the tasks that need to be done, that moment is the prize to people who work for a common well, for those who change people's life without asking anything in return, it's the moment when you put a face to those commits and bugs, to those wallpapers and docs; it's the moment when we stop being a random robot name to be real… that moment when we hug each other and greet, that has to be the best moment in all Open Source History.

Maria has two favorite wallpapers from Fedora releases:

Fedora Core 7 Wallpaper

Fedora 26 Wallpaper

She sends a special thank you to appreciate Máirín Duffy, who leads the Fedora design team:

Definitely my hero, mo (mizmo). She pushed me to be the designer I am today, always had a chat to solve any doubt I had, and is the most friendly person you can meet.

Maria's most memorable release was early on:

Probably Fedora 6, since it was the first time I did any artwork at all for the community.

Michael Larabel's story

Without a doubt the best Fedora memory with friends would have to be celebrating the Fedora "Beefy Miracle" release back in 2012 at LinuxTag in Berlin where the Fedora booth played it so well and was serving up free hot dogs to go with the delicious beverages of the region. Lots of good catching up with open-source contributors, discussing new ideas, and more during the wonderful community-driven open-source events particularly in Europe.

Michael's favorite Fedora desktop wallpaper will be familiar to current readers of the Magazine. It's the brand new wallpaper for Fedora 29:

Fedora 29 Wallpaper

Michael also sent a special thank you to a very special contributor who died in 2013:

The late Seth Vidal earns much respect for his contributions to Fedora, Yum, and Red Hat communities. His technical achievements were great and he was a kind and interesting person at conferences, etc.

His favorite release was Fedora Core 3:

Fedora Core 3 certainly holds a special place in my heart as it was the first Fedora release I really became intrigued by as it was in much better shape than FC1/FC2. Since there it improved while overall from say Fedora 26 and newer, each release has felt particularly polished and keeps getting better - including Fedora 29 and my experience with it thus far on many test boxes.

Julita Inca's story

Julita shared with us this photo from a recent Women in Fedora event, celebrating the positive impact and contributions of women in the Fedora community:

Fedora WOmen Event with Julita Inca

Her favorite Fedora wallpaper is from the Fedora 17 release:

Fedora 17 Wallpaper

Julita also took time to appreciate one of Fedora's amazing Czech community contributors and organizers:

The person I admired since the beginning was Jiri Eischmann! He is a polite person and very active in his community. He continues to inspire me to this day! I hope to soon attend a celebration of Fedora in Europe where I am living now.

Author's Postscript

As a fellow Fedoran I would like to thank each of the people who responded to my questions and all of the previous interviewees. Writing the How Do You Fedora? series has been immensely rewarding for me. I have learned about lots of new applications and uses of Fedora. The greatest impact of the series is that it reignites my faith in the goodness of the people who make up the Fedora community with each installment.

09 Nov 2018 8:00am GMT

Luya Tshimbalanga: HP Envy x360 Convertible Ryzen 2500u update

Nearly one month later, the HP Envy x360 Convertible 15 powered by the Ryzen 2500U is running smoother on kernel 4.19.0, with some issues:

On the positive side, I was impressed by the modular, upgrade-friendly design of the HP Envy x360, thanks to the excellent HP documentation. The board can be replaced with a more powerful Ryzen 7 APU version. Adding memory turned out to be very easy once the procedure was fully followed. Currently the upgrade has 16 GB RAM and a 1 TB SSD, drastically improving the overall performance. Granted, the hardware is not meant for heavy 3D gaming, but it is powerful enough for visual editing and some 3D rendering.

Overall, the hardware is a very capable 2-in-1 Linux machine once the issues are ironed out, hopefully as soon as possible. The user community has provided suggestions; the ball is now in the court of the upstream maintainers and vendors to improve the situation so testers can verify the fixes.

09 Nov 2018 2:39am GMT

08 Nov 2018

feedFedora People

mythcat: Fedora 29 : System Storage Manager tool.

This Linux tool comes with this intro:
System Storage Manager provides an easy to use command line interface to manage your storage using various technologies like lvm, btrfs, encrypted volumes and more.
Today I will show an easy way to resize a volume and its file system.
First you need to install it with the dnf tool:

[root@desk mythcat]# dnf install system-storage-manager
Last metadata expiration check: 1:11:16 ago on Thu 08 Nov 2018 08:04:29 PM EET.
Package system-storage-manager-1.2-1.fc29.noarch is already installed.
Dependencies resolved.
Nothing to do.

Use this command to extend the volume by 100% of the free space:

[root@desk mythcat]# ssm resize -s +100%FREE /dev/mapper/fedora-root 

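On an LVM volume, that single ssm command stands in for the classic two-step grow. A rough equivalent sketch (the device name is taken from the example above; the filesystem command depends on what the volume holds):

```shell
# Grow the logical volume into all remaining free space in the pool:
lvextend -l +100%FREE /dev/mapper/fedora-root
# Then grow the filesystem to match -- for ext4:
resize2fs /dev/mapper/fedora-root
# (for XFS the grow step would instead be: xfs_growfs <mountpoint>)
```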
To display information about all detected devices, pools, volumes and snapshots:

[root@desk mythcat]# ssm list
Device Free Used Total Pool Mount point

Check the file system consistency on the volume (this cannot be used with swap):

[root@desk mythcat]# ssm check

You can read more about this tool here.

08 Nov 2018 7:35pm GMT

Remi Collet: PHP version 7.1.24 and 7.2.12

RPM of PHP version 7.2.12 are available in remi repository for Fedora 28-29 and in remi-php72 repository for Fedora 26-27 and Enterprise Linux 6 (RHEL, CentOS).

RPM of PHP version 7.1.24 are available in remi repository for Fedora 26-27 and in remi-php71 repository for Enterprise Linux (RHEL, CentOS).

emblem-notice-24.pngNo security fixes this month, so no updates for versions 5.6.38 and 7.0.32.

emblem-important-2-24.pngPHP version 5.5 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

emblem-notice-24.pngInstallation : use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

And soon in the official updates:

emblem-important-2-24.pngTo be noticed :

emblem-notice-24.pngInformation, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

08 Nov 2018 4:09pm GMT