15 Jun 2021

feedFedora People

Javier Martinez Canillas: The curious case of the ghostly modalias

I was finishing my morning coffee at the Fedora ARM mystery department when a user report came to my attention: the tpm_tis_spi driver was not working on a board that had a TPM device connected through SPI.

There was no /dev/tpm0 character device present in the system, even though the driver was built as a module and the Device Tree (DT) passed to the kernel had a node with an "infineon,slb9670" compatible string.

Peter Robinson chimed in and mentioned that he had briefly looked at this case before. The problem, he explained, is that the module isn't auto-loaded, but manually loading it makes things work.

At the beginning he thought that this was just a common issue of a driver not having module alias information. That would leave kmod not knowing that the module has to be loaded when the kernel reports a MODALIAS uevent as a consequence of the SPI device being registered.

But when checking the module to confirm that theory, he found that there were alias entries:

$ modinfo drivers/char/tpm/tpm_tis_spi.ko | grep alias
alias:          of:N*T*Cgoogle,cr50C*
alias:          of:N*T*Cgoogle,cr50
alias:          of:N*T*Ctcg,tpm_tis-spiC*
alias:          of:N*T*Ctcg,tpm_tis-spi
alias:          of:N*T*Cinfineon,slb9670C*
alias:          of:N*T*Cinfineon,slb9670
alias:          of:N*T*Cst,st33htpm-spiC*
alias:          of:N*T*Cst,st33htpm-spi
alias:          spi:cr50
alias:          spi:tpm_tis_spi
alias:          acpi*:SMO0768:*

Since the board uses DT to describe the hardware topology, the TPM device should have been registered by the Open Firmware (OF) subsystem. That should have caused the kernel to report a "MODALIAS=of:NspiTCinfineon,slb9670" uevent, which should have matched the "of:N*T*Cinfineon,slb9670" module alias entry.

But when digging deeper into this issue, things started to get stranger. Looking at the uevent sysfs entry for this SPI device, he found that the kernel was not reporting an OF modalias but a legacy SPI modalias instead: "MODALIAS=spi:slb9670".

But how come, a user asked? The device is registered using DT, not platform code! Where is this modalias coming from? Is this legacy SPI device a ghost?

Peter said that he didn't believe in paranormal events and that there should be a reasonable explanation. So armed with grep, he wanted to get to the bottom of this but got preempted by more urgent things to do.

Coincidentally, I had chased down that same ghost many moons ago. And it's indeed not a spirit from the board-files dimension, but simply incorrect behavior in the uevent logic of the SPI subsystem.

The reason is that the SPI uevent handler always reports a MODALIAS of the form "spi:foobar", even for devices that are registered through DT. This leads to the situation described above, and is best explained by looking at the SPI subsystem code:

static int spi_uevent(struct device *dev, struct kobj_uevent_env *env)
{
        const struct spi_device         *spi = to_spi_device(dev);
        int rc;

        rc = acpi_device_uevent_modalias(dev, env);
        if (rc != -ENODEV)
                return rc;

        return add_uevent_var(env, "MODALIAS=%s%s", SPI_MODULE_PREFIX, spi->modalias);
}

Conversely, this is what the platform subsystem uevent handler does (which properly reports OF module aliases):

static int platform_uevent(struct device *dev, struct kobj_uevent_env *env)
{
        struct platform_device  *pdev = to_platform_device(dev);
        int rc;

        /* Some devices have extra OF data and an OF-style MODALIAS */
        rc = of_device_uevent_modalias(dev, env);
        if (rc != -ENODEV)
                return rc;

        rc = acpi_device_uevent_modalias(dev, env);
        if (rc != -ENODEV)
                return rc;

        add_uevent_var(env, "MODALIAS=%s%s", PLATFORM_MODULE_PREFIX,
                        pdev->name);
        return 0;
}

Fixing the SPI core would be trivial, but the problem is that there are just too many drivers and Device Tree descriptions relying on the current behavior.

It should be possible to change the core, but first all these drivers and DTs have to be fixed. For example, the I2C subsystem had the same issue, but it has already been resolved there.

In the meantime, a workaround could be to add to the legacy SPI device ID table all the entries found in the OF device ID table. That way, a platform using, for example, a DT node with the compatible "infineon,slb9670" will match against the alias "spi:slb9670", which will be present in the module.

And that's exactly what the proposed fix for the tpm_tis_spi driver does.
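For illustration, here is a minimal sketch of what such a legacy SPI device ID table could look like (the actual patch may differ in its details):

/* Sketch: mirror the OF compatible strings (minus the vendor prefix)
 * in the legacy SPI device ID table, so that "spi:..." aliases are
 * generated for the module as well. */
static const struct spi_device_id tpm_tis_spi_id[] = {
        { "tpm_tis_spi" },
        { "cr50" },
        { "slb9670" },      /* matches "infineon,slb9670" */
        { "st33htpm-spi" }, /* matches "st,st33htpm-spi" */
        {}
};
MODULE_DEVICE_TABLE(spi, tpm_tis_spi_id);

With entries like these in place, the module carries both the OF and the legacy SPI aliases: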

$ modinfo drivers/char/tpm/tpm_tis_spi.ko | grep alias
alias:          of:N*T*Cgoogle,cr50C*
alias:          of:N*T*Cgoogle,cr50
alias:          of:N*T*Ctcg,tpm_tis-spiC*
alias:          of:N*T*Ctcg,tpm_tis-spi
alias:          of:N*T*Cinfineon,slb9670C*
alias:          of:N*T*Cinfineon,slb9670
alias:          of:N*T*Cst,st33htpm-spiC*
alias:          of:N*T*Cst,st33htpm-spi
alias:          spi:cr50
alias:          spi:tpm_tis_spi
alias:          spi:slb9670
alias:          spi:st33htpm-spi
alias:          acpi*:SMO0768:*

Until the next mystery!

15 Jun 2021 3:39pm GMT

Kiwi TCMS: Thank you for downloading Kiwi TCMS 500000 times

"500K banner"

We are happy to announce that Kiwi TCMS has been downloaded more than 500000 times via Docker Hub! You can check the real-time stats here.

Thank you very much and Happy Testing!


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

15 Jun 2021 11:45am GMT

Fedora Community Blog: Heroes of Fedora (HoF) – F34 Final

Hello fellow testers, welcome to the Fedora Linux 34 Final installation of Heroes of Fedora! In this post, we'll look at the stats concerning the testing of Fedora Linux 34 Final. The purpose of Heroes of Fedora is to provide a summation of testing activity on each milestone release of Fedora. Without community support, Fedora would not exist, so thank you to all who contributed to this release! Without further ado, let's get started!

Updates Testing

Test period: Fedora Linux 34 Final (2021-04-06 - 2021-04-27)
Testers: 128
Comments¹: 654


Name Updates commented
Geraldo S. Simião Kutz (geraldosimiao) 103
Pete Walter (pwalter) 58
bojan 55
atim 54
Dmitri Smirnov (cserpentis) 49
Lukáš Růžička (lruzicka) 31
Basil Eric Rabi (basilrabi) 24
František Zatloukal (frantisekz) 18
Charles-Antoine Couret (renault) 17
itrymybest80 11
Adam Williamson (adamwill) 11
Onuralp SEZER (thunderbirdtr) 11
Hans Müller (cairo) 9
Kamil Páral (kparal) 8
Colin Thomson (g6avk) 8
Chris Murphy (chrismurphy) 7
Nie Lili (lnie) 6
Jan Kuparinen (copperi) 6
Ben Cotton (bcotton) 5
Neal Gompa (ngompa) 5
Jens Petersen (petersen) 5
Miro Hrončok (churchyard) 4
Ashish Kumar (akumar99) 4
Dennis Keefe (dkeefe) 4
brett hassall (bretth) 4
Otto Urpelainen (oturpe) 4
Kalvin Lee (kalvinist) 4
Martin Wolf (generalprobe) 3
sixpack13 3
Kalev Lember (kalev) 3
Vitaly Zaitsev (xvitaly) 3
Fabio Valentini (decathorpe) 3
…and also 96 other reporters who created fewer than 3 reports each, but 114 reports combined!


¹ If a person provides multiple comments to a single update, it is considered as a single comment. Karma value is not taken into account.

Validation Testing

Test period: Fedora Linux 34 Final (2021-04-06 - 2021-04-27)
Testers: 17
Reports: 575
Unique referenced bugs: 7


Name Reports submitted Referenced bugs¹
lruzicka 169 1949427 1950258 (2)
pwhalen 125
geraldosimiao 90 1929643 1952518 (2)
frantisekz 57
nielsenb 24 1950129 1950171 (2)
sumantrom 21
jlinton 21 1952748 (1)
coremodule 13
kparal 12
robatino 12
thunderbirdtr 10
lnie 9
jpbn 5
cmurf 2
tflink 2
nb 2
alciregi 1


¹ This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Test period: Fedora Linux 34 Final (2021-04-06 - 2021-04-27)
Reporters: 385
New reports: 670


Name Reports submitted¹ Excess reports² Accepted blockers³
Miro Hrončok 20 6 (30%) 0
Bruno Porceli Alaniz 17 0 (0%) 0
Luna Jernberg 15 3 (20%) 0
Lukas Ruzicka 10 5 (50%) 0
ricky.tigg at gmail.com 10 1 (10%) 0
Chris Murphy 9 2 (22%) 0
lnie 9 5 (55%) 0
Sampson Fung 8 0 (0%) 0
Peter Hutterer 7 0 (0%) 0
Rickard 7 0 (0%) 0
Adam Williamson 6 0 (0%) 2
kxra at riseup.net 6 0 (0%) 1
cooperbang at disroot.org 6 0 (0%) 0
Heldwin 6 0 (0%) 0
Kristo Zondagh 6 0 (0%) 0
Kurt Heine 6 0 (0%) 0
Michael Catanzaro 6 0 (0%) 0
Ryan 6 0 (0%) 0
František Zatloukal 5 0 (0%) 2
Paul Whalen 5 0 (0%) 2
Basil Eric Rabi 5 0 (0%) 0
Ian Laurie 5 0 (0%) 0
OpenQA Coconut 5 5 (100%) 0
Alexander Zhang 4 0 (0%) 0
Garry T. Williams 4 0 (0%) 0
Jeremy Linton 4 0 (0%) 0
Rex Dieter 4 0 (0%) 0
Thomas Citharel 4 0 (0%) 0
Vitaliy Grishin 4 0 (0%) 0
Willy Kuchler 4 0 (0%) 0
…and also 355 other reporters who created fewer than 4 reports each, but 457 reports combined!


¹ The total number of new reports (including "excess reports"). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn't take the numbers too seriously, but just as interesting and fun data.
² Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
³ This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people's reports as blockers, but this is not reflected in this number.

The post Heroes of Fedora (HoF) - F34 Final appeared first on Fedora Community Blog.

15 Jun 2021 8:00am GMT

Robbi Nespu: openSUSE Virtual Conf. 21 t-shirt (FREE!)

I just checked my email and found a message from ddemaio <ddemaio@suse.de> welcoming me to this year's openSUSE Conference. They gave me an access token and said that I can claim a free t-shirt. Nice!


15 Jun 2021 4:11am GMT

14 Jun 2021

feedFedora People

Fedora Community Blog: Pride Month Celebration

The Fedora Diversity & Inclusion team is hosting this week's Fedora Social Hour to celebrate Pride month. Come join us to chat, catch up, and have some fun playing Pictionary. Everyone is welcome to join the event!

Date: Thursday, June 17, 2021
Time: 14:00 UTC
Location: #fedora-social-hour on Matrix.org
FedoCal: https://calendar.fedoraproject.org/meeting/9743/

Hope to see you there!

- Fedora Diversity & Inclusion Team

The post Pride Month Celebration appeared first on Fedora Community Blog.

14 Jun 2021 6:42pm GMT

Fedora Magazine: Fedora Classroom: RPM Packaging 101

Fedora Classroom sessions return with a session on RPM packaging targeted at beginners.

About the session

RPMs are the smallest building blocks of the Fedora Linux system. This session will walk through the basics of building an RPM from source code. You will learn how to set up your Fedora system to build RPMs, how to write a spec file that adheres to the Fedora Packaging Guidelines, and how to use it to generate RPMs for distribution. The session will also provide a brief overview of the complete Fedora packaging pipeline.

While no prior knowledge of building RPMs or building software from its source code is required, some software development experience will be useful. The hope is to help users learn the skills required to build and maintain RPM packages, and to encourage them to contribute to Fedora by joining the package collection maintainers.

When and where

The classroom session will be organised on the BlueJeans video platform at 1200 UTC on June 17, 2021 and is expected to last an hour.

Topics covered in the session

Prerequisites

Useful reading

About the instructor

Ankur Sinha has been maintaining packages in Fedora for more than a decade and is currently both a sponsor to the package maintainers group, and a proven packager. Ankur primarily focuses on maintaining neuroscience related software for the NeuroFedora Special Interest Group and contributes to other parts of the community wherever possible.

Fedora Classroom is a project aimed at spreading knowledge on Fedora-related topics. If you would like to propose a session, feel free to open a ticket here with the tag classroom. If you are interested in taking a proposed session, please let us know, and once you take it, you will be awarded the Sensei Badge as a token of appreciation. Recordings from the previous sessions can be found here.

14 Jun 2021 8:00am GMT

Josh Bressers: Episode 275 – What in the @#$% is going on with ransomware?

Josh and Kurt talk about why it seems like the world of ransomware has gotten out of control in the last few weeks. Every day there's some new and more bizarre ransomware story than we had yesterday.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_275_What_in_the_is_going_on_with_ransomware.mp3

Show Notes

14 Jun 2021 12:01am GMT

12 Jun 2021

feedFedora People

Pablo Iranzo Gómez: Geo replication with syncthing

Some years ago I started using geo replication to keep a copy of all my pictures, docs, etc.

After using BitTorrent Sync and later Resilio Sync (even if I didn't fully like the idea of it not being open source), I gave up. My NAS with 16 GB of RAM, even if a bit older (an HP N54L), seemed not to have enough memory to run it and was constantly swapping.

Checking the list of processes pointed to the rslsync process as the culprit; apparently the cause is the way it handles the files it controls.

The problem is that even if a file was deleted long ago, rslsync keeps it in the database… and in memory. After checking with their support (as I had a family license), the workaround was to remove the folder and create a new one, which in turn meant having to configure it again on all the systems that kept a copy.

I finally decided to give Syncthing another try, years after my last evaluation.

Syncthing now covers some of the features I was using with rslsync:

In addition, it includes systemd support and it's packaged in the operating system, making it really easy to install and update (rslsync had gone without updates for almost a year).

The only caveat, if using Debian, is to use the repository they provide, as the package included in the distribution is really old and causes some issues with remote encrypted peers.

Starting it as a user is very simple:

systemctl enable syncthing@user
systemctl start syncthing@user

Once the process is started, the browser can be pointed locally at http://127.0.0.1:8384 to start configuration.

One difference is that in rslsync having the secret for the key is enough; in Syncthing you need to add the hosts in both directions to accept them and be able to share data.

One feature that eases this is that one host can be configured as a presenter, which allows other systems to inherit the known list of hosts from it, making the two-way initial introduction easier.

The best outcome is that the use (or abuse) of RAM has been completely slashed compared to what rslsync was using.

Currently, the only issue is that for some computers in the local network the sync was a bit slow (some remote underpowered devices even synced faster than local ones), but some of the copies were already fully in sync.

The web interface is not bad, even if, compared to what I was used to, it doesn't show as much detail about host status at a glance: you have to open each individual folder to see how it is going, as the general view only shows the percentage of completion and the amount of data still missing to be synced.

Hope you like it!

12 Jun 2021 7:40pm GMT

11 Jun 2021

feedFedora People

Fedora Community Blog: Friday’s Fedora Facts: 2021-23

Here's your weekly Fedora report. Read what happened this week and what's coming up. Your contributions are welcome (see the end of the post)!

Don't forget to take the Annual Fedora Survey and claim your badge!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs


Conference Location Date CfP
AnsibleFest virtual 29-30 Sep closes 29 June
Nest With Fedora virtual 5-8 Aug closes 16 July


Help wanted

Prioritized Bugs


Bug ID Component Status
1953675 kf5-akonadi-server NEW


Upcoming meetings

Releases

Fedora Linux 35

Schedule

For the full schedule, see the schedule website.

Changes


Proposal Type Status
Make btrfs the default file system for Fedora Cloud System-Wide FESCo #2617
Sphinx 4 Self-Contained FESCo #2620
Build Fedora Cloud Images with Hybrid BIOS+UEFI Boot Support System-Wide FESCo #2621
Replace the Anaconda product configuration files with profiles Self-Contained Announced
Use yescrypt as default hashing method for shadow passwords System-Wide Announced


Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday's Fedora Facts: 2021-23 appeared first on Fedora Community Blog.

11 Jun 2021 8:29pm GMT

Remi Collet: PHP 8.1 as Software Collection

Version 8.1.0alpha1 is released. It's still in development and will soon enter the stabilization phase for developers, and the test phase for users.

RPMs of this upcoming version of PHP 8.1 are available in the remi repository for Fedora 33, 34 and Enterprise Linux 7, 8 (RHEL, CentOS, ...) in a fresh new Software Collection (php81), allowing its installation beside the system version.

As I strongly believe in the potential of SCLs to provide a simple way to install various versions simultaneously, and as I think it is useful to offer this feature to allow developers to test their applications, sysadmins to prepare a migration, or anyone to use this version for some specific application, I decided to create this new SCL.

I also plan to propose this new version as a Fedora 36 change (as F35 should be released a few weeks before PHP 8.1.0).

Installation :

yum install php81

To be noticed:

Also read other entries about SCL, especially the description of my PHP workstation.

$ module load php81
$ php --version
PHP 8.1.0alpha1 (cli) (built: Jun  8 2021 16:24:50) (NTS gcc x86_64)
Copyright (c) The PHP Group
Zend Engine v4.1.0-dev, Copyright (c) Zend Technologies

As always, your feedback is welcome; a dedicated SCL forum is open.

Software Collections (php81)

11 Jun 2021 4:34am GMT

Josef Strzibny: Moving ActionCable over to Webpacker

This week, I upgraded a little demo application for my book Deployment from Scratch from Rails 6 to Rails 6.1. Since I showcase WebSockets with ActionCable and Redis, I needed to move the ActionCable CoffeeScript from Sprockets to Webpacker.

I started with dependencies. The original application could lose uglifier as Sprockets' JavaScript processor, and coffee-rails in favour of plain JavaScript. I replaced them with the webpacker gem in the Gemfile:

gem 'webpacker', '~> 5.4'

Once I generated a new Gemfile.lock, I could run the webpacker:install task that creates many files (which I won't get into here):

$ rails webpacker:install

In case you don't see the new Webpacker tasks, make sure to delete the Rails cache:

$ rails tmp:cache:clear

It took me a while to realize why I didn't see this Webpacker Rake task.

Once that's done, let's see how to move the JavaScript entry point file.

// app/assets/javascripts/application.js
//= require rails-ujs
//= require activestorage
//= require turbolinks
//= require_tree .

All these requirements should now happen in the new app/javascript directory:

// app/javascript/packs/application.js
import Rails from "@rails/ujs"
import Turbolinks from "turbolinks"
import * as ActiveStorage from "@rails/activestorage"
import "channels"

Rails.start()
ActiveStorage.start()

After I had my new application.js ready, I changed javascript_include_tag to javascript_pack_tag in views:

<%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %>

Then I updated the channels. I went from this:

// app/assets/javascripts/cable.js
// Action Cable provides the framework to deal with WebSockets in Rails.
// You can generate new channels where WebSocket features live using the `rails generate channel` command.
//
//= require action_cable
//= require_self
//= require_tree ./channels

(function() {
  this.App || (this.App = {});

  App.cable = ActionCable.createConsumer();

}).call(this);

To new channels structure with channels/index.js and channels/consumer.js:

// app/javascript/channels/index.js
// Load all the channels within this directory and all subdirectories.
// Channel files must be named *_channel.js.

const channels = require.context('.', true, /_channel\.js$/)
channels.keys().forEach(channels)


// app/javascript/channels/consumer.js
// Action Cable provides the framework to deal with WebSockets in Rails.
// You can generate new channels where WebSocket features live using the `bin/rails generate channel` command.

import { createConsumer } from "@rails/actioncable"

export default createConsumer()

And then I rewrote my original subscription file that looked like this:

// app/assets/javascripts/cable/subscriptions/document.coffee
App.cable.subscriptions.create { channel: "DocumentChannel" },
  connected: () ->

  received: (data) ->
    console.log("Received data.")

    alert(data["title"])

To a JavaScript version using the previous consumer.js file:

// app/javascript/channels/documents_channel.js
import consumer from "./consumer"

consumer.subscriptions.create(
  { channel: "DocumentChannel" },
  {
    connected() {},
    received(data) {
      console.log("Received data.")
      alert(data["title"])
    }
  }
)

At this point all the new files are in place; I just had to delete the old app/assets/javascripts directory:

$ rm -rf app/assets/javascripts

And remove it from the manifest (the second line):

// app/assets/config/manifest.js
//= link_tree ../images
//= link_directory ../javascripts .js
//= link_directory ../stylesheets .css

Although it's a small app with only one channel, you might find this useful if you haven't moved to Webpacker yet.

11 Jun 2021 12:00am GMT

10 Jun 2021

feedFedora People

Lennart Poettering: The Wondrous World of Discoverable GPT Disk Images

TL;DR: Tag your GPT partitions with the right, descriptive partition types, and the world will become a better place.

A number of years ago we started the Discoverable Partitions Specification which defines GPT partition type UUIDs and partition flags for the various partitions Linux systems typically deal with. Before the specification all Linux partitions usually just used the same type, basically saying "Hey, I am a Linux partition" and not much else. With this specification the GPT partition type, flags and label system becomes a lot more expressive, as it can tell you:

  1. What kind of data a partition contains (i.e. is this swap data, a file system or Verity data?)
  2. What the purpose/mount point of a partition is (i.e. is this a /home/ partition or a root file system?)
  3. What CPU architecture a partition is intended for (i.e. is this a root partition for x86-64 or for aarch64?)
  4. Shall this partition be mounted automatically? (i.e. without specifically being configured via /etc/fstab)
  5. And if so, shall it be mounted read-only?
  6. And if so, shall the file system be grown to its enclosing partition size, if smaller?
  7. Which partition contains the newer version of the same data (i.e. multiple root file systems, with different versions)

By embedding all of this information inside the GPT partition table disk images become self-descriptive: without requiring any other source of information (such as /etc/fstab) if you look at a compliant GPT disk image it is clear how an image is put together and how it should be used and mounted. This self-descriptiveness in particular breaks one philosophical weirdness of traditional Linux installations: the original source of information which file system the root file system is, typically is embedded in the root file system itself, in /etc/fstab. Thus, in a way, in order to know what the root file system is you need to know what the root file system is. 🤯 🤯 🤯

(Of course, the way this recursion is traditionally broken up is by then copying the root file system information from /etc/fstab into the boot loader configuration, resulting in a situation where the primary source of information for this - i.e. /etc/fstab - is actually mostly irrelevant, and the secondary source - i.e. the copy in the boot loader - becomes the configuration that actually matters.)

Today, the GPT partition type UUIDs defined by the specification have been adopted quite widely, by distributions and their installers, as well as a variety of partitioning tools and other tools.

In this article I want to highlight how the various tools the systemd project provides make use of the concepts the specification introduces.

But before we start with that, let's underline why tagging partitions with these descriptive partition type UUIDs (and the associated partition flags) is a good thing, besides the philosophical points made above.

  1. Simplicity: in particular OS installers become simpler - adjusting /etc/fstab as part of the installation is not necessary anymore, as the partitioning step already put all information into place for assembling the system properly at boot. i.e. installing doesn't mean that you always have to get fdisk and /etc/fstab into place, the former suffices entirely.

  2. Robustness: since partition tables mostly remain static after installation the chance of corruption is much lower than if the data is stored in file systems (e.g. in /etc/fstab). Moreover by associating the metadata directly with the objects it describes the chance of things getting out of sync is reduced. (i.e. if you lose /etc/fstab, or forget to rerun your initrd builder you still know what a partition is supposed to be just by looking at it.)

  3. Programmability: if partitions are self-descriptive it's much easier to automatically process them with various tools. In fact, this blog story is mostly about that: various systemd tools can naturally process disk images prepared like this.

  4. Alternative entry points: on traditional disk images, the boot loader needs to be told which kernel command line option root= to use, which then provides access to the root file system, where /etc/fstab is then found which describes the rest of the file systems. Where precisely root= is configured for the boot loader highly depends on the boot loader and distribution used, and is typically encoded in a Turing complete programming language (Grub…). This makes it very hard to automatically determine the right root file system to use, to implement alternative entry points to the system. By alternative entry points I mean other ways to boot the disk image, specifically for running it as a systemd-nspawn container - but this extends to other mechanisms where the boot loader may be bypassed to boot up the system, for example qemu when configured without a boot loader.

  5. User friendliness: it's simply a lot nicer for the user looking at a partition table if the partition table explains what is what, instead of just saying "Hey, this is a Linux partition!" and nothing else.

Uses for the concept

Now that we cleared up the Why?, lets have a closer look how this is currently used and exposed in systemd's various components.

Use #1: Running a disk image in a container

If a disk image follows the Discoverable Partition Specification then systemd-nspawn has all it needs to just boot it up. Specifically, if you have a GPT disk image in a file foobar.raw and you want to boot it up in a container, just run systemd-nspawn -i foobar.raw -b, and that's it (you can specify a block device like /dev/sdb too if you like). It becomes easy and natural to prepare disk images that can be booted either on a physical machine, inside a virtual machine manager or inside such a container manager: the necessary meta-information is included in the image, easily accessible before actually looking into its file systems.

Use #2: Booting an OS image on bare-metal without /etc/fstab or kernel command line root=

If a disk image follows the specification in many cases you can remove /etc/fstab (or never even install it) - as the basic information needed is already included in the partition table. The systemd-gpt-auto-generator logic implements automatic discovery of the root file system as well as all auxiliary file systems. (Note that the former requires an initrd that uses systemd, some more conservative distributions do not support that yet, unfortunately). Effectively this means you can boot up a kernel/initrd with an entirely empty kernel command line, and the initrd will automatically find the root file system (by looking for a suitably marked partition on the same drive the EFI System Partition was found on).

(Note, if /etc/fstab or root= exist and contain relevant information, they always take precedence over the automatic logic. This is in particular useful to tweak things by specifying additional mount options and such.)

Use #3: Mounting a complex disk image for introspection or manipulation

The systemd-dissect tool may be used to introspect and manipulate OS disk images that implement the specification. If you pass the path to a disk image (or block device) it will extract various bits of useful information from the image (e.g. what OS is this? what partitions to mount?) and display it.

With the --mount switch a disk image (or block device) can be mounted to some location. This is useful for looking what is inside it, or changing its contents. This will dissect the image and then automatically mount all contained file systems matching their GPT partition description to the right places, so that you subsequently could chroot into it. (But why chroot if you can just use systemd-nspawn? 😎)
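For example (a quick sketch; foobar.raw and the mount point are placeholders):

$ systemd-dissect foobar.raw
$ systemd-dissect --mount foobar.raw /mnt/image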

Use #4: Copying files in and out of a disk image

The systemd-dissect tool also has two switches --copy-from and --copy-to which allow copying files out of or into a compliant disk image, taking all included file systems and the resulting mount hierarchy into account.
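A hypothetical invocation could look like this (both paths are made up for illustration):

$ systemd-dissect --copy-from foobar.raw /etc/os-release /tmp/image-os-release
$ systemd-dissect --copy-to foobar.raw ./my.conf /etc/my.conf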

Use #5: Running services directly off a disk image

The RootImage= setting in service unit files accepts paths to compliant disk images (or block device nodes), and can mount them automatically, running service binaries directly off them (in chroot() style). In fact, this is the base for the Portable Service concept of systemd.
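A minimal sketch of such a unit (the image path and binary are placeholders, not taken from the post):

# Hypothetical /etc/systemd/system/myservice.service
[Unit]
Description=Example service running directly off a disk image

[Service]
RootImage=/var/lib/images/foobar.raw
ExecStart=/usr/bin/mydaemon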

Use #6: Provisioning disk images

systemd provides various tools that can run operations provisioning disk images in an "offline" mode. Specifically:

systemd-tmpfiles

With the --image= switch systemd-tmpfiles can directly operate on a disk image, and for example create all directories and other inodes defined in its declarative configuration files included in the image. This can be useful for example to set up the /var/ or /etc/ tree according to such configuration before first boot.

systemd-sysusers

Similarly, the --image= switch of systemd-sysusers tells the tool to read the declarative system user specifications included in the image and synthesize system users from them, writing them to the /etc/passwd (and related) files in the image. This is useful for provisioning these users before the first boot, for example to ensure UID/GID numbers are pre-allocated, and such allocations are not delayed until first boot.

systemd-machine-id-setup

The --image= switch of systemd-machine-id-setup may be used to provision a fresh machine ID into /etc/machine-id of a disk image, before first boot.

systemd-firstboot

The --image= switch of systemd-firstboot may be used to set various basic system setting (such as root password, locale information, hostname, …) on the specified disk image, before booting it up.
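Taken together, a hypothetical offline provisioning run over such an image could look like this (the --locale= and --hostname= values are just examples):

$ systemd-tmpfiles --image=foobar.raw --create
$ systemd-sysusers --image=foobar.raw
$ systemd-machine-id-setup --image=foobar.raw
$ systemd-firstboot --image=foobar.raw --locale=en_US.UTF-8 --hostname=appliance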

Use #7: Extracting log information

The journalctl switch --image= may be used to show the journal log data included in a disk image (or, as usual, the specified block device). This is very useful for analyzing failed systems offline, as it gives direct access to the logs without any further manual analysis.
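For example, to list only error-priority messages from the image's journal (assuming the image contains a persistent journal):

$ journalctl --image=foobar.raw -p err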

Use #8: Automatic repartitioning/growing of file systems

The systemd-repart tool may be used to repartition a disk or image in a declarative and additive way. One primary use-case for it is to run during boot on physical or VM systems to grow the root file system to the disk size, or to add in, format, encrypt, populate additional partitions at boot.

With its --image= switch the tool may operate on compliant disk images in an offline mode of operation: it will then read the partition definitions that shall be grown or created off the image itself, and then apply them to the image. This is particularly useful in combination with the --size= switch, which allows growing disk images to a specified size.

Specifically, consider the following work-flow: you download a minimized disk image foobar.raw that contains only the minimized root file system (and maybe an ESP, if you want to boot it on bare-metal, too). You then run systemd-repart --image=foobar.raw --size=15G to enlarge the image to 15G, based on the declarative rules defined in the repart.d/ drop-in files included in the image (this means it can grow the root partition, and/or add in more partitions, for example for /srv, maybe encrypted with a locally generated key). Then, you proceed to boot it up with systemd-nspawn --image=foobar.raw -b, making use of the full 15G.

Versioning + Multi-Arch

Disk images implementing this specifications can carry OS executables in one of three ways:

  1. Only a root file system

  2. Only a /usr/ file system (in which case the root file system is automatically picked as tmpfs).

  3. Both a root and a /usr/ file system (in which case the two are combined, the /usr/ file system mounted into the root file system, and the former possibly in read-only fashion)

They may also contain OS executables for different architectures, permitting "multi-arch" disk images that can safely boot up on multiple CPU architectures. As the root and /usr/ partition type UUIDs are specific to architectures this is easily done by including one such partition for x86-64, and another for aarch64. If the image is now used on an x86-64 system automatically the former partition is used, on aarch64 the latter.

Moreover, these OS executables may be contained in different versions, to implement a simple versioning scheme: when tools such as systemd-nspawn or systemd-gpt-auto-generator dissect a disk image, and they find two or more root or /usr/ partitions of the same type UUID, they will automatically pick the one whose GPT partition label (a 36 character free-form string every GPT partition may have) is the newest according to strverscmp() (OK, truth be told, we don't use strverscmp() as-is, but a modified version with some more modern syntax and semantics, but conceptually identical).

This logic makes it possible to implement a very simple and natural A/B update scheme: an updater can drop multiple versions of the OS into separate root or /usr/ partitions, always updating the partition label to the version included there-in once the download is complete. All of the tools described here will then honour this, and always automatically pick the newest version of the OS.

Verity

When building modern OS appliances, security is highly relevant. Specifically, offline security matters: an attacker with physical access should have a difficult time modifying the OS in a way that isn't noticed. i.e. think of a car or a cell network base station: these appliances are usually parked/deployed in environments attackers can get physical access to: it's essential that in this case the OS itself is sufficiently protected, so that the attacker cannot just mount the OS file system image, make modifications (inserting a backdoor, spying software or similar) and have the system otherwise continue to run without this being immediately detected.

A great way to implement offline security is via Linux' dm-verity subsystem: it allows securely binding immutable disk IO to a single, short trusted hash value: if an attacker manages to modify the disk image offline, the modified disk image won't match the trusted hash anymore, and will not be trusted (depending on policy this then just results in IO errors being generated, or automatic reboot/power-off).

The Discoverable Partitions Specification declares how to include Verity validation data in disk images, and how to relate them to the file systems they protect, thus making it very easy to deploy and work with such protected images. For example, systemd-nspawn supports a --root-hash= switch, which accepts the Verity root hash and then will automatically assemble dm-verity with this, automatically matching up the payload and verity partitions. (Alternatively, just place a .roothash file next to the image file.)

Future

The above already is a powerful tool set for working with disk images. However, there are some more areas I'd like to extend this logic to:

bootctl

Similar to the other tools mentioned above, bootctl (which is a tool to interface with the boot loader, and install/update systemd's own EFI boot loader sd-boot) should learn a --image= switch, to make installation of the boot loader on disk images easy and natural. It would automatically find the ESP and other relevant partitions in the image, and copy the boot loader binaries into them (or update them).

coredumpctl

Similar to the existing journalctl --image= logic the coredumpctl tool should also gain an --image= switch for extracting coredumps from compliant disk images. The combination of journalctl --image= and coredumpctl --image= would make it exceptionally easy to work with OS disk images of appliances and extracting logging and debugging information from them after failures.

And that's all for now. Please refer to the specification and the man pages for further details. If your distribution's installer does not yet tag the GPT partition it creates with the right GPT type UUIDs, consider asking them to do so.

Thank you for your time.

10 Jun 2021 10:00pm GMT

Fedora Community Blog: Introduce yourself Outreachy


Hello everyone!

I'm Manisha Kanyal, a sophomore B.Tech student in Computer Science & Engineering from India. I'm passionate about open source and software development. The project for which I've been selected as an Outreachy intern is "Improve Fedora QA dashboard", and I'm enthusiastic and grateful for this opportunity. It's going to be a great learning experience for me.

My core values

My core values are optimism and faith.

The optimism in me keeps me positive: whatever I do, whether it's related to my field or not, I do it believing the best possible thing will happen, and hoping for it even when it's not likely. My optimistic attitude reflects a belief, or hope, that the outcome of some specific endeavor, or outcomes in general, will be positive, favorable, and desirable. That is what keeps me believing in good things and is very helpful to me.

I tried twice to get my application accepted and didn't make it the first time, but my faith never let me get demotivated by that fact. I kept learning after that, kept putting in the effort, and finally this year brought what I was hoping for. Yes, I got selected; I made it. That made me believe that no matter what, even if you are a beginner, if you have faith in yourself you can reach great heights.

What motivated me to apply to Outreachy?

Outreachy is a great opportunity for me to prove myself worthy of being in this industry. When I heard about it for the first time, I got really excited, as one of my friends told me how helpful this opportunity is for people like me who are underrepresented in their fields.

I wanted to clear my application for Outreachy because I was excited to connect with other women in computing fields, to share my experience, and to be exposed to open source. This is a great opportunity for me to get to know other women in my field and to hear their stories, as well as to share my own during and after Outreachy. This motivated me a lot to grab this opportunity, as meeting inspirational figures at Outreachy will be a great boost for me to try harder and have faith in myself as a woman in tech. Part of me regrets not knowing enough about Outreachy in my freshman year, but now I have this opportunity and I couldn't be happier.


The post Introduce yourself Outreachy appeared first on Fedora Community Blog.

10 Jun 2021 5:19pm GMT

Maxim Burgerhout: Using proper FreeIPA certificates on Cockpit

Cockpit and FreeIPA

A couple of years ago, I did a video on Youtube on using FreeIPA / IdM certificates in Cockpit. According to some comments (that I only saw way after the fact…), for some people, my way of doing that didn't work.

Therefore, I redid the video for RHEL7 and RHEL8, connected to IdM from RHEL8. This should work with recent Fedora as well, since I'm using that at home :)

How it works

SELinux

Both on RHEL7 and RHEL8, the certmonger process that is actually "in charge" of getting the certificates, cannot write to /etc/cockpit/ws-certs.d due to SELinux. Therefore, before we tell it to go fetch certificates through ipa-getcert, we need to tweak SELinux a bit.

The following command works on RHEL7, RHEL8 and recent Fedora and relabels /etc/cockpit/ws-certs.d to cert_t instead of etc_t. This makes it possible for certmonger to write there.

semanage fcontext -a -t cert_t "/etc/cockpit/ws-certs.d(/.*)?"
restorecon -FvR /etc/cockpit/ws-certs.d

RHEL7

On RHEL7, cockpit expects a combined file for the certificate and key information, so we need to concatenate what we get from certmonger before we give it to cockpit.

We can pass ipa-getcert a post-save command that is issued after storing the certificate, but that can only be a single command. Therefore we use a script:

#!/bin/bash

name=$1

cat /etc/pki/tls/certs/${name}.cert /etc/pki/tls/private/${name}.key > /etc/cockpit/ws-certs.d/50-${name}.cert
chown root:cockpit-ws /etc/cockpit/ws-certs.d/50-${name}.cert
chmod 0640 /etc/cockpit/ws-certs.d/50-${name}.cert

After we issue that command, we can request the certificate:

ipa-getcert request -f /etc/pki/tls/certs/$(hostname -f).cert -k /etc/pki/tls/private/$(hostname -f).key -D $(hostname -f) -C "/usr/local/sbin/cockpit_certs.sh $(hostname -f)" -K host/$(hostname -f)

This should result in a certificate in /etc/cockpit/ws-certs.d that we'll never have to touch again :)

RHEL8

On RHEL8 and recent Fedora, we don't need a script to concatenate the key and the certificate, because recent cockpit can handle two separate files for them.

Therefore, we only have to issue the ipa-getcert command:

ipa-getcert request -f /etc/cockpit/ws-certs.d/$(hostname -f).cert -k /etc/cockpit/ws-certs.d/$(hostname -f).key -D $(hostname -f) -K host/$(hostname -f) -m 0640  -o root:cockpit-ws -O root:root -M 0644

This again should result in a certificate that we'll never have to touch again until we decommission this machine!

Hope this helps!

Video: https://www.youtube.com/watch?v=W26rWtEqToc

10 Jun 2021 4:39pm GMT

Peter Czanik: The syslog-ng Insider 2021-06: Alerting; EoL technologies; Google Summer of Code;

Dear syslog-ng users,

This is the 92nd issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.

NEWS

First steps of sending alerts to Discord and others from syslog-ng: http() and Apprise

A returning question I get is: "I see, that you can send alerts from syslog-ng to Slack and Telegram, but do you happen to support XYZ?" Replace XYZ with Discord and countless others. Up until recently, my regular answer has been: "Take a look at the Slack destination of syslog-ng, and based on that, you can add support for your favorite service". Then I learned about Apprise, a notification library for Python, supporting dozens of different services. This blog is the first part of a series. It covers how to send log messages to Discord using the http() destination of syslog-ng and an initial try at using Apprise for alerting.

https://www.syslog-ng.com/community/b/blog/posts/first-steps-of-sending-alerts-to-discord-and-others-from-syslog-ng-http-and-apprise

Changes in technologies supported by syslog-ng: Python 2, CentOS 6 & Co.

Technology is continuously evolving. There are regular changes in platforms running syslog-ng: old technologies disappear, and new technologies are introduced. While we try to provide stability and continuity to our users, we also need to adapt. Python 2 reached its end of life a year ago, CentOS 6 in November 2020. Using Java-based drivers has been problematic for many, so they were mostly replaced with native implementations. From this blog you can learn about recent changes affecting syslog-ng development and packaging.

https://www.syslog-ng.com/community/b/blog/posts/changes-in-technologies-supported-by-syslog-ng-python-2-centos-6-co

Google Summer of Code 2021

This year, the syslog-ng team participates in Google Summer of Code (GSoC) again as a mentoring organization. Two students paid by GSoC work on syslog-ng under the mentoring of syslog-ng developers. One of the students works on MacOS support, including the new ARM-based systems, while the other one is working on a new regular expression parser:

https://summerofcode.withgoogle.com/organizations/5548293561516032/

WEBINARS


Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

10 Jun 2021 10:09am GMT

Fedora Magazine: Use cpulimit to free up your CPU

The recommended tool for managing system resources on Linux systems is cgroups. While very powerful in terms of what sorts of limits can be tuned (CPU, memory, disk I/O, network, etc.), configuring cgroups is non-trivial. The nice command has been available since 1973. But it only adjusts the scheduling priority among processes that are competing for time on a processor. The nice command will not limit the percentage of CPU cycles that a process can consume per unit of time. The cpulimit command provides the best of both worlds. It limits the percentage of CPU cycles that a process can allocate per unit of time and it is relatively easy to invoke.

The cpulimit command is mainly useful for long-running and CPU-intensive processes. Compiling software and converting videos are common examples of long-running processes that can max out a computer's CPU. Limiting the CPU usage of such processes will free up processor time for use by other tasks that may be running on the computer. Limiting CPU-intensive processes will also reduce the power consumption, heat output, and possibly the fan noise of the system. The trade-off for limiting a process's CPU usage is that it will require more time to run to completion.

Install cpulimit

The cpulimit command is available in the default Fedora Linux repositories. Run the following command to install cpulimit on a Fedora Linux system.

$ sudo dnf install cpulimit

View the documentation for cpulimit

The cpulimit package does not come with a man page. Use the following command to view cpulimit's built-in documentation. The output is provided below. But you may want to run the command on your own system in case the options have changed since this article was written.

$ cpulimit --help
Usage: cpulimit [OPTIONS…] TARGET
   OPTIONS
      -l, --limit=N percentage of cpu allowed from 0 to 800 (required)
      -v, --verbose show control statistics
      -z, --lazy exit if there is no target process, or if it dies
      -i, --include-children limit also the children processes
      -h, --help display this help and exit
   TARGET must be exactly one of these:
      -p, --pid=N pid of the process (implies -z)
      -e, --exe=FILE name of the executable program file or path name
      COMMAND [ARGS] run this command and limit it (implies -z)

A demonstration

To demonstrate using the cpulimit command, a contrived, computationally-intensive Python script is provided below. The script is run first with no limit and then with a limit of 50%. It computes the value of the 42nd Fibonacci number. The script is run as a child process of the time command in both cases to show the total time that was required to compute the answer.

$ /bin/time -f '(computed in %e seconds)' /bin/python -c 'f = lambda n: n if n<2 else f(n-1)+f(n-2); print(f(42), end=" ")'
267914296 (computed in 51.80 seconds)
$ /bin/cpulimit -i -l 50 /bin/time -f '(computed in %e seconds)' /bin/python -c 'f = lambda n: n if n<2 else f(n-1)+f(n-2); print(f(42), end=" ")'
267914296 (computed in 127.38 seconds)

You might hear the CPU fan on your PC rev up when running the first version of the command. But you should not when running the second version. The first version of the command is not CPU limited but it should not cause your PC to become bogged down. It is written in such a way that it can only use at most one CPU. Most modern PCs have multiple CPUs and can simultaneously run other tasks without difficulty when one of the CPUs is 100% busy. To verify that the first command is maxing out one of your processors, run the top command in a separate terminal window and press the 1 key. Press the Q key to quit the top command.

Setting a limit above 100% is only meaningful on a program that is capable of task parallelism. For such programs, each increment of 100% represents full utilization of a CPU (200% = 2 CPUs, 300% = 3 CPUs, etc.).
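For instance, a hypothetical parallel build could be capped at two CPUs' worth of cycles (the -i flag makes the limit cover the compiler processes that make spawns):

$ cpulimit -i -l 200 make -j 4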

Notice that the -i option has been passed to the cpulimit command in the above example. This is necessary because the command to be limited is not a direct child process of the cpulimit command. Rather it is a child process of the time command which in turn is a child process of the cpulimit command. Without the -i option, cpulimit would only limit the time command.

Final notes

If you want to limit a graphical application that you start from a desktop icon, copy the application's .desktop file (often located under the /usr/share/applications directory) to your ~/.local/share/applications directory and modify the Exec line accordingly.
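For example, assuming a hypothetical application whose original line was Exec=/usr/bin/converter %f, the modified line would prepend the limit:

Exec=cpulimit -i -l 50 /usr/bin/converter %f

Then run the following command to apply the changes.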

$ update-desktop-database ~/.local/share/applications

10 Jun 2021 8:00am GMT