02 Feb 2023


Richard W.M. Jones: Fedora now has frame pointers

Fedora now has frame pointers. I don't want to dwell on the how of this; it was a somewhat controversial decision and you can read all about it here. But I do want to say a bit about the why, and how it makes performance analysis so much easier.

Recently we've been looking at a performance problem in qemu. To try to understand this I've been looking at FlameGraphs all day, like this one:

[Figure: FlameGraph of the qemu workload]

FlameGraphs rely on the Linux tool perf being able to collect stack traces. The stack traces start in the kernel and go up through userspace, often for dozens or even hundreds of frames. They must be collected quickly (my 1 minute long trace has nearly half a million samples) and accurately.
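For reference, a FlameGraph like the one above is typically produced with perf plus the scripts from Brendan Gregg's FlameGraph repository. A minimal sketch (the sampling frequency, duration and file names here are arbitrary, not the ones used for this investigation):

# sample all CPUs at 99 Hz for 60 seconds, recording call graphs
perf record -F 99 -a -g -- sleep 60
# fold the stacks and render the SVG
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > flamegraph.svg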

Perf (or actually I think it's some component of the kernel) has various methods to unwind the stack. It can use frame pointers, kernel ORC information or DWARF debug information. The thing is that DWARF unwinding (the only userspace option that doesn't use frame pointers) is really unreliable. In fact it has such serious problems that it's not really usable at all.
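For what it's worth, perf lets you pick the unwinder explicitly with --call-graph; a sketch, with a placeholder pid:

# frame-pointer unwinding (cheap, but needs the whole stack built with frame pointers)
perf record --call-graph fp -p <qemu pid>
# DWARF unwinding (copies a chunk of user stack per sample and unwinds it afterwards)
perf record --call-graph dwarf -p <qemu pid>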

For example, here is a broken stack trace from Fedora 37 (with full debuginfo installed):

[Figure: FlameGraph showing a broken stack trace on Fedora 37]

Notice that we go from the qemu-img program, through an "[unknown]" frame, into zlib's inflate.

In the same trace we get completely detached frames too (which are wrong):

[Figure: FlameGraph showing detached frames]

Upgrading zlib to F38 (with frame pointers) shows what this should look like:

[Figure: FlameGraph of the same trace with zlib from Fedora 38]

Another common problem with lack of frame pointers can be seen in this trace from Fedora 37:

[Figure: FlameGraph of the workload on Fedora 37, without frame pointers]

It looks like it might be OK, until you compare it to the same workload using Fedora 38 libraries:

[Figure: FlameGraph of the same workload using Fedora 38 libraries]

Look at those beautiful towering peaks! What seems to be happening (I don't know why) is that stack traces start in the wrong place when you don't have frame pointers (note that FlameGraphs show stack traces upside down, with the starting point in the kernel shown at the top). Also, if you look closely, you'll notice missed frames in the first one, like the "direct" call to __libc_action which actually goes through an intermediate frame.

Before Fedora 38 the only way to get good stack traces was to recompile your software and all of its dependencies with frame pointers, a massive pain in the neck and a major barrier to entry when investigating performance problems.
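For comparison, "recompiling with frame pointers" essentially means adding the relevant compiler flags everywhere, roughly what the Fedora 38 change now does in the distro default build flags (shown here purely as an illustration):

gcc -O2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -o myprog myprog.c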

With Fedora 38, it's simply a matter of using the regular libraries, installing debuginfo if you want (it does still add detail), and you can start using perf straight away by following Brendan Gregg's tutorials.

02 Feb 2023 7:33pm GMT

Fedora Community Blog: Outreachy Summer’23: Call for Projects and Mentors!

The Fedora Project is participating in the upcoming round of Outreachy. We need more project ideas and mentors! The last day to propose a project or to apply as a general mentor is February 24, 2023, at 4pm UTC.

Outreachy provides a unique opportunity for underrepresented groups to gain valuable experience in open source and access to a supportive community of mentors and peers. By participating in this program, the Fedora community can help create a more diverse and inclusive tech community.

If you have a project idea for the upcoming round of Outreachy, please open a ticket in the mentored projects repository. You can also volunteer to be a mentor for a project that's not yours. As a supporting mentor, you will guide interns through the completion of the project.

A good project proposal makes all the difference. It saves time for both the mentors and the applicants.

What makes a good project proposal

The Mentored Projects Coordinators will review your ideas and help you prep your project proposal to be submitted to Outreachy.

How to participate

Project Mentor

Signing up as a mentor is a commitment. Before signing up, please consider the following:

Please read through the mentor-faq page from Outreachy.

General Mentor

We are also looking for general mentors to facilitate communication, feedback, and evaluation with the interns working on the selected projects.

Submit your proposals

Please submit your project ideas and mentorship availability as soon as possible. The last date for project idea submission is February 24, 2023.

Mentoring can be a fulfilling pursuit. It is a great opportunity to contribute to the community and shape the future of Fedora by mentoring a talented intern who will work on your project. Don't miss out on this exciting opportunity to make a difference in the Fedora community and the tech industry as a whole. Together, we can make the open-source community even more diverse and inclusive.

The post Outreachy Summer'23: Call for Projects and Mentors! appeared first on Fedora Community Blog.

02 Feb 2023 4:51pm GMT

Matthew Garrett: Blocking free API access to Twitter doesn't stop abuse

In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole bunch of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.

There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?

To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.

The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice; it's about trying to consolidate control of the platform.


02 Feb 2023 10:21am GMT

Fedora Community Blog: Fedora Code of Conduct Report 2022

We publish a summary report of Code of Conduct activity each year. This provides transparency to the community. It also shows that we take our Code of Conduct seriously. In 2022, warnings and moderations increased over the previous year, with a slight reduction in total reports.

How'd it go in 2022?

We had a small decrease (about 10%) in the number of Code of Conduct reports opened in 2022 versus 2021. After three years of closely tracking our Code of Conduct incident management, it seems that we continue to hover around 20-25 reports per year. Although we saw the number of reports double in 2020 over 2019, that was the year we implemented our current and more comprehensive Code of Conduct. We believe this indicates a positive thing: the new Code of Conduct is more accessible, easier to use, and people feel safer reporting.

While the number of warnings and moderations in 2022 increased over 2021, we did not issue any suspensions or bans. This reinforces our theory from our 2021 report that people are more comfortable opening reports, no matter the severity of the incident. Meanwhile, we are all still feeling the effects of a global pandemic and widespread socio-political division, and Fedora isn't entirely separate from the world. As we mention later in this report, remember to be kind to each other, speak with empathy, and think before you type - everyone has (often invisible) challenges to overcome.

Stats

Year  Tickets Opened  Tickets Closed  Warnings Issued  Moderations Issued  Suspensions Issued  Bans Issued
2020  20              16              8                4                   2                   0
2021  23              24              2                1                   0                   1
2022  21              24              6                3                   0                   0

Looking forward to 2023

If you have witnessed or been a part of a situation that you believe violates Fedora's Code of Conduct, please open a private ticket on the Code of Conduct repo or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee. Remember that opening a CoC ticket does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn't okay, and you don't want to make a big deal… open that ticket anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day to day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the process

Fedora Project's Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin W. Flory; Marie Nordin (on a transitional basis); and the Red Hat legal team, as appropriate. We are working on solidifying a process to officially add more community members to the Committee.

The post Fedora Code of Conduct Report 2022 appeared first on Fedora Community Blog.

02 Feb 2023 8:00am GMT

01 Feb 2023


Fedora Badges: New badge: CentOS Connect 2023 Attendee !

CentOS Connect 2023 Attendee: You connected with CentOS in 2023.

01 Feb 2023 12:08pm GMT

Fedora Magazine: Automatically decrypt your disk using TPM2

This article demonstrates how to configure clevis and systemd-cryptenroll using a Trusted Platform Module 2 chip to automatically decrypt your LUKS-encrypted partitions at boot.

If you just want to get automatic decryption going you may skip directly to the Prerequisites section.

Motivation

Disk encryption protects your data (private keys and critical documents) from someone with direct access to your hardware. Think of selling your notebook / smartphone or it being stolen by an opportunistic evil actor. Any data, even if "deleted", is recoverable and hence may fall into the hands of an unknown third party.

Disk encryption does not protect your data from access on the running system. For example, disk encryption does not protect your data from access by malware running as your user or in kernel space. It's already decrypted at that point.

Entering the passphrase to decrypt the disk at boot can become quite tedious. On modern systems a secure hardware chip called "TPM" (Trusted Platform Module) can store a secret and automatically decrypt your disk. This is an alternative factor, not a second factor. Keep that in mind. Done right, this is an alternative with a level of security similar to a passphrase.

Background

A TPM2 chip is a little hardware module inside your device which basically provides APIs for either WRITE-only or READ-only information. This way you might write a secret onto it, but you can never read it out later (but the TPM may use it later internally). Or you write info at one point that you only read out later. The TPM2 provides something called PCRs (Platform Configuration Registers). These registers take SHA1 or SHA256 hashes and contain measurements used to assert integrity of, for example, the UEFI configuration.

Secure Boot is enabled or disabled in the system's UEFI. Among other things, Secure Boot computes hashes of every component in the boot chain (UEFI and its configuration, bootloader, etc.) and chains them together such that a change in one of those components changes the computed and stored hashes in all following PCRs. This way you can build up trust about the environment you are in. Having a measure of the trustworthiness of your environment is useful, for example, when decrypting your disk. The UEFI Secure Boot specification defines PCRs 0 - 7. Everything beyond that is free for the OS and applications to use.

A summary of what is measured into which PCRs according to the spec

Some examples of what is measured into which PCR
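If you are curious what your own PCRs currently contain, the tpm2-tools package can dump them. A sketch (the PCR selection is just an example):

sudo dnf install tpm2-tools
sudo tpm2_pcrread sha256:0,1,4,5,7,9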

A tool called clevis generates a new decryption secret for the LUKS encrypted disk, stores it in the TPM2 chip and configures the TPM2 to only return the secret if the PCR state matches the one at configuration time. Clevis will attempt to retrieve the secret and automatically decrypt the disk at boot time only if the state is as expected.

Security implications

As you establish an alternative unlock method using only the on-board hardware of your platform, you have to trust your platform manufacturer to do their job right. This is a delicate topic. There is trust in a secure hardware and firmware design. Then there is trust that the UEFI, bootloader, kernel, initramfs, etc. are all unmodified. Combined you expect a trustworthy environment where it is OK to automatically decrypt the disk.

That being said, you have to trust (or better, verify) that the manufacturer did not mess anything up in the overall platform design for this to be considered a fairly safe decryption alternative. There are a range of cases where things did not work out as planned. For example, security researchers showed that BitLocker on a Lenovo notebook used unencrypted SPI communication with the TPM2, leaking the decryption secret in plain text without even altering the system, and that BitLocker relied on the native encryption features of SSD drives that you can bypass through a factory reset.

These examples are all about BitLocker, but they should make it clear that if the overall design is broken, then the secret is accessible and this alternative method is less secure than a passphrase only present in your head (and somewhere safe like a password manager). On the other hand, keep in mind that in most cases elaborate research and attacks to access a drive's data are not worth the effort for an opportunistic bad actor. Additionally, not having to enter a passphrase on every boot should help adoption of this technology, as it is transparent to the user while still adding hurdles to unwanted access.

Prerequisites

First check that:

Clevis is where the magic happens. It's a tool you use in the running OS to bind the TPM2 as an alternative decryption method and use it inside the initramfs to read the decryption secret from the TPM2.

Check that secure boot is enabled. The output of dmesg should look like this:

$ dmesg | grep Secure
[    0.000000] secureboot: Secure boot enabled
[    0.000000] Kernel is locked down from EFI Secure Boot mode; see man kernel_lockdown.7
[    0.005537] secureboot: Secure boot enabled
[    1.582598] integrity: Loaded X.509 cert 'Fedora Secure Boot CA: fde32599c2d61db1bf5807335d7b20e4cd963b42'
[   35.382910] Bluetooth: hci0: Secure boot is enabled

Check dmesg for the presence of a TPM2 chip:

$ dmesg | grep TPM
[    0.005598] ACPI: TPM2 0x000000005D757000 00004C (v04 DELL   Dell Inc 00000002      01000013)

Install the clevis dependencies and regenerate your initramfs using dracut.

sudo dnf install clevis clevis-luks clevis-dracut clevis-udisks2 clevis-systemd
sudo dracut -fv --regenerate-all
sudo systemctl reboot

The reboot is important to get the correct PCR measurements based on the new initramfs image used for the next step.

Configure clevis

Bind the LUKS-encrypted partition to the TPM2 chip: point clevis at your (root) LUKS partition and specify the PCRs it should use.

Enter your current LUKS passphrase when asked. The process uses this to generate a new independent secret that will tie your LUKS partition to the TPM2 for use as an alternative decryption method. So if it does not work you will still have the option to enter your decryption passphrase directly.

sudo clevis luks bind -d /dev/nvme... tpm2 '{"pcr_ids":"1,4,5,7,9"}'

As mentioned previously, PCRs 1, 4 and 5 change when booting into another system such as a live disk. PCR 7 tracks the current UEFI Secure Boot policy and PCR 9 changes if the initramfs loaded via EFI changes.

Note: If you just want to protect the LUKS passphrase from live images but don't care about more "elaborate" attacks such as altering the unsigned initramfs on the unencrypted boot partition, then you might omit PCR 9 and save yourself the trouble of rebinding on updates.
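In that case the bind command is the same as above, just without PCR 9:

sudo clevis luks bind -d /dev/nvme... tpm2 '{"pcr_ids":"1,4,5,7"}'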

Automatically decrypt additional partitions

For secondary encrypted partitions, use /etc/crypttab.

Use systemd-cryptenroll to register the disk for systemd to unlock:

sudo systemd-cryptenroll /dev/nvme0n1... --tpm2-device=auto --tpm2-pcrs=1,4,5,7,9

Then reflect that config in your /etc/crypttab by appending the options tpm2-device=auto,tpm2-pcrs=1,4,5,7,9.
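Such an /etc/crypttab entry would look roughly like this (the name and UUID are placeholders for your own partition):

luks-<UUID>  UUID=<UUID>  none  tpm2-device=auto,tpm2-pcrs=1,4,5,7,9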

Unbind, rebind and edit

List all current bindings of a device:

$ sudo clevis luks list -d /dev/nvme0n1... tpm2
1: tpm2 '{"hash":"sha256","key":"ecc","pcr_bank":"sha256","pcr_ids":"0,1,2,3,4,5,7,9"}'

Unbind a device:

sudo clevis luks unbind -d /dev/nvme0n1... -s 1 tpm2

The -s parameter specifies the slot of the alternative secret for this disk stored in the TPM. It should be 1 if you always unbind before binding again.

Regenerate binding, in case the PCRs have changed:

sudo clevis luks regen -d /dev/nvme0n1... -s 1 tpm2

Edit the configuration of a device:

sudo clevis luks edit -d /dev/nvme0n1... -s 1 -c '{"pcr_ids":"0,1,2,3,4,5,7,9"}'

Troubleshooting

Disk decryption passphrase prompt shows at boot, but goes away after a while:

Add a sleep command to the systemd-ask-password-plymouth.service file using systemctl edit to avoid requests to the TPM before its kernel module is loaded:

[Service]
ExecStartPre=/bin/sleep 10
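One way to create this drop-in (assuming the unit name matches on your system) is:

sudo systemctl edit systemd-ask-password-plymouth.service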

Add the following to the config file /etc/dracut.conf.d/systemd-ask-password-plymouth.conf:

install_items+=" /etc/systemd/system/systemd-ask-password-plymouth.service.d/override.conf "

Then regenerate dracut via sudo dracut -fv --regenerate-all.

Reboot and then regenerate the binding:

sudo systemctl reboot
...
sudo clevis luks regen -d /dev/nvme0n1... -s 1

Resources

01 Feb 2023 8:00am GMT

31 Jan 2023


Kevin Fenzi: error: rpmdbNextIterator: skipping in Fedora 38+

I've seen this question enough times recently to decide to just write up a blog post on it and point people here. :) 

If you are running Fedora 38 (currently rawhide, but will be branching off soon) and you are getting errors like this from dnf and/or rpm:

Running transaction check
error: rpmdbNextIterator: skipping h#   47749 
Header V4 DSA/SHA1 Signature, key ID 7fac5991: BAD
Header SHA256 digest: OK
Header SHA1 digest: OK
error: rpmdbNextIterator: skipping h#   47749 
Header V4 DSA/SHA1 Signature, key ID 7fac5991: BAD
Header SHA256 digest: OK
Header SHA1 digest: OK

This post is for you. What is going on here? And how can you get around the problem?

Well, what happened is that rpm used to have internal code to handle signatures on packages. This code was really something rpm didn't want to have to maintain, so they switched recently to sequoia, a new OpenPGP implementation written in Rust. With this switch, sequoia actually honors the site-wide Fedora crypto policy (which the old internal rpm code did not).

Back in Fedora 33, the distro wide crypto policy was updated to disallow SHA-1 as a signature algorithm. See https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2 for more information.

You might wonder: if Fedora changed the distro-wide crypto policy to disallow SHA-1 in signatures, why didn't they update things so nothing uses SHA-1 now? Well, the short answer is: they did. No rpms that Fedora produces now use SHA-1 signatures. However, some third-party rpms do. One of the big ones that many people are hitting is google's "chrome" web browser. There are probably others.
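If you want to find out which of your installed packages are affected, one rough approach is to print each package's header signature information and filter for SHA1 (a sketch; the query tags and the loose grep are my own choice, not from the original post):

rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE} %{RSAHEADER:pgpsig} %{DSAHEADER:pgpsig}\n' | grep -i sha1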

Now that we know _why_ this is happening, what can you do? Well, first you need to do something so dnf and rpm allow you to remove/update/change your package set. You can do this by (temporarily!) allowing SHA-1 signatures with:

sudo update-crypto-policies --set DEFAULT:SHA1

rpm and dnf should now work for you again. You might remove packages that have SHA-1 signatures and switch to alternatives. Or wait until google updates their signing key (the current one is from 2007). Once you have done what you need to do, you can set the policy back to its sane default:

sudo update-crypto-policies --set DEFAULT

31 Jan 2023 9:50pm GMT

Martin Stransky: Firefox, VA-API and NVIDIA on Fedora 37

[Image: NVIDIA logo. Image comes from https://www.nvidia.com/en-us/about-nvidia/legal-info/logo-brand-usage/]

Some time ago I borrowed an NVIDIA GeForce GTX 1070 from my employer (Red Hat) and I finally managed to put it into a workstation in place of my own AMD RX 6600 XT.

I installed the proprietary drivers from rpmfusion and to my surprise everything worked smoothly (except Atom on XWayland). Both Wayland and X11 Gnome sessions popped up, and Firefox picked up the HW-accelerated backend (WebRender) with DMABuf support, so it was time to check VA-API.

Thanks to the nvidia-vaapi-driver by Stephen "elFarto", Firefox may directly decode video on NVIDIA hardware. The driver translates VA-API calls from Firefox to VDPAU as used by NVIDIA. I think you also need decently fresh NVIDIA drivers which support DMABuf (which is used to transfer decoded images between Firefox processes and render them as GL textures).

I hit three bumps on the road. The first one is the Firefox RDD sandbox. Firefox runs media decode in an extra process (RDD) which restricts what the decoder can access. It was adjusted by Mozilla folks for VA-API decode on Intel/AMD, but NVIDIA needs some extra tweaks. Right now you need to disable the sandbox with the MOZ_DISABLE_RDD_SANDBOX=1 env variable.

The next one is a bug in the recent NVIDIA 525 driver series (which I got from rpmfusion), so I needed to use direct mode (whatever that is).
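Putting the two workarounds together, launching Firefox from a terminal looks roughly like this (NVD_BACKEND=direct is, as far as I can tell, the nvidia-vaapi-driver switch for direct mode; treat the exact variable name as an assumption and check the wiki how-to below):

MOZ_DISABLE_RDD_SANDBOX=1 NVD_BACKEND=direct firefox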

The last issue may be in Firefox itself. Broken graphics hardware may freeze the whole browser on start or produce coredumps on every start. That's being worked on as Bug 1813500 and Bug 1787182.

There's a complete how-to for Firefox/NVIDIA/Fedora 37 available on Fedora wiki.

Nvidia-vaapi-driver playback performance is similar to what I see on AMD/Intel. It also correctly handles decoding of intermediate frames, which is a recent AMD NAVI2 bug (Bug 1772028, Bug 1802844), so the playback is smooth and I haven't seen any glitches or CPU usage peaks.

There are a few options for how NVIDIA users can run it.

If you have a workstation (or laptop) with one NVIDIA graphics card, it's quite simple. On Fedora 37 you boot on the nouveau drivers and then install the NVIDIA drivers from rpmfusion. With the proprietary drivers both Wayland and X11 Gnome sessions work fine (anyone to test KDE?) and hardware acceleration is enabled.

A slightly different scenario comes with an integrated Intel device and a secondary NVIDIA one. I don't see any reason to use NVIDIA here, as Intel works pretty well with Wayland, VA-API and X11/EGL. But if you really want to set NVIDIA as primary, X11 may be better for you. Wayland on a secondary NVIDIA GPU is supported only by the Sway Wayland compositor, which is a bit geeky (or I'm just too lazy to learn new shortcuts and get used to a new environment).

Anyway, if you need to use NVIDIA as your primary GPU, there's hope for you (and it's not due to me). Give it a try and report any bugs at the Mozilla NVIDIA VA-API bug tracker.

31 Jan 2023 8:19pm GMT

Daniel Vrátil: QCoro 0.8.0 Release Announcement

QCoro 0.8.0 Release Announcement

This is a rather small release with only two new features and one small improvement.

Big thank you to Xstrahl Inc. who sponsored development of new features included in this release and of QCoro in general.

And as always, thank you to everyone who reported issues and contributed to QCoro. Your help is much appreciated!

The original release announcement on qcoro.dvratil.cz.

Improved QCoro::waitFor()

Up until this version, QCoro::waitFor() was only usable with QCoro::Task<T>. Starting with QCoro 0.8.0, it is possible to use it with any type that satisfies the Awaitable concept. The concept has also been fixed so that it is satisfied not only by types with await_resume(), await_suspend() and await_ready() member functions, but also by types with a member operator co_await() or a non-member operator co_await() function.

QCoro::sleepFor() and QCoro::sleepUntil()

Working on both the QCoro codebase and some third-party code bases that use QCoro, it's clear that there's a use case for a simple coroutine that will sleep for a specified amount of time (or until a specified time point). It is especially useful in tests, where simulating delays, especially in asynchronous code, is common.

Previously I used to create small coroutines like this:

QCoro::Task<> timer(std::chrono::milliseconds timeout) {
    QTimer timer;
    timer.setSingleShot(true);
    timer.start(timeout);
    co_await timer;
}

Now we can do the same simply by using QCoro::sleepFor().

Read the documentation for QCoro::sleepFor() and QCoro::sleepUntil() for more details.

QCoro::moveToThread()

A small helper coroutine that allows part of a function to be executed in the context of another thread.

void App::runSlowOperation(QThread *helperThread) {
    // Still on the main thread
    ui->statusLabel.setText(tr("Running"));

    const QString input = ui->userInput.text();

    co_await QCoro::moveToThread(helperThread);
    // Now we are running in the context of the helper thread, the main thread is not blocked

    // It is safe to use `input` which was created in another thread
    doSomeComplexCalculation(input);

    // Move the execution back to the main thread
    co_await QCoro::moveToThread(this->thread());
    // Runs on the main thread again
    ui->statusLabel.setText(tr("Done"));
}

Read the documentation for QCoro::moveToThread for more details.

Full changelog

See changelog on Github

31 Jan 2023 7:53pm GMT

Fedora Community Blog: Help decide how to handle tags on merged Ask Fedora + Fedora Discussion

We are in the process of merging our user-support forum Ask Fedora into Fedora Discussion - our site geared towards contributor and project team conversations. Historically, we've used tags differently on those two sites. This means we need to figure out an approach for combining them. Please take a look at the Adding -team to (almost) all of the tags in Project Discussion? thread and add your thoughts.

History of tags

These sites were initially separate because of the different target audiences. We learned that this is often confusing to people, and it's hard to re-categorize posts that land in the wrong place. Plus, making a strong distinction between users and contributors has never been the Fedora way. Almost all contributors are Fedora Linux users too, and we always welcome all of our users to get more involved. So: together!

In the current Fedora Discussion site organization, tags are roughly the equivalent of team mailing lists. If you're interested in documentation, you subscribe to the #docs tag. If you're interested in marketing, subscribe to the #marketing tag. Then you get notifications (including email, if that's your preference) when someone posts about those things.

On Ask, we've used tags more loosely, and in a more traditional way: tags describe the content. For example, if a question is about Fedora Workstation, it might include the #workstation tag.

Merging

This is all fine, but we have a concern for the merged site. After the merger, will topics from Ask Fedora overwhelm those from Project Discussion? If you're following the #docs tag, you'll get notifications for both support questions and team discussion. If you're getting email notifications, you can filter on the X-Discourse-Category and X-Discourse-Tags headers to distinguish them, but that doesn't work for the site's notification menu.

I proposed that existing tags get a suffix, becoming #docs-team, #workstation-wg, #kde-sig, etc., to distinguish them. But, folks from several teams have now told me they prefer having the same tag. And I can see the simplicity and elegance of that, too.

So why is this on the Community Blog rather than just Fedora Discussion? Because I'd like to encourage wider use, and so even if you aren't following the forum a lot currently, I'd like your input! Please take a look at the Adding -team to (almost) all of the tags in Project Discussion? thread and add your thoughts.

The post Help decide how to handle tags on merged Ask Fedora + Fedora Discussion appeared first on Fedora Community Blog.

31 Jan 2023 8:00am GMT

Joe Brockmeier: Poking at Distrobox

I'm probably late to the party, but Distrobox has to be one of the best open source projects to drop in the past few years. No matter which Linux distro I standardize on, there's inevitably something I want to run that runs best or only on another distro. Or I just want to dip into a shell for $distro real quick to verify whether a certain package exists, or what the package name is, the default config for an application, etc.

Or I'd like to run two instances of an application with different profiles, without having to set up a whole virtual machine.

Distrobox provides an easy answer for many of those use cases. Distrobox lets you run "any Linux distribution inside your terminal." There's a slight asterisk next to "any" in the form of "the distribution has to have a ready-made Docker container you can pull." But the distros I'd like to run that don't have an official container are few and far between. The only exception that comes to mind is Slackware, which has a container on Docker Hub, but it hasn't been updated in about 7 years.

But, if you want to run a mainstream-ish Linux distro on top of your existing distro, Distrobox has you covered. I set up Distrobox this evening on top of a laptop running Pop!_OS and had Fedora 37 running in a container with Chromium playing YouTube videos in about 5 minutes. Pretty snazzy.
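For the record, the Fedora 37 container setup is essentially just a create and an enter (a sketch; the image reference and the container name are whatever you prefer):

distrobox create --name fedora-37 --image registry.fedoraproject.org/fedora:37
distrobox enter fedora-37
# then, inside the container
sudo dnf install chromium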

In theory, Distrobox will even let you run other architectures like ARM64 on top of AMD64, but that remains to be seen. I tried out a Fedora ARM64 image on Distrobox but it seems almost hopelessly slow. Nice party trick, perhaps not that useful in real life.

But if you want to run AMD64 containers on AMD64 hosts, it seems perfectly suitable. I'm going to be using this much more in the future and will write up any useful tips or tricks I run into.

31 Jan 2023 3:37am GMT

30 Jan 2023


The NeuroFedora Blog: Next Open NeuroFedora meeting: 30 January 1300 UTC


Photo by William White on Unsplash.


Happy new year!!

Please join us at the next regular Open NeuroFedora team meeting on Monday 30 January at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2023-01-30'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

30 Jan 2023 9:42am GMT

Josh Bressers: Episode 360 – Memory safety and the NSA

Josh and Kurt talk about the NSA guidance on memory safety issues. The TL;DR is to stop using C. We discuss why C has so many problems, why we can't fix C, and what some alternatives look like. Even the alternatives have their own set of issues and there are many options, but the one thing we can agree on is we have to stop using C.

Listen: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_360_Memory_safety_and_the_NSA.mp3

Show Notes

30 Jan 2023 12:00am GMT

28 Jan 2023


Packit Team: 2022 for Packit

Packit project in 2022

As you will see in the following paragraphs, the year 2022 was really fruitful for the Packit project. Without further ado, let's take a look at what the Packit team accomplished last year:

Fedora automation

We have made a huge improvement in downstream automation. At the beginning of the year, we finished the workflow and you are now able to use Packit to get your release from upstream via dist-git and Koji to Bodhi.

28 Jan 2023 10:58am GMT

Kushal Das: Introducing Tugpgp

At Sunet, we have heavy OpenPGP usage. But, every time a new employee joins, it takes hours (and sometimes days for some remote folks) to have their Yubikey + OpenPGP setup ready.

Final screen

Tugpgp is a small application built with these specific requirements for creating OpenPGP keys & uploading to Yubikeys as required in Sunet. The requirements are the following:

We have an Apple Silicon dmg and an AppImage (for Ubuntu 20.04 onwards) on the release page. This is my first ever AppImage build; the application still needs pcscd running on the host system. I tested it on Debian 11 and Fedora 37 with Yubikey 4 & Yubikey 5.

Oh, there is also a specific command line argument if you really want to save the private key :) But, you will have to find it yourself :).

demo gif

If you are looking for the generic all purpose application which will allow everyone of us to deal with OpenPGP keys and Yubikeys, then you should check the upcoming release of Tumpa, we have a complete redesign done there (after proper user research done by professionals).

28 Jan 2023 8:32am GMT

27 Jan 2023


Matthew Garrett: Further adventures in Apple PKCS#11 land

After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto"

error appeared in the ssh output. Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.

Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.

Except it doesn't under macOS. Running under a debugger and setting a breakpoint on EC_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:

nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
  return ECDSA_do_sign_new(dgst,dgst_len,eckey);
}
return ECDSA_do_sign_ex(dgst,dgst_len,NULL,NULL,eckey);

What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.


27 Jan 2023 11:39pm GMT