06 Apr 2026
Fedora People
Fedora Infrastructure Status: Matrix server maintenance
06 Apr 2026 11:15am GMT
02 Apr 2026
Fedora Community Blog: Fedora Code of Conduct Report 2024

The Fedora Project's Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project's social fabric.
This post covers reports received during the 2024 calendar year. The 2023 and 2024 annual report posts were published with delays due to changes in the membership of the Code of Conduct Committee and rebalancing of existing work. We publish the reports now to provide transparency, insight, and awareness into the health of the community.
How'd it go in 2024
The Fedora community continues to see a mix of hurdles in collaborations within the community, off-platform brand management, and a significant focus on moderator accountability.
2024 included reports about external social media posts made outside of our core community spaces. The Fedora Code of Conduct Committee (CoCC) was no longer just "putting out fires" over individual differences; we actively set expectations for how contributors represent Fedora on the web and in its communities. To support this mission and bring fresh perspectives to our work, we expanded the committee by welcoming three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.
Overall, the 2024 data shows a significant decrease in new reports opened compared to previous years. Fewer warnings and moderations were issued as well. The data matches the experience of the Code of Conduct Committee: the case load from new reports was finally beginning to decrease in volume, and the incidents we received in 2024 were typically less intense and time-consuming than in prior years. This supports the Committee's hypothesis that reports would decrease as time passed since the global pandemic. The 2021 initiative to modernize the Fedora Code of Conduct for sustainability was a successful effort.
| Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued |
|---|---|---|---|---|---|---|
| 2024 | 11 | 11 | 1 | 0 | 1 | 0 |
| 2023 | 17 | 17 | 5 | 3 | 1 | 1 |
| 2022 | 21 | 24 | 6 | 3 | 0 | 0 |
| 2021 | 23 | 24 | 2 | 1 | 0 | 1 |
| 2020 | 20 | 16 | 8 | 4 | 2 | 0 |
Looking forward to 2025
If you witness or are part of a situation that violates Fedora's Code of Conduct, please open a private report on the Code of Conduct repo or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.
Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn't okay, and you don't want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.
Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!
About the Committee
The Fedora Project's Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. In 2024, the CoCC expanded its membership by adding three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.
The post Fedora Code of Conduct Report 2024 appeared first on Fedora Community Blog.
02 Apr 2026 12:00pm GMT
Brian (bex) Exelbierd: A Few More Thoughts on Sashiko and the Kernel
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
I kept thinking about the LWN article and the basic analysis I did yesterday. I kept coming back to one of the central themes of the mailing list conversation: false positives. Sashiko's false positive rate is debated but, I'm gathering, is pretty good by LLM standards. Still, there was a complaint about the number of false positives, focused on the burden that false positives put on contributors and maintainers.
I wanted to understand if the false positive rate, and by extension the burden, was higher from an LLM than from human reviewers. To run that experiment, I needed to define what a false positive actually is. That turns out to be the interesting part.
The Definition Problem
My initial naïve definition of a false positive was any substantial comment that doesn't yield a code change. If you said something and the code wasn't changed, then even if it generated future work, it wasn't applicable to this change now. The obvious hole is a comment that raises a future code change coming in a different patch set. But it felt like this number could be directionally accurate for understanding if we get more false positives or not.
The deeper problem is that "comment that doesn't change code" isn't really what false positive means in review. The act of questioning code can lead to greater confidence in the patch being proposed. It can reveal unrelated changes that are required or surface features that should also be considered. Not a negative outcome, but potentially not relevant to the actual patch set under discussion. So I tried reframing from false positives to burden: any comment that doesn't result in a code change and was actually read by the contributor or maintainer is burdensome. It doesn't matter whether a human or LLM reviewer raised the comment. If it didn't result in a change, it was work or thought they didn't need to do. For example, a back-and-forth conversation to prove the correctness of something that was already correct.
But that definition fails too, and the reason it fails is the real insight.
If two humans are engaged in a review process and there's a back-and-forth conversation that does not result in a code change, most likely neither human would describe this as unnecessary burden. They would probably describe it as work they had to do or effort they expended, but both humans have likely come out of that conversation changed. Greater understanding of different parts of the system. Better ability to express oneself so the questions aren't raised next time. Increased confidence in the correctness of a solution. There is a change assumed to have happened to one or both of the people.
A review conversation that doesn't change code but changes the people having it isn't a false positive. It only looks like one when the reviewer is a machine that won't be changed.
For what it's worth, I did look at existing studies of human review false positive rates. In my brief and non-exhaustive look, I've come to believe they aren't useful here, not only because the question is moot when both parties come out changed, but because many are flawed or non-comparable. Some are in domains where reviewers are generalists talking to a specialist, unlikely in the kernel. Others misclassify trivial exchanges like "LGTM" or "thanks" as false positives. And none have been conducted over the kernel.
When the Reviewer Is a Machine
When a finding or probing question is raised by an LLM agent, the assumption that both parties come out changed breaks down.
Probing questions may not even be welcome from an LLM agent. One could never really be sure whether this was a "humans normally say this kind of thing in this context" situation versus an "I see something that maybe is wrong" situation.
But the more important part is this: if a human has to read a false positive, they have to put in their side of the work to validate, verify, explore, or test the question, and ultimately determine that it's not an issue. They are unlikely to be changed in the absence of an exchange. And we know for a fact that the machine is not going to be changed.
In theory, we could wire up a training loop for Sashiko to take these back-and-forth exchanges and learn from them to reduce the incidence of false positives. I suspect it would have very little impact overall. First, the analysis showed that there's almost no situation where the same bug is being surfaced over and over again. The machine is unlikely to run into the same finding and then have learned that finding isn't valid. Second, the machine is not arguing from a position of true reasoning, therefore it is never clear if it backed down because it decided to be an agreeable sycophant or because the additional commentary made the correctness argument airtight.
The Social Problem
At its true core, I think the conversation around false positives, based on what I read in the article, is likely a social problem, like most truly intractable problems in computer science.
If an LLM agent reviews my contribution and the maintainer insists that I address the review, I am not only forced to do what turns out, in the case of a false positive, to be unnecessary work, but forced to performatively defend myself against a machine. Or worse, argue with the machine performatively. The combination of unnecessary work that generates no value, being forced to do that work while knowing it generates no value, and then having to do still more work to demonstrate that the work produced no value, is a line too far for most of our psyches.
A Possible Path
Setting aside the separate question of whether LLM ability will continue improving and therefore the number of false positives will go down, the core question of how to deal with false positives needs to be addressed at a social level.
In a space like the kernel, I would argue it may be appropriate to allow those whose code has been reviewed to react to LLM-generated findings with something along the lines of "smells like bullshit" and not have to go through the performative exercise of proving it's bullshit, because we trust their instinct.
That said, it is probably worth creating some kind of long-term profile or scoreboard, both of those being the wrong words, for a contributor, so that they can over time understand if their intuition has blind spots. If an LLM is consistently raising a certain kind of feedback that they are dismissing, but we later discover a bug and have to fix it, or if human reviewers come back and their synthesis of their own experience plus what the LLM provided leads them to believe there's a real, demonstrable problem, that's a learning opportunity for the contributor.
The challenge is that there are no systems I'm aware of in modern use where these kinds of profiles are ever not used abusively against those profiled. Which is yet another social problem.
02 Apr 2026 11:50am GMT
01 Apr 2026
Brian (bex) Exelbierd: What’s Actually in a Sashiko Review?
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions. And, yes, I'm aware of the date. The data is real - and in 40 minutes it won't be April 1 anymore, at least where I live.
Daroc Alden's LWN article on Sashiko captures a real tension in the Linux kernel community. Andrew Morton wants to make Sashiko - an LLM-based patch reviewer - a mandatory part of the memory management workflow. Lorenzo Stoakes and others say it's too noisy and adds burden to already-overworked maintainers. Morton points to a ~60% hit rate on actual bugs. Stoakes points out that's per-review, not per-comment, so the individual false positive rate is worse.
Reading the thread, I kept wondering about two specific mechanisms that could be driving maintainer frustration beyond the false positive question.
Two Hypotheses
Hypothesis 1: Reviewers are getting told about bugs they didn't create. Sashiko's review protocol explicitly instructs the LLM to read surrounding code, not just the diff. That's good review practice - but it means the tool might flag pre-existing bugs in code the patch author merely touched, putting those problems in their inbox.
Hypothesis 2: The same pre-existing bugs surface repeatedly. If a known issue in a subsystem doesn't get fixed between review runs, every patch touching nearby code could trigger the same finding. That would create a steady drip of duplicate noise across the mailing list.
I pulled data from Sashiko's public API and tested both.
Method
I fetched all 406 patchsets from the linux-mm mailing list and a 500-patchset sample from LKML as of April 1, 2026. Of the 252 linux-mm reviews with findings, 204 had full review text available for analysis.
I had an LLM write Python scripts to classify the 466 extracted findings into three categories using deterministic regex pattern matching - roughly 50 weighted patterns that look for specific language in the review text. The classification code runs the same way every time on the same input. An LLM wrote it, but the scanning itself involves no inferencing.
The three categories:
- Patch-specific - about the actual changed lines. Patterns match phrases like "this patch adds," "the new code," "missing check."
- Interaction - about how new code interacts with existing code. Patterns match references to callers, callees, lock state, concurrent access.
- Pre-existing - about bugs in surrounding code not introduced by the patch. Patterns match "not introduced by this patch," "pre-existing," "noticed while reviewing."
When a finding matched multiple categories, the most specific won: pre-existing > interaction > patch-specific. About 7% of findings didn't match any pattern and were excluded from further analysis.
For duplication, the scripts computed pairwise text similarity across reviews within the same subsystem. Again - deterministic comparison, LLM-authored code.
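As a toy illustration of the precedence rule above, the classification step amounts to a cascade of pattern checks where the most specific category wins. The patterns below are illustrative stand-ins of my own, not the actual ~50 weighted patterns from the analysis:

```shell
classify() {
  # Most specific category wins: pre-existing > interaction > patch-specific
  if   grep -qiE "not introduced by this patch|pre-existing|noticed while reviewing" <<<"$1"; then
    echo "pre-existing"
  elif grep -qiE "caller|callee|lock|concurrent" <<<"$1"; then
    echo "interaction"
  elif grep -qiE "this patch adds|the new code|missing check" <<<"$1"; then
    echo "patch-specific"
  else
    echo "unclassified"
  fi
}

classify "the new code takes the lock without disabling interrupts"
# matches both patch-specific and interaction patterns; precedence picks "interaction"
```

A finding mentioning both "the new code" and a lock lands in the interaction bucket, which is exactly the tie-breaking behavior described above.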
The full methodology, including the code used, a cached copy of the reviews, and the classification patterns and caveats, is in the analysis document in github.com/bexelbie/sashiko-analysis.
What the Data Shows
Hypothesis 2 is dead. Cross-review duplication was essentially zero. Across 16 LKML subsystems with 5+ reviewed patches each, only one pair of findings exceeded the similarity threshold - and that was the same author submitting similar patches, not the same bug recurring. Whatever is driving maintainer frustration, it's not the same findings appearing over and over. While it is possible this would surface in a larger sample size, I personally find it unlikely.
Hypothesis 1 is partially supported, but the story is in the distribution. About 9% of findings explicitly discuss pre-existing issues. Averaged across all reviews, that's roughly 12 words per review - barely noticeable.
But the average is misleading. The distribution is bimodal: 81% of reviews contain zero pre-existing findings. The other 19% contain pre-existing findings that constitute 28% of the review on average, adding roughly 19 lines to what the patch author reads. A few reviews are 75-82% pre-existing content.
Here's the breakdown of what an average review with findings contains:
| Category | % of findings | Avg words |
|---|---|---|
| About the submitted patch | 72% | 74 |
| Patch × existing code interactions | 12% | 103 |
| Pre-existing issues | 9% | 62 |
| Unclassified | 8% | 47 |
The interaction findings (category 2) are worth calling out. They're the longest - 103 words on average, 39% more than patch-specific findings - because explaining how new code breaks against existing behavior requires describing that behavior. These are arguably the hardest findings for a human reviewer to produce and exactly where a tool with codebase-wide context adds value.
Who Owns This Bug Now?
The sharpest question the data raises isn't statistical. It's social.
When you submit a patch to linux-mm and get a Sashiko review, there's roughly a 1-in-5 chance that a meaningful chunk of that review describes a bug you didn't write - a race, a leak, a use-after-free in the code you're modifying. Some of these are trivial (typos in nearby comments). Some are substantive.
Either way, the review has put it in your inbox. You are now the person who has been told about it.
Morton's position - "don't add bugs" as Rule #1 - makes sense if the tool's output is mostly about your patch. And it is: ~85% of findings concern either the submitted change or its direct interactions with existing code. But 1 in 5 reviewees is also getting handed someone else's problem, with an implicit expectation to respond.
Stoakes's concern about maintainer burden lands differently when you see the bimodal distribution. The average review is manageable. The tail is not.
What This Doesn't Answer
This analysis classifies scope - whether a finding is about the submitted patch, its interactions, or pre-existing code. It does not measure correctness. The core Morton/Stoakes disagreement is about false positive rates within on-topic findings - how often Sashiko flags something in your patch that turns out to be wrong. That question requires domain expertise to evaluate each finding individually, and this data doesn't go there.
The classification also has limits. The regex patterns achieve ~93% coverage but aren't semantic - borderline cases between categories get decided by pattern specificity, not understanding. The proportions are directionally sound but not precise.
The full data, methodology, and API references are in the repository, github.com/bexelbie/sashiko-analysis if anyone wants to reproduce or extend this.
01 Apr 2026 9:20pm GMT
Peter Czanik: My new toy: April 1 syslog-ng performance tests
01 Apr 2026 10:35am GMT
Jeremy Cline: Fedora's aarch64 images support Secure Boot
01 Apr 2026 8:51am GMT
Fedora Magazine: Make a private CA with step-ca

In this article you will learn how TLS (Transport Layer Security) and SSH (Secure SHell) use public/private key-pairs to authenticate web servers you visit and Linux machines you log in to. You will also learn how the TLS framework installed by default in mainstream web browsers fails to prevent MITM (Man In The Middle) attacks in critical ways. Then we will walk through setting up a private .FEDORA TLD (Top Level Domain), setting up your own private CA with the smallstep package, and using the acme-tiny package to issue certificates for a website under that private TLD.
I will not cover setting up a simple "Hello World" website using your favorite web server packaged with Fedora. This needs to be up and running on HTTP to follow along. For this article, the website will be named hello.fedora.
Sadly, we will also explain how this does not completely solve the MITM problem - but this is already a big article, so feel free to skip the background and motivation and go directly to the HowTo sections below.
How Public Keys Prevent Man-In-The-Middle Attacks
While NSA director Admiral Bobby Inman revealed that intelligence agencies had been aware of two-key, or public-key, cryptography since the 1960s, the first unclassified paper was published by Whitfield Diffie and Martin E. Hellman in 1976. In college, I remember playing with cryptosystems based on the knapsack problem. These had various vulnerabilities. What revolutionized the field was publication of the RSA algorithm in 1977. I vividly remember where I sat in the college library when I read the paper. There was some controversy over "you can't patent algorithms"; however, RSA patented their implementation (which is already protected by copyright - but that is another discussion). Yes, you can whip up a one-line Perl implementation in a few minutes (we all did) - but a secure implementation that does not leak the private key through various side channels is NOT trivial.
The original concept of public keys was to look up a recipient's pubkey in a directory, and use it to encrypt a message that only the possessor of the corresponding private key can decrypt. This can also be used to authenticate a correspondent via a protocol that proves they hold the corresponding private key. The basic idea is to encrypt a random token with a pubkey, the recipient decrypts the token and sends it back encrypted by your pubkey. The details are not trivial. The primary concern is MITM attacks. SSH and TLS support several widely accepted algorithms for authentication and key exchange.
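The challenge-response idea can be sketched with the openssl command line. This is a bare illustration of the concept, not a real protocol (real ones add nonces, padding rules, and replay protection); the file names are mine:

```shell
# Generate a keypair for the party being authenticated
openssl genpkey -algorithm RSA -out priv.pem -pkeyopt rsa_keygen_bits:2048
openssl pkey -in priv.pem -pubout -out pub.pem

# Verifier: encrypt a random token to the public key
openssl rand -hex 16 > token.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in token.txt -out challenge.bin

# Prover: decrypt the challenge with the private key and send the result back
openssl pkeyutl -decrypt -inkey priv.pem -in challenge.bin -out response.txt

# Verifier: the response matches only if the prover holds the private key
cmp -s token.txt response.txt && echo "authenticated"
```

Only the holder of priv.pem can recover the token, so a matching response proves possession of the private key without ever transmitting it.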
The Directory of Pubkeys is Critical
If you think about it, that "directory" is all important. Suppose you have a "secure" phone app (without naming names) that uses a public directory to map telephone number to pubkey. Whoever runs that directory can return their own pubkey (likely a different one for each telephone number), decrypt the data, and send it on, re-encrypted to the real pubkey of the intended recipient (and the same for the other direction). I.e. - the classic MITM attack. This is why such secure applications usually provide a way to verify you have the real pubkey via an in-person meeting or alternate medium.
So how do you know the real pubkey for a secure (https) website? Websites provide a "certificate" saying "this pubkey is for these domain names" (and other information we are not concerned with here). Well, anyone can create such a certificate - in fact we will do so in this article - so how do you know it is truthful? The certificate is "signed" by a Certificate Authority (CA). Pubkeys can be used to sign data. For RSA, the basic concept is to compute a secure "hash" (e.g. SHA256) of the certificate data, and "decrypt" it using the private key of the CA. The signature can be verified by using the pubkey of the CA to "encrypt" the result, which should match the hash of the signed data. RSA is nice in that decryption and encryption are symmetrical: verifying a signature is the same operation as encrypting the signature to the owner of the private key. So now, instead of every web user maintaining a private database of pubkeys for domain names, the browser has a list of trusted CAs which sign website certificates after verifying them in some way. In case a private key is compromised, CAs publish a Revocation List (which regular people rarely use), and TLS certificates always have an expiration date.
Note that CAs can certify data other than domain names, like the name of a company or individual. Commercial CAs generally charge a premium for this, but there are also non-profit CAs like cacert.org that certify personal details via in-person meetings.
How Mainstream Browsers Know Which CAs to Trust
Regular Joes ("normies") do not keep track of all this, so where does that "list of trusted CAs" come from? Well, there is a CA and Browser forum with representatives from popular browser software makers and commercial CAs. They maintain a list of trusted CAs, and changes are voted on in public meetings with minutes published on their web page. Fedora installs this list in /usr/share/pki. Browsers may have their own copy. Users may add additional trusted CAs to /usr/share/pki or /etc/pki/ca-trust and browsers may have their own way of adding additional trusted CAs.
This all sounds well and good, BUT. The critical flaw could be called serial reliability: the trusted CAs are trusted for any domain, so any trusted CA (including any you add) can forge a certificate for any website. DNS vulnerabilities (cache poisoning and such) are beyond the scope of this article. But we will set up a private CA which you could use to forge any website cert and fool anyone you convince to trust your CA (provided you can also hack their DNS and/or IP routing). The cabforum is very careful about their list. During the hostilities in Ukraine, forum CAs stopped certifying .RU domains (the ISO TLD for Russia). Russia promptly put up their own national CAs, which anyone can add to their browser trust store. Normies were warned NOT to do this, as the Russian CAs could then forge certs for any domain. But a moment's thought reveals that ANY cabforum CA could go "rogue" and do the same thing. It only takes one.
There are solutions to this blanket trust problem, but that will require another article.
Create a private TLD with bind
For illustration, we will create the .FEDORA TLD. Everyone following along will create a different instance of that TLD, and hostnames under .FEDORA will resolve to different IPs (or NXDOMAIN) depending on whose DNS server you point that TLD at. This was the motivation for creating ICANN - a worldwide centralized DNS root (list of official TLDs). This provides a consistent namespace at the expense of absolute power (to cancel domains and TLDs) invested in ICANN. Before ICANN, admins all maintained their own DNS root, and periodically updated (manually or automatically) nameservers for well known TLDs like .COM etc. ISO defined an official list of TLDs, including country code TLDs (like .US). That worked well. The problem came with more obscure TLDs like .FREE. Companies trying to be "cool" were upset that not all customers got the same IPs for .FREE hostnames. Also admins liked having "someone else" maintain the DNS root. Hence, ICANN. There is also Opennic which likewise has "someone else" (volunteers) maintain a root zone, with fallback to ICANN, and has its own "forum" (existing TLDs vote) to approve new TLDs.
Here is a bind zonefile for .FEDORA:
$TTL 2H
; hello.fedora
@ IN SOA ns1 hostadmin.hello.fedora. (
2025122600 ; serial
1H ; refresh
15M ; retry
14D ; expire
6H ; default_ttl
)
@ IN NS ns1.fedora.
@ IN TXT "v=spf1 -all"
hello IN A 192.168.100.31
ns1 IN A 192.168.100.31
ca IN A 192.168.100.31
But that was a bait and switch. Setting up DNS for a private TLD is its own article. If you know how to add such a zone to your self hosted DNS - then do so. For the rest, we'll use an even older hostname/IP map that predates DNS: as root, edit the file /etc/hosts on the system you will run step-ca on and append these lines:
# smallstep article
192.168.100.31 hello.fedora
192.168.100.31 ca.fedora
Replace 192.168.100.31 with the IP of the system you are trying all this out on. Step-ca must be able to look up the hello.fedora hostname it is certifying in order to run the ACME protocol. We will use the /.well-known/acme-challenge method, which does not require real DNS. The system you run acme-tiny on also needs to resolve ca.fedora.
Run a private CA with step-ca
If the smallstep package is still under review when you read this, you'll need to enable the copr repo (otherwise skip this step):
sudo dnf copr enable @fedora-review/fedora-review-2418762-smallstep
Create root CA
First, we need to create our root CA. In production, this should be on a separate offline machine. For small operations, the secondary CAs can be automated, and you sign the certificates for these secondaries manually with the root CA. I would keep the root CA password on paper - paper can't be hacked (but watch out for cameras). Do NOT skip the password for the root CA. Some number of systems will trust that CA for any domain; if the private key leaks, you end up with a situation like the one Dell faced in 2015 with eDellRoot.
Let's put the manual root CA in /etc/pki/CA and generate the root cert. Openssl will ask you for a key password, and for what x509 calls "subject identifiers". I left the state and email blank, and set city to Fedora City, organization to Fedora Project, organizational unit to ca, and common name to ca.example.org. The "-days 3652" sets the expiration to 10 years from now (including leap days). The second command shows the "Issuer" information end-users will see when they ask for the issuer in an app like Firefox. The common name should normally be the hostname of the root CA, but it doesn't really matter when the root CA is offline - and example.org is coincidentally offline by convention.
$ sudo mkdir /etc/pki/CA
$ cd /etc/pki/CA
$ sudo install --mode=644 /dev/stdin root_ca.fedora.ext <<EOF
subjectAltName=DNS:ca.example.org
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:1
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
EOF
$ sudo mkdir -m 0700 private
$ sudo openssl req -new -keyout private/root_ca.key -out root_ca.csr
...
$ sudo openssl x509 -req -in root_ca.csr -key private/root_ca.key -out root_ca.crt -days 3652 -sha256 -extfile root_ca.fedora.ext
Enter pass phrase for private/root_ca.key:
Certificate request self-signature ok
subject=C=US, L=Fedora City, O=Fedora Project, OU=ca, CN=ca.example.org
Create intermediate certificate and install smallstep
Then install the smallstep package with step-ca binary and supporting files:
$ sudo dnf install smallstep
The package installs a skeleton config for a step-ca service in /var/lib/step-ca. Let's flesh out the config as the step-ca user and generate an intermediate cert request ("csr").
$ cd /var/lib/step-ca
$ sudo -u step-ca bash -l
$ ls
certs config db secrets templates
$ cp /etc/pki/CA/root_ca.crt certs
$ openssl req -new -keyout secrets/intermediate_ca.key -out intermediate_ca.csr
...
$ nano config/ca.json
$ exit
Again, openssl will ask for subject identifiers. I used the same values as for the root CA, but with the common name ca.fedora. Use your favorite text editor to edit config/ca.json; "nano" is beginner friendly. Change MYCABAL to FEDORA and ca.mycabal.org to ca.fedora. If you provided a password for intermediate_ca.key, put it in the "password" field of ca.json. Do not set the password in ca.json to the empty string: that makes step-ca try to prompt for it at startup, which is not allowed under systemd and fails with an error opening /dev/tty.
For the intermediate cert, the common name is important. Smallstep will auto-generate a host cert for "ca.fedora" (it is, after all, a certificate authority), and it must match the hostname ACME clients use to sign certs. Now we need to sign the intermediate cert with the root CA. 1825 days is 5 years; intermediate certs should be shorter lived than the root CA, but not too short if you are manually re-signing them.
$ cd /etc/pki/CA
$ sudo install --mode=644 /dev/stdin ca.fedora.ext << EOF
subjectAltName=DNS:ca.fedora
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
EOF
$ sudo openssl x509 -req -in /var/lib/step-ca/intermediate_ca.csr -CA root_ca.crt -CAkey private/root_ca.key -CAcreateserial -out intermediate_ca.crt -days 1825 -sha256 -extfile ca.fedora.ext
$ sudo -u step-ca cp intermediate_ca.crt /var/lib/step-ca/certs
$ sudo systemctl start step-ca
$ sudo systemctl status step-ca
...
Mar 31 15:18:56 test.gathman.org step-ca[2814912]: 2026/03/31 15:18:56 Serving HTTPS on :9000 ...
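What the root → intermediate signing above accomplishes can be checked in miniature with a throwaway pair. This sketch is separate from the real CA files built above; all names here are mine:

```shell
# Self-signed throwaway root
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-root.key \
  -out demo-root.crt -subj "/CN=demo-root" -days 30

# CSR for a subordinate cert, then sign it with the root
openssl req -new -newkey rsa:2048 -nodes -keyout demo-int.key \
  -out demo-int.csr -subj "/CN=demo-int"
openssl x509 -req -in demo-int.csr -CA demo-root.crt -CAkey demo-root.key \
  -CAcreateserial -out demo-int.crt -days 15

# The chain verifies: demo-int.crt leads back to a trusted root
openssl verify -CAfile demo-root.crt demo-int.crt
```

The same `openssl verify -CAfile` invocation, pointed at root_ca.crt and intermediate_ca.crt, is a quick sanity check that your real chain is wired up correctly before starting step-ca.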
Use httpd to serve hello.fedora web page
Running a web server was a prerequisite. I'll use Apache as an example, and hopefully users of nginx and others can translate. First, create /etc/httpd/conf.d/hello.conf:
<VirtualHost *:80>
ServerName hello.fedora
DocumentRoot "/var/www/html/hello"
#RedirectMatch ^((?!\/\.well-known\/).*)$ https://hello.fedora$1
<Location "/.well-known/acme-challenge/">
Options -Indexes
Require all granted
</Location>
<Location "/">
Options FollowSymLinks Indexes
Require all granted
</Location>
</VirtualHost>
The redirect is commented out until we have a signed cert. Assuming httpd is already running, use sudo apachectl graceful to load the changes. Then create a simple document at /var/www/html/hello/index.html:
<html>
<head>
<title> Hello Fedora </title>
</head>
<body>
<h1> Hello Fedora! </h1>
</body>
</html>
Use acme-tiny to sign a TLS cert with step-ca
Add private root CA
Acme-tiny needs to trust the root CA to use the ACME service. The step-ca service provides a handy API to fetch the root CA:
$ cd /etc/pki/ca-trust/source/anchors
$ sudo curl https://ca.fedora:9000/roots.pem -o fedora_ca.crt
curl: (60) SSL certificate problem: unable to get local issuer certificate
Oops! Catch-22. You need the root CA to use the handy API that gets the root CA. So we'll have to tell curl to accept the strange root cert. (Or use rsync, cp on the same machine, copy/paste between terminal windows, or some other more secure method.)
$ sudo curl -k https://ca.fedora:9000/roots.pem -o fedora_ca.crt
$ sudo update-ca-trust extract
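Since -k disables verification entirely, it's prudent to compare the SHA-256 fingerprint of the file you fetched against the root fingerprint printed when the CA was initialized (or computed on the CA host itself). A minimal sketch of that comparison, with a throwaway cert standing in for the downloaded roots.pem:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway cert standing in for the fedora_ca.crt fetched with -k
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
    -out fedora_ca.crt -days 30 -subj "/CN=demo-root" 2>/dev/null

# Fingerprint as computed on the CA host and shared out of band
expected=$(openssl x509 -in fedora_ca.crt -noout -fingerprint -sha256)

# Fingerprint of the copy we just "downloaded"
actual=$(openssl x509 -in fedora_ca.crt -noout -fingerprint -sha256)

# Only run update-ca-trust if the two agree
[ "$expected" = "$actual" ] && echo "fingerprint matches"
```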
Now, we are ready to run acme-tiny. Once again, openssl req will prompt for subject identifiers. The only one browsers care about is Common Name, which should be "hello.fedora". However, users may care about the other fields when they use browser features to inspect certs.
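If you'd rather skip the prompts entirely, openssl req also accepts the subject on the command line via -subj. A sketch with throwaway paths (the session below sets an empty key passphrase with -passout; this sketch uses -nodes, an unencrypted key, for brevity):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Non-interactive CSR: -subj supplies the subject fields up front.
# Common Name is the only field browsers check; the rest is optional.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout hello.key -out hello.csr \
    -subj "/O=Example/CN=hello.fedora" 2>/dev/null

# Inspect the subject we requested
openssl req -in hello.csr -noout -subject
```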
$ sudo dnf install acme-tiny
$ sudo apachectl graceful
$ cd /var/lib/acme
$ sudo -u acme bash -l
$ ls
certs csr private
$ /usr/libexec/acme-tiny/sign # NOTE: generates account.key if needed
$ ls private
account.key
$ openssl req -new -passout pass:'' -keyout private/hello.key -out csr/hello.csr
$ /usr/sbin/acme_tiny --account-key private/account.key --csr csr/hello.csr --acme-dir /var/www/challenges/ --ca https://ca.fedora:9000/acme/FEDORA >certs/hello.crt
$ exit
$ sudo nano /etc/httpd/conf.d/hello.conf
Now uncomment the RedirectMatch and append the SSL virtual host definition below to hello.conf. Use apachectl graceful to load the changes.
<VirtualHost *:443>
ServerName hello.fedora:443
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA
DocumentRoot "/var/www/html/hello"
SSLCertificateFile /var/lib/acme/certs/hello.crt
SSLCACertificateFile /var/lib/acme/certs/hello.crt
SSLCertificateKeyFile /var/lib/acme/private/hello.key
CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
<Location "/">
Options FollowSymLinks Indexes
</Location>
</VirtualHost>
The current acme-tiny package auto-renews certs only for the letsencrypt.org CA. That should be extended soon. Meanwhile, feel free to add something hacky. (I'll try to have it look up TLDs in /etc/sysconfig or something to get a custom CA URL.)
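If you do add something hacky in the meantime, at least make it fail safe: only replace the live cert when signing actually succeeded. This renew_cert helper is my own sketch, not part of acme-tiny:

```shell
# renew_cert DEST CMD...: run CMD, capture its stdout, and only move
# the result into place if CMD exited 0 and produced a non-empty file.
renew_cert() {
    dest=$1; shift
    tmpf=$(mktemp "${dest}.XXXXXX") || return 1
    if "$@" > "$tmpf" && [ -s "$tmpf" ]; then
        mv "$tmpf" "$dest"   # same filesystem, so the swap is atomic
    else
        rm -f "$tmpf"
        return 1
    fi
}

# Example, using the acme_tiny invocation from this article, followed
# by an httpd reload to pick up the fresh cert:
# renew_cert /var/lib/acme/certs/hello.crt \
#     /usr/sbin/acme_tiny --account-key private/account.key \
#     --csr csr/hello.csr --acme-dir /var/www/challenges/ \
#     --ca https://ca.fedora:9000/acme/FEDORA \
#   && apachectl graceful
```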
Use a browser to display the web page
On the machine with your web browser, you need two things: the new root CA, and some way to look up names in the .FEDORA TLD, either by pointing DNS at the server you set up with the private zone, or by adding entries for ca.fedora and hello.fedora to /etc/hosts.
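For the /etc/hosts route, the entries are plain name-to-address mappings, for example (192.0.2.10 is a placeholder documentation address; substitute your server's real IP):

```
192.0.2.10   ca.fedora
192.0.2.10   hello.fedora
```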
Now curl should work without -k, and your browser should display https://hello.fedora, although you might have to restart it first. If your browser doesn't read the Fedora ca-trust store on startup, you may need to find a menu option to import the CA.
$ curl https://hello.fedora
<html>
<head>
<title> Hello Fedora </title>
</head>
<body>
<h1> Hello Fedora! </h1>
</body>
</html>
Now that your root CA is up and running, don't lose sight of what could be done if it went rogue. Get lots of people to install it so they can access your cool new TLDs. Then start forging certs for arbitrary web sites, and conquer the world!! Bwa! ha! ha! (A future article can address PKCS#11 and restricting how you trust CAs in browsers and other software.)
01 Apr 2026 8:00am GMT
Matthew Garrett: Self hosting as much of my online presence as practical
01 Apr 2026 2:35am GMT
31 Mar 2026
Fedora People
Fedora Community Blog: Fedora Code of Conduct Report 2023

The Fedora Project's Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project's social fabric.
This post covers reports received in the 2023 calendar year. The 2023 and 2024 annual report posts were published with delays due to changes in membership in the Code of Conduct Committee and rebalancing of existing work. The purpose of publishing the reports now is to provide transparency, insight, and awareness into the health of the community.
How'd it go in 2023
Reflecting on the 17 reports opened in 2023, the Fedora community saw a shift in the incident landscape compared to 2022. While the total number of reports decreased by approximately 19% (17 in 2023 vs. 21 in 2022), the severity of actions taken suggests a year focused on addressing persistent friction and high-impact behavioral issues.
The most notable trend in 2023 was the departure from the "zero-ban" status of 2022. The Committee moved toward more decisive actions, including a permanent account closure for a slur and a suspension for aggressive ban evasion, indicating a lower tolerance for behavior that directly threatens the safety and inclusivity of the community.
| Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued |
| --- | --- | --- | --- | --- | --- | --- |
| 2023 | 17 | 17 | 5 | 3 | 1 | 1 |
| 2022 | 21 | 24 | 6 | 3 | 0 | 0 |
| 2021 | 23 | 24 | 2 | 1 | 0 | 1 |
| 2020 | 20 | 16 | 8 | 4 | 2 | 0 |
While the volume of reports from 2020 to 2022 stabilized around 20 a year, the cases investigated by the Code of Conduct Committee vary widely in severity. Some persistent challenges continue to underline the importance of soft skills like communication and collaboration. Global affairs, politics, and international conflicts also often correlate with conflicts inside the community. These cases often require more care and consideration than other reports.
Overall, the report shows a community safe enough for people to report incidents, including those involving high-profile members. The Code of Conduct Committee aims to humbly protect an environment and community culture where anyone can feel a part of the Friends Foundation of the Four Foundations, and feel safe to be their authentic and genuine self in the community. This is also the first time in several years that the number of reports dropped below 20. That may be a sign of stabilization, as years of backlog and process debt were addressed in 2021, and the intense online pressure-cooker period of the global pandemic finally relented.
Looking forward to 2024
If you witness or are part of a situation that violates Fedora's Code of Conduct, please open a private report on the [Code of Conduct repo] or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.
Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn't okay, and you don't want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.
Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!
About the Committee
Fedora Project's Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community nominated members.
The post Fedora Code of Conduct Report 2023 appeared first on Fedora Community Blog.
31 Mar 2026 12:04pm GMT
Peter Czanik: My new toy: Back to high-end audio
31 Mar 2026 8:12am GMT
Chris Short: OSPO Notes: Open Source Governance — Who Decides, and How
31 Mar 2026 4:00am GMT
Fabio Alessandro Locati: On the value of an automation platform
31 Mar 2026 12:00am GMT
29 Mar 2026
Fedora People
Akashdeep Dhar: Loadouts For Genshin Impact v0.1.15 Released

Hello travelers!
Loadouts for Genshin Impact v0.1.15 is OUT NOW, adding support for recently released characters like Varka and recently released weapons like Gest of the Mighty Wolf from Genshin Impact Luna V or v6.4 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Resources
- Loadouts for Genshin Impact - GitHub
- Loadouts for Genshin Impact - PyPI
- Loadouts for Genshin Impact v0.1.15
Installation
Besides its availability as a repository package on PyPI and as an archived binary on PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.
$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False
Installation command for Fedora Linux
Changelog
- Automated dependency updates for GI Loadouts by @renovate[bot] in #507
- chore(deps): update actions/upload-artifact action to v7 by @renovate[bot] in #508
- Add the recently added character Varka to the GI Loadouts roster by @sdglitched in #511
- Add the recently added weapon Gest of the Mighty Wolf to the GI Loadouts roster by @sdglitched in #512
- Stage the release v0.1.15 for Genshin Impact Luna V (v6.4 Phase 1) by @sdglitched in #513
- Include gi_loadouts/pack directory in the package by @gridhead in #514
- Update the latest test count in README.md by @gridhead in #515
Characters
One character has debuted in this version release.
Varka
Varka is a claymore-wielding Anemo character of five-star quality.


Varka - Workspace and Results
Weapons
One weapon has debuted in this version release.
Appeal
While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
Disclaimer
With an extensive suite of over 1550 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by MiHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
29 Mar 2026 6:30pm GMT
28 Mar 2026
Fedora People
Evgeni Golov: Converting Dovecot password schemes on the fly without (too much) cursing
28 Mar 2026 10:11pm GMT
Vít Smolík: Switching email providers, again
28 Mar 2026 8:00pm GMT
Kevin Fenzi: misc fedora bits last week of march 2026
secure boot signing
Last week we finally got the new secure boot setup fully switched over. We are now signing aarch64 grub2/kernel/fwupd just as we do the x86_64 versions. The aarch64 signed artifacts are in rawhide now, but will move to stable releases as testing permits.
Sadly my Lenovo slim7x doesn't boot correctly with the signed artifacts, I think due to needing a firmware update or manually enrolling the Microsoft certs. I'll try and test more with it when I can, but many other folks are seeing it work fine.
It's been a 7 year journey to get this done. Why so long? A few of the reasons in no particular order:
- At first we were not even sure MS would sign others on aarch64.
- Our old x86_64 setup was smart cards in 2 builders, and we didn't have any easy way to install more in aarch64 builders.
- They stopped making the smart cards we were using.
- There were a number of things that made the fedora aarch64 kernel not work with secure boot, many around the 'lockdown' patches.
- Lack of time from everyone involved.
- Need for someone to write a way to use our normal signing server to sign these things (so we wouldn't need cards in builders).
- Lack of capacity in old smart cards to add new certs.
And probably many more things I have forgotten about.
Feels great to get us in a better place and have signed aarch64 builds!
mass update/reboots
We had a mass update/reboot cycle this last week. It went pretty smoothly this time as we were not applying firmware updates or doing any other work.
We should be all caught up for the freeze next week....
final freeze coming up
Next Tuesday starts the Fedora 44 Final freeze. These are the weeks running up to the Fedora Linux 44 final release. So, if you need to get anything in, do so before Tuesday.
solar fun
So the reason I was offline Thursday was because I was getting solar, battery, and inverter installed here. It's already pretty awesome. Look for a long blog post on it in the next week or so.
whats next?
During this freeze I am hoping to get started on some projects I was already meaning to do, but got busy with the signing work: revamping our backups and moving more stuff to rhel10 (will do staging in the freeze).
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/116308267360944066
28 Mar 2026 4:51pm GMT