27 Mar 2026

Fedora People

Remi Collet: 🎲 PHP version 8.4.20RC1 and 8.5.5RC1


Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the ideal solution for such tests) and as base packages.

RPMs of PHP version 8.5.5RC1 are available

RPMs of PHP version 8.4.20RC1 are available

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security-fix-only mode, so no more RCs will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*
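After installing or updating, a quick sanity check is to print the active versions. This is a sketch, assuming the base `php` package and the `php84`/`php85` Software Collection wrapper commands are installed on your system:

```shell
# Verify which PHP versions are available after installation/update.
php -v       # base (system) packages
php84 -v     # 8.4 Software Collection wrapper
php85 -v     # 8.5 Software Collection wrapper
```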

ℹ️ Notice:

Software Collections (php84, php85)

Base packages (php)

27 Mar 2026 6:38am GMT

26 Mar 2026


Christof Damian: Friday Links 26-11

26 Mar 2026 11:00pm GMT

Peter Czanik: My new toy: Open WebUI first steps

26 Mar 2026 12:42pm GMT

Fedora Infrastructure Status: Matrix server maintenance

26 Mar 2026 11:50am GMT

25 Mar 2026


Fedora Infrastructure Status: Updates and reboots on Fedora infrastructure

25 Mar 2026 10:00pm GMT

Peter Czanik: Compiling syslog-ng on an old Mac

25 Mar 2026 2:43pm GMT

Peter Czanik: My new toy: first steps with AI on Linux

25 Mar 2026 11:48am GMT

Chris Short: OSPO Notes: How to find your community

25 Mar 2026 4:00am GMT

24 Mar 2026


Fedora Community Blog: The forge is our new home.


After a full year of preparation, the Community Linux Engineering (CLE) team is excited to announce that Fedora Forge, powered by Forgejo, is ready for use! We are proud of this modern Open Source platform and what it means for the future of Fedora Infrastructure. While pagure.io has been a vital part of our community for many years, the time has come to retire our homegrown forge and transition to this powerful new tool.

The final cutover is planned for Flock to Fedora 2026. We strongly encourage teams to migrate their projects well before the conference to ensure a smooth transition. The pagure.io migration is only the first step in a broader infrastructure modernization effort. By the 2027 Fedora 46 release, we plan to retire all remaining Pagure instances across the project, including the package source repositories on src.fedoraproject.org. Getting familiar with Fedora Forge now will help ensure your team is ready as the rest of the Fedora ecosystem transitions.

pagure.io users, it is time to migrate!

If you own a project on pagure.io, you must migrate it off before June 2026. We've prepared a Migration Guide. If you're unsure about what's happening, please keep reading.

A Focused Scope for Fedora Forge

Historically, the Fedora Project used pagure.io as a general-purpose public forge where Fedora repositories coexisted with personal projects, unrelated upstream software, and individual portfolios.

The Fedora Forge (powered by Forgejo) intentionally adopts a narrower scope. It is an internal piece of project infrastructure, explicitly provisioned to host the code, documentation, and tooling that directly build, manage, and govern the Fedora Project.

What belongs on Fedora Forge:

What does NOT belong:

Why Migrate Early?

Migrating now avoids the "last-minute bottleneck" and gives your team time to adapt to the new resource limits outlined in the Usage Policy:

Feature Parity & Transparency

We are aware that Forgejo is not a 1:1 clone of Pagure. Most notably, private issues within public repositories are not currently supported in the same way. The CLE team is actively working with the upstream Forgejo community to bridge these functional gaps.

The Migration Roadmap

Other Related Work

The Fedora Council currently has a draft usage policy under consideration, aimed at defining how the new forge instances will be used inside the Fedora Project. Please watch for an additional article here on the Fedora Community Blog that starts the formal feedback process ahead of a Council vote on the policy.

Need help? For technical issues, please open a ticket on the Fedora Infrastructure Tracker or ask in the #fedora-admin Matrix channel.

Technical FAQ

How do authentication and team management work?
Authentication is fully integrated with the Fedora Account System (FAS) via OIDC. Team membership is directly mapped to FAS groups; if you are in a group, your permissions will automatically map to the corresponding Organization/Team on the Forge.

What happens to my API tokens and automation scripts?
Pagure API tokens will not migrate. You must generate new tokens within your account or organization settings on the new Forge and update your scripts to point to the Forgejo API.
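As a hypothetical sketch of the update (the token value and the organization/project names below are placeholders, not real values), a script that previously hit the Pagure API would instead send a Forgejo token against the Gitea-compatible `/api/v1` endpoints:

```shell
# Placeholder token; generate a real one in your account or organization
# settings on the new Forge.
TOKEN="<your-forgejo-token>"

# Fetch repository metadata via the Forgejo (Gitea-compatible) v1 API.
curl -s \
  -H "Authorization: token ${TOKEN}" \
  "https://forge.fedoraproject.org/api/v1/repos/example-org/example-project"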

Will my local git remote URLs break?
Yes. Once your repository is migrated, pushes to pagure.io will be rejected. Update your remotes to the new instance:

git remote set-url origin https://forge.fedoraproject.org/<organization>/<your-project>.git
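To sketch the full sequence in a throwaway repository (the organization and project names here are placeholders, not real projects):

```shell
# Create a scratch repository to demonstrate the remote update.
cd "$(mktemp -d)"
git init -q demo && cd demo

# A clone from the old forge would have a pagure.io remote:
git remote add origin https://pagure.io/example-project.git

# Point it at the new Forge instance instead:
git remote set-url origin https://forge.fedoraproject.org/example-org/example-project.git

# Confirm both fetch and push URLs now target the new host.
git remote -v
```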

Are Issues and PRs migrating with full fidelity?
Yes. As outlined in the documentation, our tools port Pull Requests, Issues, and Issue Dependencies/Assignments. Pagure-specific tags will be mapped to Forgejo Labels.

Where do I go if my project's migration fails?
The CLE team is monitoring the #fedora-forge Matrix channel. Reach out there for help with permission desyncs, missing refs, or pipeline breakages.

The post The forge is our new home. appeared first on Fedora Community Blog.

24 Mar 2026 4:03pm GMT

23 Mar 2026


Brian (bex) Exelbierd: Reflecting on “Warranty Void If Regenerated”


I've seen "Warranty Void if Regenerated" going around, particularly among the subset of my friends who believe "LLMs are slop generators". They typically characterize it as overly optimistic - hopeful, if not downright fantasy.

The "slop generator" position is, in my opinion, demonstrably false, as countless successful code generation outcomes contradict such a sweeping generalization. The dogged pursuit of this position clouds the real concerns with LLMs as they are built and used today. I believe there are legitimate company ethics, environmental, and license/copyright concerns worthy of consideration in this space. I also believe that we are still in a highly emotional place where those concerns tend to be both understated and overstated depending on who is talking.

The story consists of three vignettes told from the perspective of Tom, a post-transition specification repair person who works with farmers. In this universe, all code is generated from specs and average humans are making custom software constantly. Domain experts are needed to refine, debug, and in some cases wholesale write the specifications.

There is also a great discussion of the human impact of this post-transition existence. I encourage you to read it, but I'm not addressing that below - not because it isn't important, but because I want to preserve focus on the "slop generator" drumbeat that feels so misguided.

All in all, I think the piece is well written and that Scott Werner did a great job. This isn't a critique of the writing or the story itself. I also don't know what Scott's perspective is on LLMs, though their public pages and site lead me to believe they are not anti-generative AI.

I'd been harboring a delusion in the back of my mind about trying to write a story about a "machine whisperer". Scott's piece reminded me that I am likely still not a creative writer, and I'm glad for their work here.

My thesis here is simple: this story reads like a set of specification and contract failures. It does not read like evidence that code generation inherently produces "slop" or that opaque code from code generation is inherently a failed concept. To be clear here, this is not a critique of Scott's view, but instead of the "slop generator" view point.

Margaret

Margaret has generated software that pulls in various data sets from both their farm and external sources to predict the best time to harvest. Their latest crop was harvested before it should have been, and Tom realizes that the specification failed to require raising an error if a data source's structure or methodology changed. Instead, the system absorbed the data from an updated methodology and didn't change how it used that data.

This is shown to be a specification problem. The spec as written didn't suggest that changes were possible or that they should be monitored for, so the generated system didn't do that.

While this happens with, I suspect, regularity in hand-coded systems, my point isn't that this is normal. When it happens in a hand-coded system, it is wrong too. And, importantly, it is also a specification error.

There may never have been a specification in the first place and the developer was just expected to figure this out. Depending on their experience and other conditions, they either did … or they didn't. A clearer spec or set of standards (a/k/a a system prompt) would have fixed this in both cases.

Pit Crew

Scott introduces pit crews in this anecdote. These are people who monitor ongoing quality and concerns.

Today we often approximate this with monitoring systems that we hope are checking the right things, perhaps even with real end-to-end live tests running on a regular basis. We don't generally dedicate human teams to it.

Whether we ever hit post-transition or not, this begs for a conversation: is QE/QA solely a pre-ship function, or should we be leveraging that knowledge to monitor delivered software in ways that go deeper than what we typically monitor today? What does the SRE practice in this space look like?

Framed that way, the pit crew in the story is less a bandage for sloppy generated code and more the missing extension of our specifications and contracts into how we watch systems evolve over time.

Ethan

Ethan has generated a multitude of tools and they are all communicating with each other. Ethan is a microservice machine.

Ethan, much like Margaret, has a data feed problem. This time one of their own tools changed its methodology and calculated a value per hundredweight instead of per head. While not stated in the story, this output unit was presumably chosen at generation time because the specification didn't pin it down, and the specification likewise had no way (or likely even a requirement) to flag changes. The downstream tool didn't get a read failure but began using the new value as though it were still per head. This resulted in poor market price prediction.

The story is similar to Margaret's except it is more like when Team A breaks Team B in your own company.

For me it raises the interesting point that while we tend to believe otherwise, in many cases our APIs and data formats are our only true contracts. They operate only at the level where they exist. The internals of our dependencies, or the work of other teams, are opaque, and you could say that they may "regenerate" their code every day of the week and you just have to hope it still works for your consumption and use. You have to rely on them not breaking the contract and ensure the contract provides the guarantees you need.

Choreographer

A choreographer is a post-transition architect. It is, in my opinion, the thing we should all be if we are going to use LLMs to generate code.

Here a choreographer goes through Ethan's systems and defines their interface contracts and layers. They also notice that some tools are unnecessary, while others have formed a sub-network that has no effect. The output of this person's work is a cleaned up system that functions as a whole and not a set of discrete parts.

This is something we already have to do in large systems, and it's something that people generating code still have to do. I suspect that some concepts like Gastown try to push parts of this work into a different layer of tooling. And it may even work.

LLM generation and reasoning capacity keep improving, but none of this eliminates the need for this role or for specification correctness. Specification correctness is something we've basically never had. Even waterfall failed here.

In this sense, the story reads less like an indictment of generation and more like a warning about what happens when we refuse to name, own, and maintain those contracts across a growing system.

Carol

Carol's farm illustrates the ugly mess of things we give automation and then complain about.

In this specific case there is a new irrigation system that is using all of the sensors it has to maintain a 60% moisture level across the farm. This results in under- and over-irrigation in some places because the moisture level in those places is influenced by external factors. The system is doing exactly what it was asked to do. The problem is that the target it was given is a bad fit for the actual farm, not that the generated system is inherently bad.

Note: I am not a farmer, so I am taking this example at face value.

The short version is that drainage is funny in some places, other places are getting more wind, and still others need slightly differing levels based on the actual crop in that spot. None of this data has been provided to the system, and the story makes it clear that most of it is not in any system.

The farmer just understands their land and can look at it and tell you what is going to happen based on 30 years of real history and 30 years of experience. This is also not new. This is the art and practice of both coding and system administration, and we have failed to codify it usefully to date. We shouldn't hold our new system accountable for that, but we also shouldn't pretend that "just write a better spec" is an easy button when so much of the domain is still tacitly known and not shared beyond tribal means.

This is perhaps the one vignette that gives me pause. Even if we can find code generation (it doesn't have to be LLMs) that writes to a specification, we may still be unsuccessful when our measurements, abstractions, and language can't yet capture the thing we actually care about.

Right now we make surgical tweaks to the code to encode these lessons as we learn them. Specifying them in human language is often difficult, and maybe that is the core problem. The boundary here isn't really "hand-written vs generated code", it is between where, as technologists, we have experience stating precisely enough and where we don't have a history of doing that well.

But we work in a precise space. In the case of Carol's farm, Carol and Tom are able to describe the core problems pretty quickly, and I suspect, given time, could come up with data feeds, additional sensors, or equations that describe the issues sufficiently to fix the irrigation system.

It would be hyper-customized to Carol's farm, but in many ways that is what she wants and needs - and it's something we fail to deliver, in general, today. Even here, though, calling the outcome "slop" feels like a category error: the system is faithfully pursuing the narrow, naive target we gave it, not spewing random garbage.

The Real Conversation

I wrote this piece in part because the anti-LLM rhetoric of "they are slop generators" gets under my skin. There are a lot of valid reasons to be anti-LLM today. This is not one.

Reading the story reinforced that for me: what fails in these vignettes are specs, contracts, and incentives, not some inherent "slop" property of generated code. The story isn't an indictment of generated code, it's a parable about the timeless need for human wisdom, clear communication, and rigorous oversight, no matter how the code comes to be.

I'd like to see our LLM conversations stick closer to the concrete and demonstrably true. Let's focus on what these systems do, where they fail, and how our specs and contracts are part of that story, instead of getting pulled into slogans like "slop generator" that, by being false, derail the conversation. This creates space for us to have the real conversations that matter around ethics, the environment, and training data usage.

23 Mar 2026 10:50am GMT

Alexander Bokovoy: ASN.1 for legacy apps: Synta

23 Mar 2026 8:33am GMT

21 Mar 2026


Matthew Garrett: SSH certificates and git signing

21 Mar 2026 7:38pm GMT

Kevin Fenzi: misc fedora bits third week of march 2026

Scrye into the crystal ball

Things are just flying by and it seems to be saturday again, so here's another weekly recap.

Secureboot signing

Most of my week was consumed with work on our secure boot signing infrastructure. The old setup was using smart cards in specific builders. This had a lot of disadvantages, including:

  • Space on the smart cards was pretty much full, preventing adding more certs.

  • Those machines were 'special', and if they went down or broke, things would be bad.

  • The smart cards in them are no longer made or supported, so we couldn't get more for adding more builders.

So, thanks to a bunch of work from Jeremy Cline we finally have things moved over to the new setup. This setup is:

  • Using our normal signing infrastructure (sigul, soon to be replaced by a Rust rewrite). We can easily decide in config which machines are used.

  • Using new hardware on the vault end that has more space for more certs.

  • Allows us to easily add an aarch64 path to sign there.

The signed aarch64 grub2 build is in rawhide now, but for whatever reason it's not working on my slim7x. It is, however, working in VMs, cloud providers, and on other hardware, so I suspect it might just be a problem with this laptop. It also doesn't work with my Radxa Orion O6, but again, something else could be going on there. I think it's at least good enough to get more widespread testing.

We should hopefully have a signed kernel next week, but in the meantime, if you have an arm device that supports secure boot, you can update to the latest grub2 and give it a try.

Openh264 builds

We seem to have dropped the ball on f44/f45 openh264 builds. :(

So, I looked at doing some this week. I ran into a linker issue on the i686 builds, but managed to work around that and get builds.

Now we just need to wait for cisco to publish them. I am hoping this process will go much quicker than it has in the past, since we have a better way to upload things for them now.

Time will tell.

Openshift cluster upgrades

I moved all our openshift clusters to 4.21.5 this week (from 4.20.15).

I really love how easy openshift upgrades are. Press a button and wait, usually. I did have to upgrade to the latest 4.20 first before it would let me move to 4.21, but both steps went fine.

Mass update / reboots next week

Next week we will be catching up on updates all around and rebooting things. The week after we start Fedora 44 Final freeze so we want to have things all updated before that. No special stuff this time, just updates/reboots so I expect it to go smoothly.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116268414239551452

21 Mar 2026 5:25pm GMT

Fedora Magazine: Contribute at the Fedora CoreOS 44 Test Week


The Fedora CoreOS and QA teams are gearing up for Fedora 44, and we need your help! We are organizing a Test Week running from March 23 to March 27, 2026.

This event is a nice opportunity for the community to test Fedora CoreOS (FCOS) based on Fedora 44 content before it officially reaches the testing and stable streams. By participating, you help us ensure a smooth and reliable experience for all users.

How does a Test Week work?

A Test Week is an event where anyone can help verify that the upcoming release works as expected. If you've been looking for a way to get started with Fedora contribution, this is the perfect entry point.

To participate, you simply need to:

The Wiki Page is your primary source of information for this event. Once you have completed your tests, please log your results here! Your contribution, big or small, makes a huge difference. Let's work together to make this release a great one. Happy testing!

Join the Live Sync Session

Want to chat with the team? We are hosting a live virtual session on Tuesday, March 24, from 3:00 PM to 4:30 PM UTC. Drop in to ask questions and get help with testing!

Video Meeting: meet.google.com/ufp-bwsb-zwh

21 Mar 2026 8:00am GMT

20 Mar 2026


Fedora Community Blog: Community Update – Week 12 2026


This is a report created by the CLE team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team also moves forward some initiatives inside the Fedora project.

Week: 16 - 20 March 2026

Fedora Infrastructure

This team takes care of day-to-day business regarding Fedora Infrastructure.
It's responsible for the services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team takes care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for the services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team takes care of day-to-day business regarding Fedora releases.
It's responsible for releases, the package retirement process, and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team takes care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on introducing https://forge.fedoraproject.org to Fedora
and migrating repositories from pagure.io.

EPEL

This team works on keeping EPEL running and helps package things.

UX

This team works on improving user experience: providing artwork, usability,
and general design services to the Fedora project.

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update - Week 12 2026 appeared first on Fedora Community Blog.

20 Mar 2026 12:00pm GMT

Fedora Badges: New badge: Chemnitzer Linux-Tage 2026 !

20 Mar 2026 4:21am GMT