29 Jan 2026
Fedora People
Fedora Infrastructure Status: Updates and Reboots
29 Jan 2026 10:00pm GMT
26 Jan 2026
Kushal Das: replyfast a python module for signal
26 Jan 2026 12:16pm GMT
24 Jan 2026
Kevin Fenzi: misc fedora bits for third week of jan 2026
Another week another recap here in longer form. I started to get all caught up from the holidays this week, but then got derailed later in the week sadly.
Infra tickets migrated to new forgejo forge
On Tuesday I migrated our https://pagure.io/fedora-infrastructure (pagure) repo over to https://forge.fedoraproject.org/infra/tickets/ (forgejo).
Things went mostly smoothly; the migration tool is pretty slick, and I borrowed a bunch from the checklist that the quality folks put together ( https://forge.fedoraproject.org/quality/tickets/issues/836 ). Thanks Adam and Kamil!
There are still a few outstanding things I need to do:
- We need to update our docs everywhere they mention the old URL. I am working on a pull request for that.
- I cannot seem to get the fedora-messaging hook working right. It might well be something I did wrong, but it is just not working.
- Of course, no private issues migrated. Hopefully someday (soon!) we will be able to migrate them over once there's support in forgejo.
- We could likely tweak the templates a bit more.
Once I sort out the fedora-messaging hook, I should be able to look at moving our ansible repo over, which will be nice. Forgejo's pull request reviews are much nicer, and we may be able to leverage lots of other fun features there.
Mass rebuild finished
Even though it started late (it was supposed to start last Wednesday, but didn't really get going until Friday morning), it finished over the weekend pretty easily. There was some cleanup and such, and then it was tagged in.
I updated my laptop and everything just kept working. I would like to shout out that openQA caught a mozjs bug landing (again) that would have broken gdm; it got untagged and sorted, and I never hit it here.
Scrapers redux
Wednesday night I noticed that one of our two network links in the datacenter was topping out (10Gbit). I looked a bit, but chalked it up to the mass rebuild landing and causing everyone to sync all of rawhide.
Thursday morning there were more reports of issues with the master mirrors being very slow. The network was still saturated on that link (the other 10G link was only doing about 2-3Gbit/sec).
On investigation, it turned out that scrapers were now scraping our master mirrors. This was bad: downloading every package ever built over http uses a lot of bandwidth, and it was saturating the link. These seemed to mostly be what I am calling "type 1" scrapers.
"type 1" are scrapers coming from clouds or known network blocks. These are mostly known in anubis'es list and it can just DENY them without too much trouble. These could also manually be blocked, but you would have to maintain the list(s).
"type 2" are the worse kind. Those are the browser botnets, where the connections are coming from a vast diverse set of consumer ip's and also since they are just using someone elses computer/browser they don't care too much if they have to do a proof of work challenge. These are much harder to deal with, but if they are hitting specific areas, upping the amount of challenge anubis gives those areas helps if only to slow them down.
First order of business was to set up anubis in front of the master mirrors. There's no epel9 package for anubis, so I went with the method we used for pagure (el8) and just set it up using a container. There was a bit of tweaking to get everything right, but I had it in place by mid-morning, and it definitely cut the load a great deal.
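For anyone wanting to do something similar, a containerized anubis can be described as a podman quadlet unit so systemd manages it. A minimal sketch; the image tag, bind port, and backend target here are assumptions, not our actual deployment:

```ini
# /etc/containers/systemd/anubis.container (illustrative)
[Unit]
Description=Anubis proof-of-work proxy

[Container]
Image=ghcr.io/techarohq/anubis:latest
Environment=BIND=:8923
Environment=TARGET=http://127.0.0.1:8080
Network=host

[Install]
WantedBy=multi-user.target
```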
Also, at the same time it turned out we still had prefork apache config on the download servers, which we have not used in a while. So I cleaned all that up and updated things so their apache setup could handle a lot more connections.
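With the event MPM, connection capacity mostly comes down to a handful of directives. A sketch of the kind of tuning involved (the numbers are illustrative, not our actual values), keeping MaxRequestWorkers no larger than ServerLimit times ThreadsPerChild:

```apacheconf
<IfModule mpm_event_module>
    ServerLimit              16
    ThreadsPerChild          64
    MaxRequestWorkers      1024
    MaxConnectionsPerChild    0
</IfModule>
```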
The bandwidth used was still high though, and a bit later I figured out why. The websites had been updated to point downloads of CHECKSUM files to the master mirrors, to make sure they were all coming from a known location. However, accidentally _all_ artifact download links were pointing to the master mirrors. Luckily we could handle the load, and also luckily there wasn't a release going on, so fewer people were downloading. Switching that back to point to the mirrors got things happier.
So, hopefully scrapers handled again... for now.
Infra Sprint planning meeting
So, as many folks may know, our Red Hat teams are all trying to use agile and scrum these days. We have various things going on, in case anyone is interested:
- We have daily standup notes from each team member in Matrix. They submit them with a bot, and it posts to a team room. You can find them all in the #cle-standups:fedora.im space on Matrix. This daily is just a quick 'what did you do', 'what do you plan to do', and any notes or blockers.
- We have been doing retro/planning meetings, but those have been video calls. However, there's no reason they need to be, so I suggested, and we are going to try, just meeting on Matrix with anyone interested. The first of these will be Monday in the #meeting-3:fedoraproject.org room at 15:00 UTC. We will talk about the last two weeks and plan what we want to try to get done in the next two.
The forge project boards are much nicer than the pagure boards were, and we can use them more effectively. Here's how it will work:
Right now the current sprint is in: https://forge.fedoraproject.org/infra/tickets/projects/325 and the next one is in: https://forge.fedoraproject.org/infra/tickets/projects/326
On Monday we will review the first, move everything that wasn't completed over to the second, add/tweak the second one, then close the first one, rename 'next' to 'current', and add a new 'next' one. This will allow us to track what was done in which sprint and populate things for the next one.
Additionally, we are going to label tickets that come in that are just 'day-to-day' requests and add those to the current sprint to track them. That should help us get an idea of the work we are doing that we cannot plan for.
Mass update/reboot outage
Next week we are also going to be doing a mass update/reboot cycle, with an outage on Thursday. This is pretty overdue, as we haven't done one since before the holidays.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/115951447954013009
24 Jan 2026 5:27pm GMT
23 Jan 2026
Christof Damian: Friday Links 26-03
23 Jan 2026 9:00am GMT
Remi Collet: 📝 Redis version 8.6 🎲
RPMs of Redis version 8.6 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
⚠️ Warning: this is a pre-release version not ready for production usage.
1. Installation
Packages are available in the redis:remi-8.6 module stream.
1.1. Using dnf4 on Enterprise Linux
# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.6/common
1.2. Using dnf5 on Fedora
# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset redis
# dnf module enable redis:remi-8.6
# dnf install redis --allowerasing
You may have to remove the valkey-compat-redis compatibility package.
2. Modules
Some optional modules are also available:
- RedisBloom as redis-bloom
- RedisJSON as redis-json
- RedisTimeSeries as redis-timeseries
These packages are weak dependencies of Redis, so they are installed by default (unless install_weak_deps is disabled in the dnf configuration).
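If you don't want the modules pulled in, weak dependencies can be disabled globally in dnf's configuration (or per command with --setopt=install_weak_deps=False):

```ini
# /etc/dnf/dnf.conf
[main]
install_weak_deps=False
```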
The modules are automatically loaded after installation and service (re)start.
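Under the hood, Redis loads such modules with loadmodule directives in its configuration. A sketch of what that looks like; the .so paths here are assumptions for illustration, not the actual packaged paths:

```conf
loadmodule /usr/lib64/redis/modules/redisbloom.so
loadmodule /usr/lib64/redis/modules/rejson.so
loadmodule /usr/lib64/redis/modules/redistimeseries.so
```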
The modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a similar set of modules, requiring some packaging changes that have already been applied in the official Fedora repository.
Redis may be proposed for unretirement and come back to the official Fedora repository, by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, the module packages are very far from the Packaging Guidelines, so obviously not ready for a review.
4. Statistics
(Download statistics charts for the redis, redis-bloom, redis-json, and redis-timeseries packages.)
23 Jan 2026 7:28am GMT
22 Jan 2026
Fedora Badges: New badge: CentOS Connect 2026 Attendee !
22 Jan 2026 10:40am GMT
Fedora Badges: New badge: DevConf India 2026 Attendee !
22 Jan 2026 5:58am GMT
21 Jan 2026
Evgeni Golov: Validating cloud-init configs without being root
21 Jan 2026 7:42pm GMT
Fedora Infrastructure Status: dl.fedoraproject.org slow
21 Jan 2026 12:00pm GMT
Ben Cotton: Use your labels
Most modern issue trackers offer a label mechanism (sometimes called "tags" or a similar name) that allows you or your users to set metadata on issues and pull/merge requests. It's fun to set them up and anticipate all of the cool things you'll do. But it turns out that labels you don't use are worse than useless. As I wrote a few years ago, "adding more labels adds cognitive overhead to creating and managing issues, so you don't want to add complexity when you don't have to."
A label that you don't use just complicates the experience and doesn't give you useful information. A label that you're not consistent in using will lead to unreliable analysis data. Use your labels.
Jeff Fortin Tam highlighted one benefit to using labels: after two years of regular use in GNOME, it was easy to see nearly a thousand performance improvements because of the "Performance" label. (As of this writing, the count is over 1,200.)
How to ensure you use your labels
The problem with labels is that they're either present or they're not. If your process requires affirmatively adding labels, then you can't treat the absence of a label as significant. The label might be absent because it doesn't apply, or it might be absent because nobody remembered to apply it. By the same token, you don't want to apply all the labels up front and then remove the ones that don't apply. That's a lot of extra effort.
There are two parts to having consistent label usage. The first is having a simple and well-documented label setup. Only have the labels you need. A label that only applies to a small number of issues is probably not necessary. Clearly document what each label is for and under what conditions it should be applied.
The other part of consistent label usage is to automatically apply a "needs triage" label. Many ticket systems support doing this in a template or with an automated action. When someone triages an incoming issue, they can apply the appropriate labels and then remove the "needs triage" label. Any issue that still includes a "needs triage" label should be excluded from any analysis, since you can reasonably infer that it hasn't been appropriately labeled.
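As one example of how the automatic label can work, issue templates on GitHub (and on forges with a compatible format, such as Forgejo) can apply labels at creation time. A minimal sketch assuming the issue-form syntax; the file path and field names are illustrative:

```yaml
# .github/ISSUE_TEMPLATE/bug.yml
name: Bug report
description: Report a problem
labels: ["needs triage"]
body:
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
    validations:
      required: true
```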
You'll still miss a few here and there, but that will help you use your labels, and that makes the labels valuable.
This post's featured photo by Angèle Kamp on Unsplash.
The post Use your labels appeared first on Duck Alignment Academy.
21 Jan 2026 12:00pm GMT
20 Jan 2026
Peter Czanik: Call for testing: syslog-ng 4.11 is coming
20 Jan 2026 12:44pm GMT
Vedran Miletić: What is the price of open-source fear, uncertainty, and doubt?
20 Jan 2026 12:34pm GMT
Vedran Miletić: The academic and the free software community ideals
20 Jan 2026 12:34pm GMT
Vedran Miletić: I am still not buying the new-open-source-friendly-Microsoft narrative
20 Jan 2026 12:34pm GMT
Vedran Miletić: Free to know: Open access and open source
20 Jan 2026 12:34pm GMT
Vedran Miletić: AMD and the open-source community are writing history
20 Jan 2026 12:34pm GMT