05 Jul 2025
Fedora People
Akashdeep Dhar: Arriving At Flock To Fedora 2025
On 03rd June 2025, it was that time of the year again when we departed for the annual Fedora Project community conference, Flock To Fedora 2025. Waking up at 05:00AM Indian Standard Time, I was soon on my way to the Netaji Subhash Chandra Bose International Airport (CCU) after some quick breakfast bites from my aunt. It took roughly twenty minutes for me to make it to the airport with my luggage, where I met up with Sumantro Mukherjee, who was waiting for me at around 06:30AM Indian Standard Time. We got ourselves checked in at the Emirates counter for the flight EK0571 to Dubai International Airport (DXB) and headed towards the lightly populated immigration queue. Weirdly enough, I was held back for some superficial questioning at the security queue, but it was nothing to worry about as we were quite ahead of the schedule for the flight.

We were at the departure gates by 07:30AM Indian Standard Time, which was a little under three hours from the estimated departure time for the Emirates EK0571 flight. After connecting with my friends and family about having made it to the departure gates, Sumantro and I had conversations about a bunch of things. Starting from community affairs and agile implementation, it was fun catching up with Sumantro after FOSDEM 2025. The time until the boarding passed quite quickly, and we soon made it inside the flight, capturing the painstakingly booked window seats of Zone E, with me seated at 29K and Sumantro seated at 30K. The first flight from Kolkata (CCU) to Dubai (DXB) was expected to be around five hours long, so I decided to spend time watching a couple of movies, namely Unthinkable (2010) and Despicable Me 4 (2024) from the seat display.

At around 11:45AM Indian Standard Time, the breakfast was served, and I decided to catch some sleep after I was done with the food. The seemingly short sleep did help reduce the apparent duration of the flight because at around 04:00PM Gulf Standard Time, the flight began its descent into the Dubai International Airport (DXB). As our next flight, Emirates EK0125 towards Airport Flughafen Wien-Schwechat (VIE), was departing from the same terminal, we had some time on our hands to traverse through the gates. Sumantro and I had to catch a subway that would take us from the A Gates to the B Gates after we passed through a crowded security queue. In about thirty minutes, we found ourselves at the departure Gate B20, from where our flight was designated to depart. We went around browsing the Duty Free stores as we waited for the boarding announcement to be made.

Of course, we kept ourselves from purchasing goodies from the stores, and the boarding announcement that was made some time later helped fortify our resolve. As Sumantro was planning on traveling ahead with his partner from DevConf.CZ 2025 and I was planning on doing loads of shopping in Prague, it made little sense to get encumbered there. Just like the previous flight, we were seated at the window seats, with me on 39K and Sumantro on 40K on this flight. An odd event ended up changing that arrangement as the passengers traveling beside Sumantro wanted us to switch seats for the window seats that we had coordinated and selected well in advance from the web check-in. In the end, I was seated beside Sumantro on an aisle seat, 40I, while Sumantro retained his seat, 40K, as the two belligerent passengers went somewhere else after some arrangements were made.

While not dwelling much on this uncomfortable encounter with some entitled passengers, I decided to watch Fast X (2023) from the in-flight entertainment system as the flight took off. Sumantro decided to catch up on some assignments and finish preparing his slide decks, as this was supposed to be a longer flight of around six hours. We were looking into the renewed Test Days application, which was recently deployed in the Fedora Infrastructure, and we ended up finding various bugs and oversights with the production deployment. I decided to watch Inglourious Basterds (2009) after having lunch, but I found myself dozing off every now and then. I decided to use the time to catch up on some sleep, as both Sumantro and I had to travel on an overnight bus from Airport Flughafen Wien-Schwechat (VIE) to Prague Central Station and finish the movie at a later time.

The next time I opened my eyes - I was gifted with the wonderful vistas of the country skyline as the flight was slowly descending into Airport Flughafen Wien-Schwechat (VIE). It was around 08:30PM Central European Summer Time then, and the sun was barely setting in summertime Vienna. We got off the flight at around 09:00PM Central European Summer Time and made our way into the crowded immigration queue before picking up our checked-in luggage from the belts. I wished that our onward journey had ended there, as the time zone shift and the lack of sleep were taking a toll on my body. The one silver lining that kept us going was that the bus taking us from Airport Flughafen Wien-Schwechat (VIE) to Prague Central Station departed from the arrival gates at around 11:00PM Central European Summer Time, so we did not have to rush anywhere anymore.

After sharing some conversations with friends and family back at home and from the Fedora Project, we kicked around at the arrival gates. The waiting was most certainly easier said than done, but I would much rather be in a situation where I was ahead of the schedule than one where I was running behind. Sumantro and I discussed just how long we would have been active by the time we ended up getting to the hotel, and the calculation did help keep us from getting bored. At around 10:50PM Central European Summer Time, a bus #N60 operated by Flixbus arrived at Station #04 for the pickup. After getting our passports checked before boarding the bus, we decided to keep ourselves to the bottom deck of the double-decker bus. It would have been fun visiting the top deck, but at the twenty-fourth hour of being active, all we wanted was to get some sleep at the hotel.

We had a couple of stops before making it to Prague Central Station, so we were seated near the gates for convenience. While the evening started off with some pretty mild temperatures and normal humidity, the temperature started falling and the humidity started rising as the night grew darker. I knew for a fact that I would doze off as soon as I found a soft seat to place myself on, so I decided to schedule some alarms for 03:30AM Central European Summer Time, which was still over three hours away from then. It was just as important to schedule multiple alarms in the rare occurrence of one not being enough, and the last thing that we wanted to do then was end up in Berlin, where the bus was actually headed. We soon found ourselves at our stop after a combination of looking into the darkness from the window and failing miserably to catch some well deserved slumber.

Prague Central Station welcomed us with 14 degrees Celsius and 75% humidity on the early morning of 04th June 2025. Sumantro booked an Uber for us, and after an uneventful yet swift fifteen minutes, we found ourselves at the entrance of the Ibis Praha Mala Strana hotel. Thankfully, we had the reservation done from the day before, so we could easily find ourselves a bed to rest on and not wait until the scheduled check-in time of 03:00PM Central European Summer Time. Sumantro had some issues with the inclusion of breakfast in his booking, but we decided that it was for the best that he took it up with Julia Bley the next day. Thanks to the Red Hat Corporate Card that we were provided with weeks before the commencement of our journey, Sumantro and I were able to retire to rooms #239 and #225 at around 04:00AM Central European Summer Time, ending the onward journey.
05 Jul 2025 6:30pm GMT
Ankur Sinha: Splitting Taskwarrior tasks to sub-tasks
A feature that I often miss in Taskwarrior (which I use for managing my tasks in a Getting Things Done method) is the ability to split tasks into sub-tasks.
A common use case, for example, is when I add a research paper that I want to read to my task list. It's usually added as "Read <title of research paper>", with the URL or the file path as an annotation. However, when I do get down to read it, I want to break it down into smaller, manageable tasks that I can do over a few days such as "Read introduction", "Read results". This applies for lots of other tasks too, which turn into projects with sub-tasks when I finally do get down to working on them.
The way to do it is to create new tasks for each of these, and then replace the original task with them. It is also a workflow that can be easily scripted so that one doesn't have to manually create the tasks and copy over annotations and so on.
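The key building block for such scripting is that "task <id> export" emits the task as JSON, which can be parsed to recover the metadata that needs copying over. A toy sketch of the idea, using a hard-coded sample of export output (the task data, IDs, and URL here are made up for illustration):

```python
import json

# Sample output of `task 800 export`: a JSON array with one task
task_stdout = """[{"id": 800, "description": "Put up shelves",
  "project": "personal", "priority": "M", "uuid": "1a2b3c",
  "annotations": [{"entry": "20250601T120000Z",
                   "description": "https://example.org/shelf-guide"}]}]"""

task = json.loads(task_stdout)[0]

# Build the `task add` commands that would create the sub-tasks,
# carrying over metadata (here, just the priority) from the original
sub_tasks = ["Buy shelves", "Buy drill"]
commands = [
    f"task add project:personal.shelves priority:{task['priority']} '{st}'"
    for st in sub_tasks
]
print(commands[0])
# -> task add project:personal.shelves priority:M 'Buy shelves'
```

The real script below does the same thing against a live Taskwarrior database, and also copies annotations and marks the original task as done.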
Here is a script I wrote:
#!/usr/bin/env python3

"""
Split a taskwarrior task into sub-tasks

File: task-split.py

Copyright 2025 Ankur Sinha
Author: Ankur Sinha <sanjay DOT ankur AT gmail DOT com>
"""

import json
import logging
import subprocess
import typing

import typer

logging.basicConfig(level=logging.NOTSET)
logger = logging.getLogger("task-split")
logger.setLevel(logging.INFO)
logger.propagate = False
formatter = logging.Formatter("%(name)s (%(levelname)s): %(message)s")
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)
logger.addHandler(handler)


def split(src_task: int, new_project: str, sub_tasks: typing.List[str],
          dry_run: bool = True) -> None:
    """Split task into new sub-tasks

    For each provided sub_tasks string, a new task is created using the
    string as description in the provided new_project. Annotations from the
    provided src_task are copied over and the src_task is marked as done.

    If dry_run is enabled (default), the src_task will be obtained but not
    processed.

    :param src_task: id of task to split
    :param new_project: project to file the new sub-tasks under
    :param sub_tasks: list of sub-tasks to create
    :param dry_run: if enabled, only log the commands without running them
    :returns: None
    """
    # Always get info on the task
    get_task_command = f"task {src_task} export"
    logger.info(get_task_command)
    ret = subprocess.run(get_task_command.split(), stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    if ret.returncode != 0:
        return

    task_stdout = ret.stdout.decode(encoding="utf-8")
    task_json = json.loads(task_stdout)[0]
    logger.info(task_json)

    tags = task_json.get('tags', [])
    priority = task_json.get('priority')
    due = task_json.get('due')
    estimate = task_json.get('estimate')
    impact = task_json.get('impact')
    annotations = task_json.get('annotations', [])
    description = task_json.get('description')
    uuid = task_json.get('uuid')

    for sub_task in sub_tasks:
        new_task_command = (
            f"task add project:{new_project} tags:{','.join(tags)} "
            f"priority:{priority} due:{due} impact:{impact} "
            f"estimate:{estimate} '{sub_task}'"
        )
        logger.info(new_task_command)
        if not dry_run:
            ret = subprocess.run(new_task_command.split())

        # only annotate if the new task was created successfully
        if dry_run or ret.returncode == 0:
            annotate_task_command = f"task +LATEST annotate '{description}'"
            logger.info(annotate_task_command)
            if not dry_run:
                ret = subprocess.run(annotate_task_command.split())

            for annotation in annotations:
                annotation_description = annotation['description']
                annotate_task_command = (
                    f"task +LATEST annotate '{annotation_description}'"
                )
                logger.info(annotate_task_command)
                if not dry_run:
                    ret = subprocess.run(annotate_task_command.split())

    mark_original_as_done_command = f"task uuid:{uuid} done"
    logger.info(mark_original_as_done_command)
    if not dry_run:
        ret = subprocess.run(mark_original_as_done_command.split())


if __name__ == "__main__":
    typer.run(split)
It uses typer to provide command line features:
task-split --help

Usage: task-split [OPTIONS] SRC_TASK NEW_PROJECT SUB_TASKS...

Split task into new sub-tasks

Arguments
 *  src_task     INTEGER       [default: None]
 *  new_project  TEXT          [default: None]
 *  sub_tasks    SUB_TASKS...  [default: None]

Options
 --dry-run / --no-dry-run  [default: dry-run]
 --help                    Show this message and exit.
So, if one has a task "Put up shelves" with ID 800, it can now be broken into a number of smaller tasks:
task-split 800 "personal.shelves" "Buy shelves" "Buy drill" "Buy tools"
This will add the new tasks to the "personal.shelves" topic, and copy over meta-data from the original task, such as annotations, priority, due date and other user-defined attributes. It runs in "dry-run" mode by default to give me a chance to double-check the commands/tasks. To carry out the operations, pass the --no-dry-run flag to the script.
The script is heavily based on my personal workflow, but can easily be tweaked. It lives here on GitHub and you are welcome to modify it to suit your own workflow.
Please remember to make it executable and put it in your PATH to be able to run the command on your terminal, and do remember to install typer. On Fedora, this would be sudo dnf install python3-typer.
05 Jul 2025 12:11pm GMT
04 Jul 2025
Fedora People
Fedora Infrastructure Status: Datacenter Move Complete
04 Jul 2025 6:00pm GMT
Hans de Goede: Recovering a FP2 which gives "flash write failure" errors
04 Jul 2025 4:14pm GMT
Fedora Community Blog: Infra and RelEng Update - Week 27, 2025
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 30 June - 04 July 2025

Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It's responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
Fedora Infra
- In progress:
- no Matrix notifications from FMN since June 15
- [FMTS] TLS certificate for gitlab-centos service is about to expire in 30 days
- Decommission provisioning.fp.org
- Planned Outage - Datacenter Move outage - 2025-06-30 01:00 UTC
- Please register my testing instance of Fedora Infrastructure apps to OIDC
- Forgejo: Owner access to @jflory7 for @CommOps
- Move copr_hypervisor group from iptables to nftables
- tmpwatch removed from ansible
- Link in staging distgit instance leads to prod auth
- re-add datagrepper nagios checks (and add to zabbix?)
- Fix logrotate on kojipkgs01/02
- 2025 datacenter move (IAD2->RDU3)
- Broken link for STG IPA CA certificate, needed for staging CentOS Koji cert
- [CommOps] Open Data Hub on Communishift
- retire easyfix
- Deploy Element Server Suite operator in staging
- Pagure returns error 500 trying to open a PR on https://src.fedoraproject.org/rpms/python-setuptools-gettext
- maubot-meetings bot multi line paste is cut
- Move OpenShift apps from deploymentconfig to deployment
- The process to update the OpenH264 repos is broken
- httpd 2.4.61 causing issue in fedora infrastructure
- Support allocation dedicated hosts for Testing Farm
- EPEL minor version archive repos in MirrorManager
- Add fedora-l10n pagure group as an admin to the fedora-l10n-docs namespace projects
- vmhost-x86-copr01.rdu-cc.fedoraproject.org DOWN
- Add yselkowitz to list to notify when ELN builds fail
- Cleaning script for communishift
- Move from iptables to firewalld
- Port apps to OIDC
- Help me move my discourse bots to production?
- Migration of registry.fedoraproject.org to quay.io
- Done:
- Some links from src.fedoraproject.org are broken for packages with plus (+) in their names
- release-monitoring.org isn't filing bugs
- please don't remove enrolled centos machines from IPA in staging
- RFR: Requesting a new FAS Group: Phosh SIG
- Need to delete user account in FAS that has been already deleted in Discourse (discussion.fedoraproject.org) due to violations (spam, AI, etc.)
CentOS Infra including CentOS CI
- In progress:
- Done:
- Request for New Mailing List: genos - at - centos.org
- c7 builds failing for cksum mismatch of firewalld-filesystem-0.6.3-13.el7_9.noarch - centos7-updates
- Add siosm (myself) as a sponsor to sig-cloud group
- Stream 9 new mirror : mirror.clarkson.edu
- Verify/Change target of press email address
- Create a repo and FAS group for FRCL
- CBS koji permissions
- Update openinfra server to sync centos mirror
- resources for OpenQA test execution
- migrate id{.stg}.centos.org to new DC
- Hardware init new RDU3 servers (part 1 / wave 0)
- Prepare AWS new VPC for isolated builders
- Mailing lists broken with DMARC
Release Engineering
- In progress:
- Stalled package epel10 gtksourceview3
- F43 system-wide change: GNU Toolchain update for F43 https://fedoraproject.org/wiki/Changes/GNUToolchainF43
- evaluate proposed F43 change for preserving debuginfo in static .a libraries
- `-no-git-branch` option to fedpkg request-branch and creating the branch manually doesn't work as expected
- EPEL 8 x86_64 won't sync with Red Hat Satellite
- Broken fork
- msedit: Delete commit
- F39 Archives are still not cleaned up
- Fedora-KDE-42-1.1-x86_64-CHECKSUM has wrong ISO filename
- F40 end of life
- Turn EPEL minor branching scripts into playbooks
- Mass retirement of packages with uninitialized rawhide branch
- 300+ F42FTBFS bugzillas block the F41FTBFS tracker
- Packages that have not been rebuilt in a while or ever
- Send compose reports to a to-be-created separate ML
- Could we have fedoraproject-updates-archive.fedoraproject.org for Rawhide?
- Investigate and untag packages that failed gating but were merged in via mass rebuild
- a few mass rebuild bumps failed to git push - script should retry or error
- Package retirements are broken in rawhide
- Update pungi filters
- Implement checks on package retirements
- Untag containers-common-0.57.1-6.fc40
- Provide stable names for images
- Packages that fail to build SRPM are not reported during the mass rebuild bugzillas
- When orphaning packages, keep the original owner as co-maintainer
- Create an ansible playbook to do the mass-branching
- RFE: Integration of Anitya to Packager Workflow
- Fix tokens for ftbfs_weekly_reminder. script
- Update bootloader components assignee to "Bootloader Engineering Team" for improved collaboration
- Done:
- Cannot build rust-debug-helper for epel9
- Side tag for Perl 5.42
- F43 System-wide change: Perl 5.42
- New package not in tag f43-updates-candidate
- "initial_commit": false not respected in releng/fedora-scm-requests
- Fix incorrectly created epel10 branch for lcov package
- Untag graphviz-13.0.1-2.fc43 and graphviz-13.0.1-2.eln150 (and the 13.0.1-1 builds too)
- Removing oneself from a package does not reset bugzilla assignee
- Unretirement request: elementary-photos
- F42 Atomic Desktops `-testing` builds failing on pungi-make-ostree error
- Recent WSL image builds fail with "Unsupported file type: Fedora-WSL-Base-Rawhide-20250530.n.0.aarch64.wsl"
- Request for permissions to make changes on torrent server
- Fix OpenH264 tagging issues
Fedora Data Center Move: "It's Move Time!" and Successful Progress!
This week was "move time" for the Fedora Data Center migration from IAD2 to RDU3, and thanks to the collective effort of the entire team, it's been a significant success! We officially closed off the IAD2 datacenter, with core applications, databases, and the build pipeline successfully migrated to RDU3. This involved meticulously scaling down IAD2 OpenShift apps, migrating critical databases, and updating DNS, followed by the deployment and activation of numerous OpenShift applications in RDU3. While challenges arose, especially with networking and various service configurations, our dedicated team worked tirelessly to address them, ensuring most services are now operational in the new environment. We'll continue validating and refining everything, but we're thrilled with the progress made in establishing Fedora's new home!
If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.
The post Infra and RelEng Update - Week 27, 2025 appeared first on Fedora Community Blog.
04 Jul 2025 11:55am GMT
Fedora Magazine: 🧱 Building better initramfs: A deep dive into dracut on Fedora & RHEL
Understanding how to use dracut is critical for kernel upgrades, troubleshooting boot issues, disk migration, encryption, and even kernel debugging.
Introduction: What is dracut?
dracut is a powerful tool used in Fedora, RHEL, and other distributions to create and manage initramfs images, the initial RAM filesystem used during system boot. Unlike older tools like mkinitrd, dracut uses a modular approach, allowing you to build minimal or specialized initramfs tailored to your system.
Installing dracut (if not already available)
dracut comes pre-installed in Fedora and RHEL. If it is missing, install it with:
$ sudo dnf install dracut
Verify the version:
$ dracut --version
Basic usage
Regenerate the current initramfs
$ sudo dracut --force
This regenerates the initramfs for the currently running kernel.
Generate initramfs for a specific kernel
$ sudo dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
Or manually, with the kernel version spelled out:
$ sudo dracut --force /boot/initramfs-5.14.0-327.el9.x86_64.img 5.14.0-327.el9.x86_64
Understanding key dracut options (with examples)
--force
Force regeneration even if the file already exists:
$ sudo dracut --force
--kver <kernel-version>
Generate initramfs for a specific kernel:
$ sudo dracut --force --kver 5.14.0-327.el9.x86_64
--add <module> / --omit <module>
Include or exclude specific modules (e.g., lvm, crypt, network).
Include LVM module only:
$ sudo dracut --force --add lvm
Omit network module:
$ sudo dracut --force --omit network
--no-hostonly
Build a generic initramfs that boots on any compatible machine:
$ sudo dracut --force --no-hostonly
--hostonly
Create a host-specific image for minimal size:
$ sudo dracut --force --hostonly
--print-cmdline
Show the kernel command line:
$ dracut --print-cmdline
--list-modules
List all available dracut modules:
$ dracut --list-modules
--add-drivers "driver1 driver2"
Include specific drivers:
$ sudo dracut --add-drivers "nvme ahci" --force
Test cases and real-world scenarios
1. LVM root disk fails to boot after migration
$ sudo dracut --force --add lvm --hostonly
2. Initramfs too large
Shrink it by omitting unused modules:
$ sudo dracut --force --omit network --omit plymouth
3. Generic initramfs for provisioning
$ sudo dracut --force --no-hostonly --add network --add nfs
4. Rebuild initramfs for rollback kernel
$ sudo dracut --force /boot/initramfs-5.14.0-362.el9.x86_64.img 5.14.0-362.el9.x86_64
Advanced use: Debugging and analysis
Enable verbose output:
$ sudo dracut -v --force
Enter the dracut shell if boot fails:
Use rd.break in the GRUB kernel line.
Where is dracut configuration stored?
There are two locations where configuration settings can be placed.
The global settings location is at:
/etc/dracut.conf
and the drop-in location is at:
/etc/dracut.conf.d/*.conf
Example using the drop-in location:
$ cat /etc/dracut.conf.d/custom.conf
The contents might appear as follows for omitting and adding modules:
omit_dracutmodules+=" plymouth network "
add_dracutmodules+=" crypt lvm "
Note: Always include a space at the beginning and end of the value when using += in these configuration files. These files are sourced as Bash scripts, so add_dracutmodules+=" crypt lvm " ensures proper spacing when multiple config files are concatenated. Without the spaces, the resulting string could concatenate improperly (e.g., mod2mod3) and cause module loading failures.
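The concatenation behaviour is easy to demonstrate in plain Bash, independently of dracut (the .conf file names in the comments are hypothetical):

```shell
# Simulate two drop-in files appending to the same variable,
# as dracut does when it sources /etc/dracut.conf.d/*.conf
add_dracutmodules=""
add_dracutmodules+=" crypt lvm "   # from a hypothetical 01-storage.conf
add_dracutmodules+=" network "     # from a hypothetical 02-net.conf
echo "[$add_dracutmodules]"        # the module names stay separated

# Without the surrounding spaces, the values run together
bad=""
bad+="crypt"
bad+="lvm"
echo "[$bad]"                      # prints [cryptlvm]: one bogus module name
```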
Deep dive: /usr/lib/dracut/modules.d/ - the heart of dracut
The directory /usr/lib/dracut/modules.d includes all module definitions. Each contains:
- A module-setup.sh script
- Supporting scripts and binaries
- Udev rules, hooks, and configs
List the modules using the following command:
$ ls /usr/lib/dracut/modules.d/
Example output:
01fips/ 30crypt/ 45ifcfg/ 90lvm/ 95resume/ 02systemd/ 40network/ 50drm/ 91crypt-gpg/ 98selinux/
Inspect specific module content (module-setup.sh, in this example) using this:
$ cat /usr/lib/dracut/modules.d/90lvm/module-setup.sh
You can also create custom modules at this location for specialized logic.
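As a sketch of what such a custom module might look like, here is a minimal module-setup.sh skeleton. The module name 99hello and the installed file are hypothetical; check, depends, and install are the hook functions dracut looks for, and inst_simple is a helper dracut provides to module scripts:

```shell
# Hypothetical /usr/lib/dracut/modules.d/99hello/module-setup.sh

check() {
    # Return 0 to always include the module,
    # 255 to include it only when explicitly requested with --add
    return 255
}

depends() {
    # Names of other dracut modules this one needs
    echo "base"
}

install() {
    # Copy a (hypothetical) helper into the initramfs image;
    # inst_simple is available inside dracut's build environment
    inst_simple /usr/bin/hello-banner /usr/bin/hello-banner
}
```

With the file in place, the module would be built in with "dracut --force --add hello".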
Final thoughts
dracut is more than a utility; it's your boot-time engineer. From creating lightweight images to resolving boot failures, it offers unparalleled flexibility.
Explore man dracut, read through /usr/lib/dracut/modules.d/, and start customizing.
This article is dedicated to my wife, Rupali Suraj Patil, for her continuous support and encouragement.
04 Jul 2025 8:00am GMT
Remi Collet: PHP 8.5 as Software Collection
Version 8.5.0alpha1 has been released. It's still in development and will soon enter the stabilization phase for the developers, and the test phase for the users (see the schedule).
RPMs of this upcoming version of PHP 8.5 are available in the remi repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) in a fresh new Software Collection (php85) allowing its installation beside the system version.
As I (still) strongly believe in SCL's potential to provide a simple way to install various versions simultaneously, and as I think it is useful to offer this feature to developers testing their applications, to sysadmins preparing a migration, or simply for running a specific application, I decided to create this new SCL.
I also plan to propose this new version as a Fedora 44 change (as F43 should be released a few weeks before PHP 8.5.0).
Installation :
yum install php85
⚠️ To be noted:
- the SCL is independent from the system and doesn't alter it
- this SCL is available in remi-safe repository (or remi for Fedora)
- installation is under the /opt/remi/php85 tree, configuration under the /etc/opt/remi/php85 tree
- the FPM service (php85-php-fpm) is available, listening on /var/opt/remi/php85/run/php-fpm/www.sock
- the php85 command gives simple access to this new version, however, the module or scl command is still the recommended way.
- for now, the collection provides 8.5.0-alpha1, and alpha/beta/RC versions will be released in the next weeks
- some of the PECL extensions are already available, see the extensions status page
- tracking issue #307 can be used to follow the work in progress on RPMS of PHP and extensions
- the php85-syspaths package allows you to use it as the system's default version
ℹ️ Also, read other entries about SCL, especially the description of My PHP workstation.
$ module load php85
$ php --version
PHP 8.5.0alpha1 (cli) (built: Jul 1 2025 21:58:05) (NTS gcc x86_64)
Copyright (c) The PHP Group
Built by Remi's RPM repository #StandWithUkraine
Zend Engine v4.5.0-dev, Copyright (c) Zend Technologies
    with Zend OPcache v8.5.0alpha1, Copyright (c), by Zend Technologies
As always, your feedback is welcome on the tracking ticket.
Software Collections (php85)
04 Jul 2025 6:14am GMT
Remi Collet: 🛡️ PHP version 8.1.33, 8.2.29, 8.3.23 and 8.4.10
RPMs of PHP version 8.4.10 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.3.23 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.2.29 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.1.33 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ The packages are available for x86_64 and aarch64.
⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.
These versions are also available as Software Collections in the remi-safe repository.
🛡️ These versions fix 3 security bugs (CVE-2025-1220, CVE-2025-1735, CVE-2025-6491), so the update is strongly recommended.
Version announcements:
- PHP 8.4.10 Release Announcement
- PHP 8.3.23 Release Announcement
- PHP 8.2.29 Release Announcement
- PHP 8.1.33 Release Announcement
ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.4 installation (simplest):
dnf module switch-to php:remi-8.4/common
Parallel installation of version 8.4 as Software Collection
yum install php84
Replacement of default PHP by version 8.3 installation (simplest):
dnf module switch-to php:remi-8.3/common
Parallel installation of version 8.3 as Software Collection
yum install php83
And soon in the official updates:
- Fedora Rawhide now has PHP version 8.4.10
- Fedora 42 - PHP 8.4.10
- Fedora 41 - PHP 8.3.23
⚠️ To be noted:
- EL-10 RPMs are built using RHEL-10.0
- EL-9 RPMs are built using RHEL-9.6
- EL-8 RPMs are built using RHEL-8.10
- intl extension now uses libicu74 (version 74.2)
- mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
- oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
- a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84)
04 Jul 2025 4:49am GMT
03 Jul 2025
Fedora People
Akashdeep Dhar: Loadouts For Genshin Impact v0.1.9 Released
Hello travelers!
Loadouts for Genshin Impact v0.1.9 is OUT NOW with the addition of support for recently released characters like Skirk and Dahlia and for recently released weapons like Azurelight from Genshin Impact v5.7 Phase 1. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Resources
- Loadouts for Genshin Impact - GitHub
- Loadouts for Genshin Impact - PyPI
- Loadouts for Genshin Impact v0.1.9
Changelog
- Automated dependency updates for GI Loadouts by @renovate in #342
- Automated dependency updates for GI Loadouts by @renovate in #344
- Automated dependency updates for GI Loadouts by @renovate in #345
- Automated dependency updates for GI Loadouts by @renovate in #346
- Automated dependency updates for GI Loadouts by @renovate in #347
- Add the recently added character Dahlia to the GI Loadouts roster by @sdglitched in #348
- Add the recently added character Skirk to the GI Loadouts roster by @sdglitched in #349
- Add the recently added weapon Azurelight to the GI Loadouts roster by @sdglitched in #351
- Stage the release v0.1.9 for Genshin Impact v5.7 Phase 1 by @sdglitched in #352
- Update dependency ruff to ^0.2.0 || ^0.3.0 || ^0.6.0 || ^0.7.0 || ^0.11.0 || ^0.12.0 by @renovate in #353
- Automated dependency updates for GI Loadouts by @renovate in #354
- Automated dependency updates for GI Loadouts by @renovate in #355
- Update dependency pillow to v11.3.0 [SECURITY] by @renovate in #356
Characters
Skirk
Skirk is a sword-wielding Cryo character of five-star quality.


Dahlia
Dahlia is a catalyst-wielding Hydro character of four-star quality.


Weapons
Azurelight

Appeal
While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
Disclaimer
With an extensive suite of over 1428 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
03 Jul 2025 6:30pm GMT
Peter Czanik: openSUSE turned 20
03 Jul 2025 12:34pm GMT
02 Jul 2025
Fedora People
Emmanuel Seyman: A COPR for Ansible roles
Packaging Ansible roles
Since Fedora's Server SIG has decided to promote using Ansible, I've decided to package a number of roles I find interesting. Packaging solves two problems in my opinion:
- This allows users to get roles and playbooks without having to learn how to get them from Ansible Galaxy
- It allows us to patch the roles to work properly on Fedora systems
I've started submitting rpms to Fedora but I thought having a copr in the meantime that includes all my ansible rpms would make it easier for people to install and test them.
Activating the COPR on a Fedora system:
You can run the command "dnf copr enable eseyman/ansible" on an F42 or rawhide system. From there, you'll be able to "dnf search" or "dnf install" any of the packages in the copr. On that system, you'll be able to run a playbook that uses the role on any host you can ssh to.
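Put together, those commands look like this as a shell session (the final package name is a placeholder — run the search step to see what the copr actually provides):

```shell
# Enable the copr on a Fedora 42 or rawhide system
sudo dnf copr enable eseyman/ansible

# Browse the Ansible role packages the copr provides
dnf search ansible

# Install one of them (the name below is a placeholder)
sudo dnf install <ansible-role-package>
```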
02 Jul 2025 4:53pm GMT
Ben Cotton: Using AI moderation tools
Ben Balter recently announced a new tool he created: AI Community Moderator. This project, written by an AI coding assistant at Balter's direction, takes moderation action in GitHub repositories. Using any AI model supported by GitHub, it automatically enforces a project's code of conduct and contribution guidelines. Should you use it for your project?
For the sake of this post, I'm assuming that you're open to using large language model tools in certain contexts. If you're not, then there's nothing to discuss.
Why not to use AI moderation tools
Moderating community interactions is a key part of leading an open source project. Good moderation creates a safe and welcoming community where people can do their best work. Bad moderation drives people away - either because toxic members are allowed to run roughshod over others or because good-faith interactions are given heavy-handed punishment. Moderation is one of the most important factors in creating a sustainable community - people have to want to be there.
Moderation is hard - and often thankless - work. It requires emotional energy in addition to time. I understand the appeal of offloading that work to AI. AI models don't get emotionally invested. They can't feel burnout. They're available around the clock.
But they also don't understand a community's culture. They can't build relationships with contributors. They're not human. Communities are ultimately a human endeavor. Don't take the humanity out of maintaining your community.
Why you might use AI moderation tools
Having said the above, there are cases where AI moderation tools can help. In a multilingual community, moderators may not have fluency in all of the languages people use. Anyone who has used AI translations knows they can sometimes be hilariously wrong, but they're (usually) better than nothing.
AI tools are also ever-vigilant. They don't need sleep or vacations and they don't get pulled away by their day job, family obligations, or other hobbies. This is particularly valuable when a community spans many time zones and the moderation team does not.
Making a decision for your project
"AI" is a broad term, so you shouldn't write off everything that has that label. Machine learning algorithms can be very helpful in detecting spam and other forms of antisocial behavior. The people who I've heard express moral or ethical objections to large language models seem to generally be okay with machine learning models in appropriate contexts.
Using spam filters and other abuse detection tools to support human moderators is a good thing. It's reasonable to allow them to take basic reversible actions, like hiding a post until a human has had the chance to review it. However, I don't recommend using AI models to take more permanent actions or to interact with people who have potentially violated your project's code of conduct. It's hard, but you need to keep the humanity in your community.
This post's featured photo by Mohamed Nohassi on Unsplash.
The post Using AI moderation tools appeared first on Duck Alignment Academy.
02 Jul 2025 12:00pm GMT
Akashdeep Dhar: Pagure Exporter v0.1.4 Released
The first and second quarters of 2025 were the time when a bunch of free and open source software communities seemed to be actively moving away from Pagure to either GitLab (in the case of the CentOS Project and the OpenSUSE Project) or Forgejo (in the case of the Fedora Project). Having written Pagure Exporter about a couple of years back and being deeply involved in the Fedora To Forgejo initiative, I found myself in the middle of all the Git forge migration craziness. With a bunch of bug reports and feature requests reaching the doors of the project, I wanted to make the best use of my time to deliver the first release of 2025 for Pagure Exporter using the effective workflows and community personnel at my disposal. I will cover my experiences with the efforts in making this release possible in this article.

Impressions
Contributing to a hustling and bustling free and open source software community like those of the Fedora Project and the CentOS Project means that there are always some tasks that need to be completed soon. Thankfully, there are also a bunch of passionate contributors willing to roll up their sleeves and hit the ground running as long as they are aware of them. While I was sometimes affected by the unreliability of certain software libraries and the intermittent AI scraper attack on Pagure, I was also joined by the likes of Greg Sutcliffe, Fabian Arrotin, Yashwanth Rathakrishnan, Shounak Dey and Peter Olamide in the efforts. Furthermore, I made it a point to use assistive artificial intelligence technologies for purposes like explaining extended logs and generating code inspirations to kick things off from, at my discretion.

Apes (Are) Strong Together
The request for working on extending Pagure Exporter to support various other hostnames (like those of Fedora Dist Git and CentOS Git Server) was first scoped around January 2025. With me occupied with the Fedora To Forgejo migration efforts, it was not until March 2025 that work on it was started by an Outreachy applicant, Rajesh Patel. As the request increased in priority by April 2025, I decided to briefly context switch from my existing work to implement the support for different Pagure hostnames. While this was reviewed positively by Michal Konecny and Aurelien Bompard, the readability of the introduced codebase itself was in question, so that had to be resolved separately and by someone else, to ensure that I did not end up introducing code changes that only I could understand.

Leading up to the v0.1.4 release of Pagure Exporter, I was helped by Greg who himself explored the GitLab API to build a simple Python script that automatically created projects on GitLab under a certain namespace. Pagure Exporter was expected to work in tandem with the said script to migrate repository contents and issue tickets from Pagure as soon as the projects are created on GitLab. We also discussed the possibility of offloading the migration to the GitLab infrastructure to minimize potential network hiccups during the transfer process. Davide Cavalca also joined in to help tailor fit the approach of the migration proceedings and Fabian imported the CentOS Board and CentOS Infra namespaces as dry runs while making observations as to how the tool can be used at scale in automation.

Gifted With Zealous Mentees
While Rajesh's work could not be merged, I did appreciate the effort he put into understanding the project, and I hope I was able to provide some learnings. Just like him, we had another enthusiastic Outreachy applicant, Peter, who helped fix the deprecation status of the datetime library. Yashwanth helped out by going around the codebase to update the copyright years across the code headers. The one contributor who was immensely helpful was Shounak, who assisted in moving from absolute imports to relative ones and in renaming identifiers for improved readability, thus addressing the previously stated concerns. Finding external contributors was difficult due to the challenges we faced with the VCR.py library failing inexplicably, but the amazing mentees used this as a learning opportunity.
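The post doesn't spell out which deprecation was involved, but a common one from Python 3.12 onwards is datetime.utcnow(); assuming the fix was of that kind, the change looks roughly like this:

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12: datetime.utcnow() returns a
# naive datetime that carries no timezone information.
# stamp = datetime.utcnow()

# Preferred replacement: a timezone-aware UTC datetime
stamp = datetime.now(timezone.utc)
```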

Patience is probably one of the most defining characteristics of those working on free and open source projects. While I try to keep my turnaround time under a week for addressing any open issue tickets or pull requests, as evidenced by those under the v0.1.4 release, sometimes it can take months to get back to a certain piece of work, as evidenced by the codebase changes for improving readability. As I have been taking on more work after my promotion to Senior Software Engineer, I have also begun to include open source artificial intelligence tooling like Ramalama, Ollama and Cursor in my workflow for reviewing external codebase changes and finding alternative performance optimizations - all to ensure that the quality of my work remains high while I context switch from one task to another in momentum.

The AI Scraper Attack
While the previous section covered how including open source artificial intelligence technologies in my workflow helped make me productive, this section is more about how external AI scrapers hindered the progress of the v0.1.4 release of Pagure Exporter. Pagure has been receiving unreasonable amounts of traffic from various AI scrapers for a while now, but things seemed to worsen in the second half of June 2025, when the bombardment of millions of heavy requests led to the service becoming inaccessible to legitimate users. As the project relied on making actual HTTPS Git requests (but mocked HTTPS REST requests) for testing purposes, we could not reliably verify the correctness of the codebase changes, thus negatively affecting the initiative of moving CentOS repos to GitLab.

Even though I run a bunch of selfhosted applications and services on my homelab infrastructure, I am by no means a system administrator, so I had to rely on Kevin Fenzi to block out the offending IP addresses. I have had my fair share of problems from AI scrapers on my testing deployment of Forgejo, so much so that I had to keep it behind Cloudflare verification, so I understood just how difficult it must have been for him to keep the unreasonable requestors at bay. Learning from the deployment of Codeberg, I have been looking into Anubis to understand just how we can leverage it to protect the upstream resources from the AI scrapers. Given that the Fedora Infrastructure was undergoing a datacenter move as of the first week of July 2025, the experimentation (or implementation) of this solution has to wait for later.

Unreliable Libraries For Testing
Imagine something pissing me off so much that I had to write about my experience with it in its own dedicated section! I want to preface the section by saying that for whatever trouble VCR.py had given me since the beginning of 2025, it had been immensely helpful in ensuring that I did not have to make a bunch of requests to an actual server. For some reason, the tests involving VCR.py used to work just fine during development but fail inexplicably on GitHub Actions - and the error messages would be of no help, especially when they were related to failing matchers, existing cassettes, non-existent cassettes, count mismatches etc. There happened to be a bunch of pull requests lined up to address the mentioned concerns, but they were not actively looked into - so I decided that it was about time for me to move away.

And move away I did - to Responses. It was more than a methodology switch though, as it included a shift in philosophy: unlike VCR.py, which used to record real HTTP requests and replay them, Responses mocks the HTTP call entirely. With an increasing roster of over 90 testcases that ensured a stellar 100% codebase coverage, converting the cassettes to Responses would have been a chore. In came my trustworthy AMD Radeon RX6800XT and Ramalama to the rescue: I was able to parse through the VCR.py cassettes to obtain Response Definition objects during the testing runtime. The solution was great, even if I say so myself, as I saved approximately ten to fifteen hours of trudging along (and of course, boredom) to painstakingly port the associated recordings to the respective HTTP testcases.
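The cassette-to-Responses idea can be sketched roughly like this. The cassette schema below follows VCR.py's default serialization (an "interactions" list of request/response pairs), but the helper itself is hypothetical and not the project's actual code:

```python
import json

def cassette_to_mocks(cassette_text):
    """Convert a JSON-serialized VCR.py cassette into plain mock
    definitions that can later be registered with the Responses
    library before a testcase runs."""
    data = json.loads(cassette_text)
    mocks = []
    for pair in data.get("interactions", []):
        request, response = pair["request"], pair["response"]
        mocks.append({
            "method": request["method"],
            "url": request["uri"],
            "body": response["body"]["string"],
            "status": response["status"]["code"],
        })
    return mocks
```

Each resulting entry could then be fed to responses.add() during test setup, so the recorded exchanges are replayed without any network traffic.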

Changelog
Published on PyPI - Pagure Exporter v0.1.4
Published on Fedora Linux - Pagure Exporter v0.1.4
Published on GitHub - Pagure Exporter v0.1.4

From maintainers
- Fixed the deprecation status of the datetime library usage
- Tailor fitted the filters to remove credentials before recordings are stored locally
- Updated the Packit configuration to satiate Packit v1.0.0 release
- Moved away from using absolute imports to using relative imports
- Introduced support for CentOS Git Server (i.e. https://git.centos.org)
- Introduced support for Fedora Dist Git (i.e. https://src.fedoraproject.org)
- Introduced support for different custom Pagure hostnames
- Updated copyright headers across all the codebase headers
- Renamed the identifiers for improved codebase readability
- Moved away from VCR.py to Responses for test caching purposes
- Made various automated dependency and security updates
- Marked the first release of Pagure Exporter in 2025
From GitHub
- Automated dependency updates by @renovate in #90
- Automated dependency updates by @renovate in #91
- Attempt to not mess up the repository secrets by @gridhead in #155
- Fix the deprecation status of the datetime library usage by @olamidepeterojo in #157
- Update dependency black to v25 by @renovate in #159
- Update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 by @renovate in #146
- Update dependency vcrpy to v7 by @renovate in #150
- Automated dependency updates by @renovate in #149
- Update Packit config after Packit v1.0.0 release by @gridhead in #160
- Automated dependency updates by @renovate in #161
- Automated dependency updates by @renovate in #162
- Automated dependency updates by @renovate in #169
- Automated dependency updates by @renovate in #170
- Automated dependency updates by @renovate in #171
- Update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 || ^0.10.0 by @renovate in #173
- Update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 || ^0.10.0 || ^0.11.0 by @renovate in #174
- Update dependency pytest-cov to v6 by @renovate in #147
- Automated dependency updates by @renovate in #175
- Automated dependency updates by @renovate in #176
- Automated dependency updates by @renovate in #177
- Automated dependency updates by @renovate in #178
- Automated dependency updates by @renovate in #179
- Move from using relative imports instead of absolute imports by @sdglitched in #185
- chore: updated copyright years across all the codebase headers by @iamyaash in #183
- chore(deps): automated dependency updates by @renovate in #189
- Introduce support for different Pagure hostnames by @gridhead in #188
- chore(deps): automated dependency updates by @renovate in #192
- chore(deps): automated dependency updates by @renovate in #193
- chore(deps): automated dependency updates by @renovate in #194
- fix(deps): update dependency requests to v2.32.4 [security] by @renovate in #195
- chore(deps): automated dependency updates by @renovate in #196
- Rename identifiers for improved readability by @sdglitched in #191
- Move away from VCR.py to Responses by @gridhead in #200
- chore(deps): update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 || ^0.10.0 || ^0.11.0 || ^0.12.0 by @renovate in #197
- chore(deps): automated dependency updates by @renovate in #198
- Version bump from v0.1.3 to v0.1.4 by @gridhead in #202
New contributors
- @olamidepeterojo made their first contribution in #157
- @sdglitched made their first contribution in #185
- @iamyaash made their first contribution in #183
02 Jul 2025 6:30am GMT
01 Jul 2025
Fedora People
Fedora Community Blog: Fedora DEI Outreachy Intern - My first month Recap
Hey everyone!
It's already been a month - I can't believe how time flies so fast. A busy time?? Flock, the Fedora DEI and Documentation workshop?? All in one month.
As a Fedora Outreachy intern, my first month has been packed with learning and contributions. This blog shares what I worked on and how I learned to navigate open source communities.
First, I would like to give a shoutout to my amazing Mentor, Jona Azizaj for all the effort she has put into supporting me. Thank You, Jona!
Highlights from June
Fedora DEI & Docs Workshop
One of the biggest milestones this month was planning and hosting my first Fedora DEI & Docs Workshop. This virtual event introduced new contributors to Fedora documentation, showed them how to submit changes, and gave a live demo of fixing an issue - definitely a learning experience in event organizing!
You can check the Discourse post; all information is in the post itself, including slides and comments.
Flock 2025 recap
I wrote a detailed Flock to Fedora recap article, covering the first two days of talks streamed from Prague. From big announcements about Fedora's future to deep dives into mentorship, the sessions were both inspiring and practical. Read the blog magazine recap.
Documentation contributions
This month, I have contributed to multiple docs areas, including:
- DEI team docs - Updated all the broken links in the docs.
- Outreachy DEI page and Outreachy mentored projects pages (under review) - I updated content and added examples of past interns and how Outreachy shaped their journeys even beyond the internship.
- How to Organize events section - Created a step guide for event planning.
- Past event section - Documented successful Fedora DEI activities. It serves as an archive for our past events.
Collaboration and learning
The good part? It's great to work closely with others, and I'm learning this in the open source space. I spent some time working with other teams as well:
- Mindshare Committee - Learned how to request funding for events
- Design team - I had amazing postcards prepared, thanks to the Design team
- Marketing - Got the Docs workshop promoted to different Fedora social accounts
- Documentation team - Especially with Petr Bokoc, who shared a detailed guide on how you can easily contribute to the Docs pages.
A great learning experience. One thing I can say about people in open source (in Fedora): they're super amazing and gentle. Cheers - I'm enjoying my journey.
My role in Join Fedora SIG
Oh, I thought it would be good to mention this as well: I am also part of the Join SIG, which helps newcomers find their place in Fedora. Through it, I've been able to understand how the community works, including onboarding and mentorship.
What I've learned
- How to collaborate asynchronously - Video calls, and chats.
- How to chair meetings - I chaired two DEI Team meetings this month. The first one was challenging, but by the second, I felt confident and even enjoyed it. Before this, I didn't know how meetings could be held over text.
- How open source works - From budgeting to marketing, I'm learning how many moving pieces make Fedora possible.
What's next
I plan to revisit the Event checklist and revamp it, work with my mentor Jona and make it meaningful and useful for future events.
Also to continue improving the DEI docs, and promoting Fedora's DEI work.
Last word
This month has already been full of learning and growth. If you're also interested in helping out the DEI work, reach out to us in the matrix room.
Thanks for reading!
Your Friend in Open Source.
The post Fedora DEI Outreachy Intern - My first month Recap appeared first on Fedora Community Blog.
01 Jul 2025 12:00pm GMT
Fedora Community Blog: Simplifying Fedora Package Submission Progress Report - GSoC '25
Student: Mayank Singh
- Fedora Account: manky201
About Project
Hi everyone, I'm working on building a service to make it easier for packagers to submit new packages to Fedora, improving upon and staying in line with the current submission process. My main focus is to automate away trivial tasks, provide fast and clear feedback, and tightly integrate with Git-based workflows that developers are familiar with.
This month
I focused on presenting a high-level architecture of the service to the Fedora community and collecting early feedback. These discussions were incredibly helpful in shaping the design of the project. In particular, they helped surface early concerns and identify important edge cases that we will need to support.
The key decision is to go with a monorepo model:
Each new package submission will be a Pull Request to a central repository where contributors submit their spec files and related metadata.
The service will focus on:
- Running a series of automated checks on the package (e.g. rpmlint)
- Detecting common issues early
- Reporting the feedback and results in the same PR thread for fast feedback loops
- Keeping the logic abstract and forge-agnostic, reusing packit-service's code and layering new handlers on top of it
Currently, I'm working on setting up the local development environment and testing for the project with packit-service.
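The rpmlint feedback step above could be sketched as follows. The function name, the report shape, and the summary-line format (classic rpmlint 1.x) are assumptions, not the service's actual code; rpmlint 2.x may format its summary differently:

```python
import re

def summarize_rpmlint(output):
    """Parse rpmlint's closing summary line, e.g.
    '0 packages and 1 specfiles checked; 2 errors, 3 warnings.'
    Returns None if no summary line is found."""
    match = re.search(r"(\d+) errors?, (\d+) warnings?", output)
    if match is None:
        return None
    errors, warnings = int(match.group(1)), int(match.group(2))
    return {
        "errors": errors,
        "warnings": warnings,
        # Only hard errors should block the submission PR;
        # warnings can be surfaced in the PR comment instead.
        "verdict": "failed" if errors else "passed",
    }
```

A result like this could then be posted back into the submission PR thread to close the feedback loop.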
What's Next?
I'll be working on getting a reliable testing environment ready and writing code for COPR integration for builds and the next series of post-build checks. All the code can be found at avant.
Thanks to my mentor Frantisek Lachman and the community for the great feedback and support.
Looking forward to sharing further updates.
The post Simplifying Fedora Package Submission Progress Report - GSoC '25 appeared first on Fedora Community Blog.
01 Jul 2025 10:07am GMT
30 Jun 2025
Fedora People
Fedora Infrastructure Status: Datacenter Move outage
30 Jun 2025 1:00am GMT