09 May 2025
Fedora People
Remi Collet: ⚙️ PHP version 8.3.21 and 8.4.7
RPMs of PHP version 8.4.7 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.3.21 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no updates for versions 8.1.32 and 8.2.28.
⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.
These versions are also available as Software Collections in the remi-safe repository.
Version announcements:
ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.4 installation (simplest):
dnf module switch-to php:remi-8.4/common
Parallel installation of version 8.4 as Software Collection
yum install php84
Replacement of default PHP by version 8.3 installation (simplest):
dnf module switch-to php:remi-8.3/common
Parallel installation of version 8.3 as Software Collection
yum install php83
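Once installed, a quick check confirms which version is active and which extensions are loaded (a minimal sketch; output depends on the version and mode you chose):
php --version
php -m | grep -i intl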
And soon in the official updates:
- Fedora Rawhide now has PHP version 8.4.7
- Fedora 42 - PHP 8.4.7
- Fedora 41 - PHP 8.3.21
⚠️ To be noted:
- EL-10 RPMs are built using RHEL-10.0-beta
- EL-9 RPMs are built using RHEL-9.5
- EL-8 RPMs are built using RHEL-8.10
- intl extension now uses libicu74 (version 74.2)
- mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
- oci8 extension now uses the RPM of Oracle Instant Client version 23.7 on x86_64 and aarch64
- a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84)
09 May 2025 5:24am GMT
08 May 2025
Fedora People
Peter Czanik: syslog-ng 4.8.2 is now available
08 May 2025 10:56am GMT
07 May 2025
Fedora People
Fedora Magazine: Nominate Your Fedora Heroes: Mentor and Contributor Recognition 2025
The Fedora Project is built on the dedication, mentorship, and relentless efforts of contributors who continuously go above and beyond. From reviewing pull requests to onboarding new community members, from writing documentation to organizing events - it's these quiet champions who make Fedora thrive. As part of the Fedora Mentor Summit, we will announce the results. This wiki page describes the sentiment and the thought that went into this recognition programme.
As we gear up to recognize outstanding mentors and contributors in our community, we invite you to nominate those individuals who've made a lasting impact - the ones who've guided, inspired, or stood out through their unwavering contributions. Whether it's a long-time mentor who helped you take your first steps, or a contributor whose work has left a mark across Fedora's landscape - now is the time to celebrate them! Read more about the nomination process and submit your nomination at the link below:
Submit your nominations here: https://forms.gle/xB8ng7GH9niT2Sza8
Deadline: 16 May 2025
Let's spotlight the amazing humans who power Fedora. Your nomination could be the recognition someone has long deserved - and a moment of pride for our whole community.
07 May 2025 11:41pm GMT
Ben Cotton: Use reserved domains and IPs in examples
A while back I posted in frustration on various social media platforms: I was reading software documentation and it used some made-up domain as example text. This is bad! But in the replies to my post, some people weren't aware of reserved domains and IP addresses, so this seems like a good opportunity to share what I know.
Why reserve domains and IPs?
The most important answer is to protect the users. Imagine I was writing documentation or building an example configuration file for some software. I might think "duckalignment.academy" is a fun domain name to use as a placeholder. It's unregistered, so there's no harm.
Until someone registers it. Then it could be whatever the registrant wants, including a malicious service. If someone forgets to update the example configuration before launching the software, they're at the mercy of the domain owner.
The other reason to use reserved domains and IPs is that it makes placeholders more obvious. If a configuration file or documentation contains "duckalignment.academy", it's less obvious that you need to replace it than if it used "example.com". Example values that are unambiguously examples are much friendlier to your users.
Which domains and IPs are reserved?
Several standards define reserved domains and IPs. RFC 2606 defines several reserved top-level domains, including .example for use in examples and documentation. It also reserves example.com, example.net, and example.org. RFC 6761 gives instructions on how those domains should be treated.
RFC 5737 reserves three IP address blocks for documentation: 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24. Using IPv6? RFC 3849 reserves 2001:DB8::/32. RFC 9637 added a reservation for 3fff::/20 last year in order to preserve a range big enough to encompass modern real-world networks.
Using example domains and IPs
Please don't use domains like "foo" or "bar" in your documentation and sample configuration files. They're not helpful and can actually prove harmful to your users. The reserved domains and IP blocks are almost always what you need. If they aren't for whatever reason, ensure that you own the domain you're using and commit to owning it for at least the life of your project (and ideally far beyond that).
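For instance, a hypothetical sample configuration entry built only from the reserved values above might look like this:
# placeholder values from RFC 2606 / RFC 5737 / RFC 3849 - replace before use
192.0.2.10 app.example.com
198.51.100.7 db.example.net
2001:db8::1 api.example.org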
This post's featured photo by mk. s on Unsplash.
The post Use reserved domains and IPs in examples appeared first on Duck Alignment Academy.
07 May 2025 12:00pm GMT
Fedora Magazine: Start Planning Fedora 43 Test Days!
Each Fedora release is only possible thanks to the dedication of many contributors. One of the most important ways you can get involved is by participating in Test Days! This article describes the steps in proposing and scheduling test days.
As Fedora 43 development moves ahead, it's time to start planning and proposing Test Days. A Test Day is a focused event where contributors and users come together to test a specific feature, component, or area of the Fedora distribution. These events usually take place in a test-day Matrix channel for live interaction, with results coordinated through a Fedora Wiki page and the Test Days App. Test Days play a critical role in ensuring Fedora Linux continues to deliver a stable and high-quality experience.
Test Days can focus on many things - not just code! We regularly host Test Days for localization (l10n), internationalization (i18n), desktop environments like GNOME, and major system components like the Linux kernel. You can learn more about Fedora QA Test Days here!
How to propose a Test Day
Anyone can propose and host a Test Day! Whether you want to lead it yourself, collaborate with the Fedora QA team, or just need a little help getting started, you're welcome to participate.
To propose a Test Day, simply file a ticket in the fedora-qa pagure and tag it with test days. You can see an example here.
If you're new to organizing, we have a full guide to help you set up and run a successful event. The information at SOP: Test Day Management will go a long way to help you.
The current schedule of Test Days and the open slots are listed here. When selecting a date, please keep in mind the Fedora 43 development milestones, such as the Beta Freeze and Final Freeze.
Scheduling notes
We traditionally schedule Test Days on Thursdays. However, if you are organizing a series of related Test Days (for example, the Kernel or GNOME Test Weeks), we often schedule them over Tuesday, Wednesday, and Thursday. If Thursday slots are full, or special timing is needed for your topic, don't worry - we can open up additional days.
Just note your preferred dates when filing your ticket, and we'll work with you!
Help with ongoing Test Days
If you don't want to host your own Test Day but would still like to help, you can participate in ongoing events, including:
- GNOME Test Day
- i18n Test Day
- Kernel Test Week(s)
- Upgrade Test Day
- IoT Test Week
- Cloud Test Day
- Fedora CoreOS Test Week
These recurring Test Days help ensure that major areas of Fedora are working well across the release cycle.
Questions?
If you have any questions about Test Days - whether proposing, organizing, or participating - please don't hesitate to contact the Fedora QA team via Matrix, via email at test@lists.fedoraproject.org, or on IRC in #fedora-qa on Libera Chat.
We look forward to seeing you at Fedora 43 Test Days!
07 May 2025 8:00am GMT
06 May 2025
Fedora People
Felipe Borges: It’s alive! Welcome to the new Planet GNOME!
A few months ago, I announced that I was working on a new implementation of Planet GNOME, powered by GitLab Pages. This work has reached a point where we're ready to flip the switch and replace the old Planet website.
You can check it out at planet.gnome.org
This was only possible thanks to various other contributors, such as Jakub Steiner, who did a fantastic job with the design and style, and Alexandre Franke, who helped with various papercuts, ideas, and improvements.
As with any software, there might be regressions and issues. It would be a great help if you report any problems you find at https://gitlab.gnome.org/Teams/Websites/planet.gnome.org/-/issues
If you are subscribed to the old Planet's RSS feed, you don't need to do anything. But if you are subscribed to the Atom feed at https://planet.gnome.org/atom.xml, you will have to switch to the RSS address at https://planet.gnome.org/rss20.xml
Here's to blogs, RSS feeds, and the open web!
06 May 2025 3:46pm GMT
05 May 2025
Fedora People
Fedora Magazine: Building your own Atomic (bootc) Desktop
Bootc and associated tools provide the basis for building a personalised desktop. This article will describe the process to build your own custom installation.
Disclaimer
Building and using a custom installation is "at your own risk". Your installation may be harder to find support for when compared with a mainstream solution.
Motivation
There has been an increasing interest in atomic distros, which offer significant benefits in terms of stability and security.
These distros apply updates as a single transaction, known as an atomic upgrade, which means that if an update doesn't work as expected, the system can instantly roll back to its last stable state, saving users from potential issues. The immutable nature of the filesystem reduces the risk of system corruption and unauthorised modification, as the core system files are mounted read-only.
If you are planning to spin off various instances from the same image (e.g. setting up computers for members of your family or work), atomic distros provide a reliable desktop experience where every instance of the desktop is consistent with each other, reducing discrepancies in software versions and behaviour.
Mainstream sources like Fedora and Universal Blue offer various atomic desktops with curated configurations and package selections for the average user. But what if you're ready to take control of your desktop and customise it entirely, from packages and configurations to firewall, DNS, and update schedules?
Thanks to bootc and the associated tools, building a personalised desktop experience is no longer difficult.
What is bootc?
Using existing container building techniques, bootc allows you to build your own OS. Images adhere to the OCI specification, so you can use standard container tools to build and transport them. Once installed on a node, the container functions as a regular OS.
The filesystem structure follows ostree specifications:
- The /usr directory is read-only, with all changes managed by the container image.
- The /etc directory is editable, but any changes applied in the container image will be transferred to the node unless the file was modified locally.
- Changes to /var (including /var/home) are made during the first boot. Afterwards, /var remains untouched.
You can find the full documentation for bootc here: https://bootc-dev.github.io/bootc/
Creating your own bootc desktop
The approach described in this article uses quay.io/fedora/fedora-bootc as a base image to create a customizable container for building your personalised Fedora KDE atomic desktop.
Although tailored to KDE Plasma, most of the concepts and methodologies described here also apply to other desktop environments.
The kde-bootc repository
I published kde-bootc as a repository on GitHub, and I will use it as a reference. It supplements this explanation with additional details and gives you a source to clone and experiment with. You may wish to clone kde-bootc to follow along.
Folder structure:
- scripts/
- system/
- systemd/
- Containerfile
scripts: Scripts to be run from the Containerfile during the build
system: Files to be copied to /usr and /etc
systemd: Systemd unit files to be copied to /usr/lib/systemd
Each file follows a specific naming convention. For instance, the file /usr/lib/credstore/home.create.admin is named usr__lib__credstore__home.create.admin
Explaining the Containerfile
The following describes, step by step, the contents of the example Containerfile.
Image base
The fedora-bootc project is part of the Cloud Native Computing Foundation (CNCF) Sandbox projects and generates reference "base images" of bootable containers designed for use with the bootc project.
In this example, I'm using quay.io/fedora/fedora-bootc as the base image. The containerfile starts off with:
FROM quay.io/fedora/fedora-bootc
Setup filesystem
If you plan to install software on day 2, i.e. after the kde-bootc installation is complete, you may need to link /opt to /var/opt. Otherwise, /opt will remain an immutable directory that you can only populate from the container build.
RUN rmdir /opt
RUN ln -s -T /var/opt /opt
In some cases, for successful package installation, the /var/roothome directory must exist. If this folder is missing, the container build may fail. It is advisable to create this directory before installing the packages.
RUN mkdir /var/roothome
Prepare packages
To simplify the installation, and to have a record of installed and removed packages for future reference, I found it useful to keep them as a resource under /usr/local/share.
- All additional packages to be installed on top of fedora-bootc and the KDE environment are documented in packages-added.
COPY --chmod=0644 ./system/usr__local__share__kde-bootc__packages-added /usr/local/share/kde-bootc/packages-added
- Packages to be removed from fedora-bootc and the KDE environment are documented in packages-removed.
COPY --chmod=0644 ./system/usr__local__share__kde-bootc__packages-removed /usr/local/share/kde-bootc/packages-removed
- For convenience, the packages included in the base fedora-bootc are documented in packages-fedora-bootc.
RUN jq -r .packages[] /usr/share/rpm-ostree/treefile.json > /usr/local/share/kde-bootc/packages-fedora-bootc
Install repositories
This section handles adding extra repositories needed before installing packages.
In this example, I'm adding Tailscale, but the same principle applies to any other source you may add to your repositories.
Adding repositories uses the config-manager verb, available as a DNF5 plugin. This plugin is not pre-installed by default in fedora-bootc, so it will need to be installed beforehand.
RUN dnf -y install dnf5-plugins
RUN dnf config-manager addrepo --from-repofile=https://pkgs.tailscale.com/stable/fedora/tailscale.repo
Install packages
For clarity and task separation, I divided the installation into two steps:
Installation of environment and groups.
RUN dnf -y install @kde-desktop-environment
And the installation of all other individual packages. The command selects all lines not starting with #, passing them as arguments to dnf -y install. The --allowerasing option is necessary for cases like installing vim-default-editor, which conflicts with nano-default-editor; the option removes the latter first.
RUN grep -vE '^#' /usr/local/share/kde-bootc/packages-added | xargs dnf -y install --allowerasing
PACKAGES-ADDED
# LibreOffice
libreoffice
libreoffice-help-en
# Utilities
vim-default-editor
git
....
Remove packages
Some of the standard packages included in @kde-desktop-environment don't behave well and sometimes conflict with an immutable desktop, so we will remove them.
This is also an opportunity to remove software you may never use, saving resources and storage.
RUN grep -vE '^#' /usr/local/share/kde-bootc/packages-removed | xargs dnf -y remove
RUN dnf -y autoremove
RUN dnf clean all
The criteria used to remove some packages are listed below:
Conflict with bootc and its immutable nature.
plasma-discover-offline-updates
plasma-discover-packagekit
PackageKit-command-not-found
Bring unwanted dependencies.
tracker
tracker-miners
mariadb-server-utils
abrt
at
dnf-data
Deprecated services.
iptables-services
iptables-utils
Packages that are resource-heavy, or bring unnecessary services.
rsyslog
dracut-config-rescue
Configuration
This section will copy all necessary configuration files to /usr and /etc. As recommended by the bootc project, prioritise using /usr and use /etc as a fallback if needed.
Bash scripts that will be used by systemd services are stored in /usr/local/bin:
COPY --chmod=0755 ./system/usr__local__bin/* /usr/local/bin/
Custom configuration for new users' home directories will be added to /etc/skel/. As an example, you can customise bash; a hypothetical snippet follows the COPY below.
COPY --chmod=0644 ./system/etc__skel__kde-bootc /etc/skel/.bashrc.d/kde-bootc
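For illustration, a minimal /etc/skel/.bashrc.d/kde-bootc could contain something like the following (these aliases and variables are assumptions, not the repository's actual contents):
# hypothetical shell defaults applied to every new user
alias ll='ls -l --color=auto'
export EDITOR=vim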
If you're building your container image on GitHub and keeping it private, you'll need to create a GITHUB_TOKEN to download the image. Further information is available at GitHub container registry.
COPY --chmod=0600 ./system/usr__lib__ostree__auth.json /usr/lib/ostree/auth.json
Users
I opted for systemd-homed users because they are better suited than regular users for immutable desktops, preventing potential drift in case of local modifications to /etc/passwd. Additionally, each user home benefits from a LUKS-encrypted volume.
The process begins when firstboot-setup runs, triggered by firstboot-setup.service during boot. It executes homectl firstboot, which checks if any regular home areas exist. If none are found, it searches for service credentials starting with home.create. to create users at boot.
The parameter below imports service credentials into the systemd service:
FIRSTBOOT-SETUP.SERVICE
...
ImportCredential=home.create.*
For more details, refer to the homectl and systemd.exec manual pages.
The homed identity file (usr__lib__credstore__home.create.admin) sets the user's parameters, including username, real name, storage type, etc.
Common systemd-homed parameters:
- userName: A single word for your username and home directory. In this example, it is admin.
- realName: Full name for the user
- diskSize: The size of the LUKS storage volume, calculated in bytes. For instance, 1 GB equals 1024x1024x1024 bytes, which is 1073741824 bytes.
- rebalanceWeight: Relevant only when multiple user accounts share the available storage. If diskSize is defined, this parameter can be set to false.
- uid/gid: User and Group ID. The default range for regular users is 1000-60000, and for systemd-homed users it is 60001-60513. However, you can assign uid/gid for systemd-homed users from both ranges.
- memberOf: The groups the user belongs to. As a power user, it should be part of the wheel group.
- hashedPassword: This is the hashed version of the password stored under secret. Setting up an initial password allows homectl firstboot to create the user without prompting. This password should be changed afterwards (homectl passwd admin). The hashed password can be created using the mkpasswd utility, as sketched below.
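As a sketch, generating that hash could look like this (option names vary between mkpasswd implementations; check mkpasswd --help on your system):
mkpasswd --method=sha-512
# prompts for a password, then prints a $6$... hash to place under hashedPassword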
We are storing the identity file in one of the directories where systemd-homed expects to find credentials.
COPY --chmod=0644 ./system/usr__lib__credstore__home.create.admin /usr/lib/credstore/home.create.admin
For more information on user records, visit: https://systemd.io/USER_RECORD/
This section also creates a temporary password for the root user. As I will explain later, having a root user as an alternative login is important.
echo "Temp#SsaP" | passwd root -s
Subuid and Subgid:
Another key parameter to set up is the range for /etc/subuid and /etc/subgid for the admin user. This range is necessary for running rootless containers since each uid inside the container will be mapped to a uid outside the container within this range. Systemd-homed predefines ranges for uid/gid.
The available range is 524288…1879048191. Choosing 1000001 makes it easy to identify the service running in the container. For instance, if the container is running Apache with uid=48, the volume or folder bound to it will have uid=1000048.
echo "admin:1000001:65536">/etc/subuid
echo "admin:1000001:65536">/etc/subgid
For more information on available ranges, visit: https://systemd.io/UIDS-GIDS/
The next step sets up authselect to enable authenticating the admin user on the login screen. To achieve this, we need to enable the features with-systemd-homed and with-fingerprint (if your computer has a fingerprint reader) for the local profile.
authselect enable-feature with-systemd-homed
authselect enable-feature with-fingerprint
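You can verify the resulting profile and its enabled features at any time with:
authselect current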
Systemd services
I decided to install at least two services: one to complete the configuration during machine boot, running commands that require systemd (firstboot-setup.service), and another to automate updates (bootc-fetch.service).
We are enabling, by default, the first systemd service firstboot-setup:
COPY --chmod=0644 ./systemd/usr__lib__systemd__system__firstboot-setup.service /usr/lib/systemd/system/firstboot-setup.service
RUN systemctl enable firstboot-setup.service
USR__LIB__SYSTEMD__SYSTEM__FIRSTBOOT-SETUP.SERVICE
[Unit]
Description=Setup USERS and /VAR at boot
After=multi-user.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/firstboot-setup
ImportCredential=home.create.*
[Install]
WantedBy=multi-user.target
And it runs the script below:
FIRSTBOOT-SETUP
# Setup hostname
HOST_NAME=kde-bootc
hostnamectl hostname $HOST_NAME
# Create user(s)
homectl firstboot
# Setup firewall to allow kdeconnect to function
firewall-cmd --set-default-zone=public
firewall-cmd --add-service=kdeconnect --permanent
We are triggering bootc-fetch daily by a timer as a second systemd service:
COPY --chmod=0644 ./systemd/usr__lib__systemd__system__bootc-fetch.service /usr/lib/systemd/system/bootc-fetch.service
COPY --chmod=0644 ./systemd/usr__lib__systemd__system__bootc-fetch.timer /usr/lib/systemd/system/bootc-fetch.timer
USR__LIB__SYSTEMD__SYSTEM__BOOTC-FETCH.TIMER
[Unit]
Description=Fetch bootc image daily
[Timer]
OnCalendar=*-*-* 12:30:00
Persistent=true
[Install]
WantedBy=timers.target
USR__LIB__SYSTEMD__SYSTEM__BOOTC-FETCH.SERVICE
[Unit]
Description=Fetch bootc image
After=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/bin/bootc update --quiet
This service replaces bootc-fetch-apply-updates, which would download and apply updates as soon as they are available. That approach is problematic because it causes your computer to shut down without warning, so it is better to disable it by masking the timer:
RUN systemctl mask bootc-fetch-apply-updates.timer
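With automatic applying masked, updates fetched by the timer are only staged; they take effect at the next reboot, so you can apply them on your own schedule (a minimal sketch):
bootc status
# shows the booted image and any staged update
systemctl reboot
# boots into the staged image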
How to create an ISO?
The instructions that follow will build the container locally. You need to do it as root so bootc-image-builder can use the image to make the ISO.
cd /path-to-your-repo
sudo podman build -t kde-bootc .
Then, in a different directory outside the repository, create a folder named output for the ISO image. You also need to create the configuration file config.toml to feed the installer.
CONFIG.TOML
[customizations.installer.kickstart]
contents = "graphical"
[customizations.installer.modules]
disable = [
"org.fedoraproject.Anaconda.Modules.Users"
]
It instructs the installer to use the graphical interface and disable the module for user creation. We do not need to set up a user during installation, as this is already being taken care of.
Within the directory where ./output/ and ./config.toml exist, run the bootc-image-builder utility, which is available as a container. It must be run as root.
sudo podman run --rm -it --privileged --pull=newer \
--security-opt label=type:unconfined_t \
-v ./output:/output \
-v /var/lib/containers/storage:/var/lib/containers/storage \
-v ./config.toml:/config.toml:ro \
quay.io/centos-bootc/bootc-image-builder:latest \
--type iso \
--chown 1000:1000 \
localhost/kde-bootc
If everything goes well, the ISO image will be available in the ./output directory. You can use Fedora Media Writer to create a bootable USB, or put the image on a portable drive such as a flash disk.
At the time of writing, the installer uses Anaconda and functions like any other Fedora flavor installation.
For more information on bootc-image-builder, visit: https://github.com/osbuild/bootc-image-builder
Post installation
The first step is to restore the SELinux context for the systemd-homed home directory. Without this, you may not be able to log in as admin. To complete this task, log in as root, activate the admin home area, and then run restorecon to restore the SELinux context.
homectl activate admin
<< enter password for admin
restorecon -R /home/admin
homectl deactivate admin
At this point, you can change the passwords for root and admin:
passwd root
homectl passwd admin
After completing these steps, you can log out from root and log in to admin.
If your computer has a fingerprint reader, setting it up is not possible from Plasma's user settings, as systemd-homed is not yet recognised by the desktop. However, you can manually enroll your fingerprint by running fprintd-enroll and placing your finger on the reader as you normally would.
sudo fprintd-enroll admin
Similarly, you cannot set up the avatar from Plasma's user settings, but you can copy an available avatar (PNG file) from Plasma's avatar directory to the account service's directory. The file name needs to be the same as the username:
/usr/share/plasma/avatars/<avatar.png> -> /var/lib/AccountsService/icons/admin
Finally, enable the service to keep your system updated and any other desired services:
systemctl enable --now bootc-fetch.timer
systemctl enable --now tailscaled
Troubleshooting
Drifts on /etc
Please note that a configuration file in /etc drifts when it is modified locally. Consequently, bootc will no longer manage this file, and new releases won't be transferred to your installation. While this might be desired in some cases, it can also lead to issues.
For instance, if /etc/passwd is locally modified, uid or gid allocations for services may not get updated, resulting in service failures.
Use ostree admin config-diff to list the files in your local /etc that are no longer managed by bootc because they were modified or added locally.
If a particular configuration file needs to be managed by bootc, you can revert it by copying the version created by the container build from /usr/etc to /etc.
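For example, to bring a drifted file back under bootc management (the file name here is purely illustrative):
# hypothetical example: restore the image's version of a drifted config file
cp -a /usr/etc/chrony.conf /etc/chrony.conf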
Adding packages after first installation
The /var directory is populated in the container image and transferred to your OS during initial installation. Subsequent updates to the container image will not affect /var. This is the expected behavior of bootc and generally works fine. However, some RPM packages execute scriptlets after installation, resulting in changes to /var that will not be transferred to your OS.
Instead of trying to identify and update the missing bits in /var, I found it easier to overlay /usr (bootc usr-overlay) and reinstall the packages (dnf reinstall ..) after updating and rebooting bootc.
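A sketch of that day-2 workflow (the package name is hypothetical):
sudo bootc usr-overlay
# /usr is now writable until the next reboot
sudo dnf reinstall some-package
# re-runs the scriptlets so the missing bits land in /var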
References
GitHub - kde-bootc: https://github.com/sigulete/kde-bootc
GitHub - bootc: https://github.com/bootc-dev/bootc
GitLab - fedora-bootc: https://gitlab.com/fedora/bootc
05 May 2025 8:00am GMT
04 May 2025
Fedora People
Guillaume Kulakowski: OpenWRT 24.10 behind a Freebox: IPv6, DMZ and Bridge
04 May 2025 8:12am GMT
03 May 2025
Fedora People
Akashdeep Dhar: Expecting Accountability In Open Source
For the longest time in my professional career and while contributing to free and open source software communities, I have struggled with expecting accountability from others. Much of this came from the anxiety that I experienced during the process of holding someone accountable. It often stemmed from concerns about potential conflicts, fears of being perceived negatively or doubts about self-worth. The situation only worsened when it was my friends whom I was holding responsible for their decisions. It made me wonder whether it was worth risking relationships just to get things done, or whether I should rather settle for compromise.
That is, of course, a rhetorical question. I do not want to look like a surgeon who amputates an entire arm just because of a papercut on the little finger. I cannot expect the situations to change if I give up on individuals entirely just to avoid potential friction. After all, letting folks know about how uncomfortable the situation feels is often the best way to prevent the dangerous precedents created from instances of irresponsibility. Since accountability is a two-way street, I want to use this reflective (and possibly, therapeutic) post to share some grounded strategies that I rely on - and maybe, they will be useful to you too someday.
Assume best intentions

Last-minute meeting cancellations are frustrating - especially when someone gives you the same excuse for the forty-second time. But still, assume (not necessarily believe) that they missed the one-on-one meeting because their truck indeed broke down again. It is of paramount importance to reduce defensiveness by assuming that they are doing their best and figuring out what could be improved. Maybe the meeting time just doesn't work for them - but that is not worth straining the relationship over. It is crucial to create an environment that is safe for owning mistakes, not one focussed on determining who is right or wrong in an argument.
Elucidate your objective

If you are anything like me, the very thought of being misinterpreted must fill you with excruciating pain. Clarifying where you are coming from helps communicate that you are asking for responsibility (not justification) and improvement (not punishment). This metaphorical white flag prevents others from feeling blindsided or attacked while establishing that both sides are trying to achieve the same goal in different subjective ways. Of course, I have been guilty of over-explaining myself, and that has only opened doors to weakened conviction and unnecessary debate. So, definitely stick to being clear while staying concise in these conversations.
Assemble your facts

Keeping track of what actually happened is perhaps the best way to ensure that the conversation stays grounded and objective. As someone who is good at crucial conversations but avoids unnecessary conflicts, this silver bullet has increased the likelihood of my concerns being taken seriously. I have also noted that people are more open to feedback when I emphasize observable behaviours over subjective opinions. By focusing on what requires changing and respecting their dignity, the energy in the conversation is directed more toward fixing things and personal improvements, and less toward venting problems or emotional escalation.
Discomfort is expected

While it might sound silly until put into practice, deep breaths truly help calm the fight-or-flight system. During synchronous conversations like calls, a pause creates a mental buffer that can help you craft effective responses and avoid regrettable reactions. Heck, with asynchronous conversations like emails, sleep on it - even though it can be difficult to shake off the uncomfortable feeling - so you can return with an emotional state that you are in control of. For what it's worth, the energy spent steadying your nerves under pressure helps build your resilience for future exchanges, which might be just as uncomfortable but necessary, at the same time.
Consider the conditions

If time travel were possible, I would tell my younger self that conversations are inherently difficult. There are many ways things can backfire, so one must choose their tone and timing intentionally to ensure the message is respected and requests are enacted. While you should approach people in good faith, it is important to be explicit about the same to avoid assumptions of conflict. If someone is stressed or unprepared, wait it out - you need to ensure that your message lands effectively while respecting their state of mind. Of course, don't wait forever - but definitely establish a professional standard for emotional protection in conversations.
Rehearse with friends

Or contributors. Or associates. Or managers. Basically, anyone you feel safe with. Rely on them to vent your problems while preparing your messaging. Once they help you avoid unclear language, emotional tone, or unintended blame, your message can become more refined in purpose, and you can become more confident in your stance. Practicing crucial conversations with your safe people helps build your (and arguably their) resilience, so those become more natural going forward. Also, if you are like me who overthinks their problems, these safe conversations help you cut to the chase without spiralling into perfectionism and agitation.
Recognize your safety

I wrote this, but I know it will take me at least a decade more to fully internalize this idea into practice. I often find myself checking my messages - every now and then - after sending a request expecting accountability because, somehow, my imposter syndrome leads me to believe that it is not my place to ask questions. Of course, I could not be more mistaken in believing so when outcomes, relations and commitments are on the line. My mental trick is evaluating which is worse - myself being misinterpreted or them breaking commitments - and suddenly, my doubts start clearing away, and I find myself composing an email or message to folks.
Believe me, as someone who has been misunderstood many times, I know just how tricky it can be to resist the temptation to let things slide. But casting problems aside would only mean that I do not care enough about the professional career and community circles I contribute to. That is not me, and I am pretty sure you feel the same way too. Our perspectives are valid, and we do not need permission to raise concerns. As building confidence is a continuous journey, we should set a healthy precedent of shared responsibility and an open culture that rewards fairness from authority, commitment, and the willingness of the people involved to raise concerns.
Embrace imperfect outcomes

This one, like the previous point, is a mindset shift, and it will take a considerable period before it comes naturally. Conversations expecting accountability often end in splitting the difference to ensure both sides are comfortable with what was agreed upon. I could be willing to deliver 150% of my potential, but it would be criminally wrong on my part to expect the same from others. Acknowledging that growth and fixing can be complicated, accountability should be viewed as a long-term goal to work toward, rather than an immediate remedy. Such exchanges need both parties to be flexible and focused, with emphasis on potential over perfection.
Being an evolving process, accountability requires a flywheel effect, where the role of driving it shifts among folks to maintain momentum. Since accountability is a two-way street, questions will be raised about your commitment too - but holding the same standards you expect from others will resolve that situation. In a culture where growth is rewarded, your pursuit of accountability can become sustainable, with people joining your efforts and personal relationships getting better. You will have more success making an influential change with most (if not all) hands on deck and taking it gradually, rather than expecting things to transform overnight.
03 May 2025 6:30pm GMT
Kevin Fenzi: review of the SLZB-06M
I've been playing with Home Assistant a fair bit of late and I've collected a bunch of interesting gadgets. Today I'd like to talk about / review the SLZB-06M.
So the first obvious question: what is a SLZB-06M?
It is a small, Ukrainian-designed device that is a "Zigbee 3.0 to Ethernet, USB, and WiFi Adapter". So, basically, you connect it to your wired network, or via USB, or via wifi, and it gateways that to a Zigbee network. It's really just an ESP32 with a shell and ethernet/wifi/bluetooth/zigbee, but all assembled for you and ready to go.
I'm not sure if my use case is typical for this device, but it worked out for me pretty nicely. I have a pumphouse that is down a hill and completely out of line-of-sight of the main house/my wifi. I used some network over power/powerline adapters to extend a segment of my wired network over the power lines that run from the house to it, and that worked great. But then I needed some way to gateway the zigbee devices I wanted to put there back to my homeassistant server.
The device came promptly and was nicely made. It has a pretty big antenna and everything is pretty well labeled. On powering it up, Home Assistant detected it with no problem and added it. However, then I was a bit confused. I already have a USB zigbee adapter on my Home Assistant box and the integration was just showing things like the temp and firmware. I had to resort to actually reading the documentation! :)
Turns out the way the zigbee integration works is via zigbee2mqtt. You add the repo for that, install the add-on and then configure a user. Then you configure the device via its web interface on the network to match that. Then, the device shows up in a zigbee2mqtt panel. Joining devices to it is a bit different from a normal wifi setup: you need to tell it to 'permit join', either anything, or specific devices. Then you press the pair button or whatever on the device and it joins right up. Note that devices can only be joined to one zigbee network, so you have to make sure you do not add them to other zigbee adapters you have. You can set a separate queue for each one of these adapters, so you can have as many networks as you have coordinator devices for.
You can also have the SLZB-06M act as a bluetooth gateway. I may need to do that if I ever add any bluetooth devices down there.
The web interface lets you set various network config. You can set it as a zigbee coordinator or just a router in another network. You can enable/disable bluetooth, do firmware updates (but Home Assistant will do these directly via the normal integration), and adjust the LEDs on the device (off, or night mode, etc). It even gives you a sample zigbee2mqtt config to start with.
After that it's been working great. I now have a temp sensor and a smart plug (on a heater we keep down there to keep things from freezing when it gets really cold). I'm pondering adding a sensor for our water holding tank and possibly some flow meters for the pipes from the well and to the house from the holding tank.
Overall this is a great device and I recommend it if you have a use case for it.
Slava Ukraini!
03 May 2025 5:55pm GMT
Kevin Fenzi: Beginning of May infra bits 2025
Wow, it's already May now. Time races by sometimes. Here's a few things I found notable in the last week:
Datacenter Move
Actual progress to report this week! Managed to get access to the mgmt on all our new hardware in the new datacenter. Most everything is configured right in dhcp config now (aarch64 and power10's need still some tweaking there).
This next week will be updating firmware, tweaking firmware config, setting up access, etc on all those interfaces. I want to try and do some testing on various raid configs for storage and standardize the firmware configs. We are going to need to learn how to configure the lpars on the power10 machines next week as well.
Then, the following week hopefully we will have at least some normal network for those hosts and can start doing installs on them.
The week after that I hope to start moving some 'early' things: possibly openqa and coreos and some of our more isolated openshift applications. That will continue the week after that, then it's time for flock, some more moving and then finally the big 'switcharoo' week on the 16th.
Also some work on moving some of our soon to be older power9 hardware into a place where it can be added to copr for more/better/faster copr builders.
OpenShift cluster upgrades
Our openshift clusters (prod and stg) were upgraded from 4.17 to 4.18. OpenShift upgrades are really pretty nice. There was not much in the way of issues (although a staging compute node got stuck on boot and had to be power cycled).
One interesting thing with this upgrade was that support for cgroups v1 was listed as going away in 4.19. It's not been the default in a while, but our clusters were installed so long ago that they were still using it as a default.
I like that the upgrade is basically to edit one map and change a 1 to a 2 and then openshift reboots nodes and it's done. Very slick. I've still not done the prod cluster, but likely next week.
Proxy upgrades
There's been some instability with our proxies, in particular in EU and APAC. Over the coming weeks we are going to roll out newer/bigger/faster instances, which should hopefully reduce or eliminate the problems folks have sometimes been seeing.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114445144640282791
03 May 2025 4:52pm GMT
Piju 9M2PJU: Docker vs Virtual Machines: What Every Ham Should Know
Before container technologies like Docker came into play, applications were typically run directly on the host operating system, either on bare-metal hardware or inside virtual machines (VMs). While this method works, it often leads to frustrating issues, especially when trying to reproduce setups across different environments.
This becomes even more relevant in the amateur radio world, where we often experiment with digital tools, servers, logging software, APRS gateways, SDR applications, and more. Having a consistent and lightweight deployment method is key when tinkering with limited hardware like Raspberry Pi, small form factor PCs, or cloud VPS systems.
The Problem with Traditional Software Deployment
Let's say you've set up an APRS iGate, or maybe you're experimenting with WSJT-X for FT8, and everything runs flawlessly on your laptop. But the moment you try deploying the same setup on a Raspberry Pi or a remote server, suddenly things break.
Why?
Common culprits include:
- Different versions of the operating system
- Mismatched library versions
- Varying configurations
- Conflicting dependencies
These issues can be particularly painful in amateur radio projects, where specific software dependencies are critical, and stability matters for long-term operation.
You could solve this by running each setup inside a virtual machine, but VMs are often overkill, especially for ham radio gear with limited resources.
Enter Docker: The Ham's Best Friend for Lightweight Deployment
Docker is an open-source platform that allows you to package applications along with everything they need (libraries, configurations, runtimes) into one neat, portable unit called a container.
Think of it like packaging up your entire ham radio setup (SDR software, packet tools, logging apps, etc.) into a container, then being able to deploy that same exact setup on:
- A Raspberry Pi
- A cloud server
- A homelab NUC
- Another ham's machine
Why It's Great for Hams:
- Lightweight - great for Raspberry Pi or low-power servers
- Fast startup - ideal for services that need to restart quickly
- Reproducible environments - makes sharing setups with fellow hams easier
- Isolation - keeps different radio tools from interfering with each other
Many amateur radio tools like Direwolf, Xastir, Pat (Winlink), and even JS8Call can be containerized, making experimentation safer and more efficient.
Virtual Machines: Still Relevant in the Shack
Virtual Machines (VMs) have been around much longer and still play a crucial role. Each VM acts like a complete computer, with its own OS and kernel, running on a hypervisor like:
- VirtualBox
- VMware
- KVM
- Hyper-V
With VMs, you can spin up an entire Windows or Linux machine, perfect for:
- Running legacy ham radio software (e.g., old Windows-only apps)
- Simulating different operating systems for testing
- Isolating potentially unstable setups from your main system
However, VMs require more horsepower. They're heavy, boot slowly, and take up more disk space, which is often not ideal for small ham radio PCs or low-powered nodes deployed in the field.
Quick Comparison: Docker vs Virtual Machines for Hams
Feature | Docker | Virtual Machine
---|---|---
OS | Shares host kernel | Full OS per VM
Boot Time | Seconds | Minutes
Resource Use | Low | High
Size | Lightweight | Heavy (GBs)
Ideal For | Modern ham tools, APRS bots, SDR apps | Legacy systems, OS testing
Portability | High | Moderate
Ham Radio Use Cases for Docker
Here's how Docker fits into amateur radio workflows:
- Run an APRS iGate with Direwolf and YAAC in isolated containers.
- Deploy SDR receivers like rtl_433, OpenWebRX, or CubicSDR as containerized services.
- Set up a Winlink gateway using Pat + ax25 tools, all in one container.
- Automate and scale your APRS bot or APRS gateway using Docker + cron + scripts.
Docker makes it easier to test and share these setups with other hams: just export your Docker Compose file or image.
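As a minimal sketch of what such a containerized deployment can look like (the image name, device path, and config location are assumptions; substitute a real Direwolf image and your own sound device):
# hypothetical image name and paths - adjust for your station
docker run -d --name direwolf \
  --device /dev/snd \
  -v ./direwolf.conf:/etc/direwolf.conf:ro \
  example/direwolf:latest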
When to Use Docker, When to Use a VM
Use Docker if:
- You're building or experimenting with modern ham radio apps
- You want to deploy quickly and repeatably
- You're using Raspberry Pi, VPS, or low-power hardware
- You're setting up CI/CD pipelines for your scripts or bots
Use VMs if:
- You need to run legacy apps (e.g., old Windows logging software)
- You want to simulate full system environments
- You're working on something that could crash your main system
Final Thoughts
Both Docker and VMs are powerful tools that have a place in the modern ham shack. Docker offers speed, portability, and resource efficiency, making it ideal for deploying SDR setups, APRS bots, or automation scripts. VMs, on the other hand, still shine when you need full system emulation or deeper isolation.
At the end of the day, being a ham means being an experimenter. And tools like Docker just give us more ways to explore, automate, and share our radio projects with the world.
The post Docker vs Virtual Machines: What Every Ham Should Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
03 May 2025 3:16am GMT
02 May 2025
Fedora People
Fedora Community Blog: Infra and RelEng Update – Week 18
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 28th April - 2nd May 2025

Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It's responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
Fedora Infra
- In progress:
- Move copr_hypervisor group from iptables to nftables
- Several proxies are missing ipv6 addresses in DNS
- tmpwatch removed from ansible
- FedoraQA - Blockerbugs - OIDC setup
- please don't remove enrolled centos machines from IPA in staging
- wiki upgrade before f40 eol
- FedoraQA - Testdays - OIDC setup
- Request #centos-proposed-updates:fedoraproject.org as additional address for #centos-proposed-updates:fedora.im
- redeploy aws proxies
- Disable self-serve project creation on pagure.io
- Link in staging distgit instance leads to prod auth
- re-add datagrepper nagios checks (and add to zabbix?)
- Fix logrotate on kojipkgs01/02
- 2025 datacenter move (IAD2->RDU3)
- Broken link for STG IPA CA certificate, needed for staging CentOS Koji cert
- [CommOps] Open Data Hub on Communishift
- retire easyfix
- Deploy Element Server Suite operator in staging
- Pagure returns error 500 trying to open a PR on https://src.fedoraproject.org/rpms/python-setuptools-gettext
- Create a POC integration in Konflux for fedora-infra/webhook-to-fedora-messaging
- maubot-meetings bot multi line paste is cut
- setup ipa02.stg and ipa03.stg again as replicas
- Move OpenShift apps from deploymentconfig to deployment
- The process to update the OpenH264 repos is broken
- httpd 2.4.61 causing issue in fedora infrastructure
- Support allocation dedicated hosts for Testing Farm
- EPEL minor version archive repos in MirrorManager
- Add fedora-l10n pagure group as an admin to the fedora-l10n-docs namespace projects
- vmhost-x86-copr01.rdu-cc.fedoraproject.org DOWN
- Add yselkowitz to list to notify when ELN builds fail
- Cleaning script for communishift
- Move from iptables to firewalld
- Port apps to OIDC
- Help me move my discourse bots to production?
- Replace Nagios with Zabbix in Fedora Infrastructure
- Migration of registry.fedoraproject.org to quay.io
- Done:
- Cannot update software due to 503 on https://mirrors.fedoraproject.org/metalink?repo=fedora-42&arch=x86_64
- 404 Errors for fedoraproject-updates-archive.fedoraproject.org
- [IPv6] mirrors.fedoraproject.org & mirrormanager.fedoraproject.org return 503 HTTP errors
- Request to increase disk space on fedorapeople.org
- Spam inside Desktop mailing list
- Silverblue upgrade (to 42)
- Provision f42-test
- Connection to pagure.io refused
- give Michael Armijo koji permissions for coreos repos
- Owner Access on Fedora Podcast Gitlab Repo
- bvmhost-s390x-01.stg can't resolve log01 server
- admin access to https://github.com/fedora-infra/siguldry
- Fedora + EPEL Mirror Request - mirror.maeen.sa
- Planned Outage - firewall update / possible reboots - 2025-04-24 20:00UTC
- ostree repo for fedora-iot (aarch64) oudated/unmaintained?
- Find old docs content on proxies and remove it
- fedora-scm-requests repo moved?
- rpminspect task failing on stratisd bodhi updates in a consistent(?) manner
- mirror.gtlib.gatech.edu is listed as a Tier 1 mirror but hasn't been updated for years
- Are fedora-rawhide-ppc64le composes broken?
CentOS Infra including CentOS CI
- In progress:
- Write a script to cleanup Duffy DB after retiring a pool
- Add proposed-updates c9s and c10s tags to hyperscale9s and hyperscale10s tags
- Missing s390x extras/extras-common repo
- Ensuring ansible automation in place can manage/control el10 nodes
- [spike] : investigating ppc64le kvm host option with el9/el10 (for Power10)
- (Cloud SIG) OKDerator CI/CD space
- [spike] : investigating needed deps (poetry) for duffy on el9
- [spike] : investigating options/alternatives for the upcoming DC move
- Release Improvements
- Done:
- Issue authentication against id.stg.centos.org
- [CloudSIG] Remove cloud9s-openstack-dalmatian-testing from inherited tags in cloud9s-openstack-epoxy-el9s-build
- download.autosd.sig.centos.org CNAME for cloudfront
- Reconcile duffy DB and AWS EC2 instance
- Create sig-council FAS group
- Fedora-messaging usage inventory (Fedora infra change)
- Upgrade our various OCP cluster to latest version from 4.16.x branch
- Decommission ppc64le duffy nodes in CI infra/env
- Upgrade our openshift cluster to supported version (aligned with rest of centos infra)
- Update Fedora-messaging TLS files due to Fedora infra change
Release Engineering
- In progress:
- Re-push root-6.34.08-4.fc42 to stable
- F42 Post Release Cleanup
- Fix OpenH264 tagging issues
- Turn EPEL minor branching scripts into playbooks
- Mass retirement of packages with uninitialized rawhide branch
- Please send openh264-2.6.0 to Cisco
- 300+ F42FTBFS bugzillas block the F41FTBFS tracker
- Packages that have not been rebuilt in a while or ever
- Send compose reports to a to-be-created separate ML
- 10 builds still koji tagged with signing-pending
- Could we have fedoraproject-updates-archive.fedoraproject.org for Rawhide?
- Investigate and untag packages that failed gating but were merged in via mass rebuild
- a few mass rebuild bumps failed to git push - script should retry or error
- Package retirements are broken in rawhide
- Update pungi filters
- Implement checks on package retirements
- Untag containers-common-0.57.1-6.fc40
- Provide stable names for images
- Packages that fail to build SRPM are not reported during the mass rebuild bugzillas
- When orphaning packages, keep the original owner as co-maintainer
- Create an ansible playbook to do the mass-branching
- RFE: Integration of Anitya to Packager Workflow
- Fix tokens for ftbfs_weekly_reminder. script
- Update bootloader components assignee to "Bootloader Engineering Team"for Improved collaboration
- Done:
- Remove f40 signing key from coreos-pool
- Stalled EPEL package request: python-json5
- Remove branch fix/sysusers in testcloud repository
- Enable epel10 branch for ipset package
- Request for permissions to access bodhi server
- Unretire gtranslator
- Stalled EPEL package: a2jmidid
- Orphan kdissert, flowcanvas
- Remove manually created f41/f42 branches from getmail6 repository
- Remove v4.3.0 and versionbump branches from pgcli repository
- Unretire rust-arraydeque
- Accidentally created a branch in `rust-tonic-types`
If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on Matrix.
The post Infra and RelEng Update - Week 18 appeared first on Fedora Community Blog.
02 May 2025 2:31pm GMT
01 May 2025
Fedora People
Jonathan McDowell: Local Voice Assistant Step 2: Speech to Text and back
01 May 2025 6:05pm GMT
29 Apr 2025
Fedora People
Guillaume Kulakowski: Restoring geolocation on Linux after the shutdown of the Mozilla service
29 Apr 2025 6:44am GMT
Rénich Bon Ćirić: MxOS: Technological Sovereignty in Motion
I think the proposal Robert Riemann made in Brussels, Belgium is excellent. His initiative, EU OS, is very valuable and has great potential. It is not yet an official European Union initiative, but that is the intention: for the EU to adopt it.
There is a lot of interest in Europe in this kind of initiative. Some may fall into anarchy or apathy in the face of it. But honestly, that is not the way. On the contrary, we can collaborate in meaningful ways to strengthen our technological sovereignty without needing to isolate ourselves completely; rather, by contributing to the existing ecosystem and amplifying it for our own benefit and that of others.
Opportunity
This initiative is very valuable from a technological perspective. It is essential that we learn to do things ourselves, that we know how to create, modify, maintain, and distribute our own operating system.
Nowadays, getting started is very accessible. We have a great many tools that let us reuse what is already built. Abundant documentation is available. Everything needed exists so that even a single person can start working on this in some capacity.
But this should not be a solitary project. It should be a national project. It requires collaboration, funding, infrastructure, documentation in Spanish, training, and active promotion.
It would let us collaborate across nations, sharing patches, packaging, and development. It puts more eyes on the code to detect vulnerabilities or abuse. It creates fertile ground where we Mexicans can sow and grow our own technology.
I believe this initiative should also be promoted in Mexico. We should explore how to collaborate with other proposals and take full advantage of the true potential of free software, which lies in:
- Being able to reuse what already exists so we don't have to start from scratch.
- Learning from what others have done in order to develop our own solutions.
- Sharing our work so others can benefit.
- Taking advantage of the contributions others make to the ecosystem.
That is feasible today. Right now. With the resources and knowledge available, we can enter that virtuous circle of technological development and use. We just need the push.
Around all of this there is a business ecosystem. There are also opportunities for those who seek them. It is, to a large extent, a question of knowledge and the will to build it. A Mexican GNU/Linux distribution, well supported and with free software adapted for the business sector, government, and the community at large, would be immensely beneficial for the country.
29 Apr 2025 6:00am GMT