30 May 2025
Fedora People
Fedora Badges: New badge: Fedora Mentor Summit 2025 !
30 May 2025 3:09pm GMT
Fedora Community Blog: Infra and RelEng Update – Week 22 2025
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 26 May - 30 May 2025
Read more: Infra and RelEng Update - Week 22 2025

Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It's responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
Fedora Infra
- In progress:
- Please register my testing instance of Fedora Infrastructure apps to OIDC
- Planned Outage - update/reboots - 2025-05-22 21:00 UTC
- Forgejo: Owner access to @jflory7 for @CommOps
- Move copr_hypervisor group from iptables to nftables
- tmpwatch removed from ansible
- please don't remove enrolled centos machines from IPA in staging
- wiki upgrade before f40 eol
- redeploy aws proxies
- Disable self-serve project creation on pagure.io
- Link in staging distgit instance leads to prod auth
- re-add datagrepper nagios checks (and add to zabbix?)
- Fix logrotate on kojipkgs01/02
- 2025 datacenter move (IAD2->RDU3)
- Broken link for STG IPA CA certificate, needed for staging CentOS Koji cert
- [CommOps] Open Data Hub on Communishift
- retire easyfix
- Deploy Element Server Suite operator in staging
- Pagure returns error 500 trying to open a PR on https://src.fedoraproject.org/rpms/python-setuptools-gettext
- maubot-meetings bot multi line paste is cut
- setup ipa02.stg and ipa03.stg again as replicas
- Move OpenShift apps from deploymentconfig to deployment
- The process to update the OpenH264 repos is broken
- httpd 2.4.61 causing issue in fedora infrastructure
- Support allocation dedicated hosts for Testing Farm
- EPEL minor version archive repos in MirrorManager
- Add fedora-l10n pagure group as an admin to the fedora-l10n-docs namespace projects
- vmhost-x86-copr01.rdu-cc.fedoraproject.org DOWN
- Add yselkowitz to list to notify when ELN builds fail
- Cleaning script for communishift
- Move from iptables to firewalld
- Port apps to OIDC
- Help me move my discourse bots to production?
- Replace Nagios with Zabbix in Fedora Infrastructure
- Migration of registry.fedoraproject.org to quay.io
- Closed:
CentOS Infra including CentOS CI
- In progress:
- Release Improvements
- bump buildsys-macros pkgs for cbs.centos.org
- Please retarget hs+gnome to c10s
- Please change dist -> distprefix for Proposed Updates SIG
- Please add a new asahi tag for the Hyperscale SIG
- New Build Targets and Tags for the Kmods SIG
- Wrapper to check / create projects on GitLab using the REST API
- Write a script to cleanup Duffy DB after retiring a pool
- create Oauth2 tokens for testing instance for OpenQA CentOS stream testing purposes
- [spike] : investigating needed deps (poetry) for duffy on el9
- [spike] : investigating options/alternatives for the upcoming DC move
- Closed:
Release Engineering
- In progress:
- please create epel10.1 based el10.1-openjdk tag
- Fedora-KDE-42-1.1-x86_64-CHECKSUM has wrong ISO filename
- F40 end of life
- Fix OpenH264 tagging issues
- Turn EPEL minor branching scripts into playbooks
- Mass retirement of packages with uninitialized rawhide branch
- Please send openh264-2.6.0 to Cisco
- 300+ F42FTBFS bugzillas block the F41FTBFS tracker
- Packages that have not been rebuilt in a while or ever
- Send compose reports to a to-be-created separate ML
- Could we have fedoraproject-updates-archive.fedoraproject.org for Rawhide?
- Investigate and untag packages that failed gating but were merged in via mass rebuild
- a few mass rebuild bumps failed to git push - script should retry or error
- Package retirements are broken in rawhide
- Update pungi filters
- Implement checks on package retirements
- Untag containers-common-0.57.1-6.fc40
- Provide stable names for images
- Packages that fail to build SRPM are not reported during the mass rebuild bugzillas
- When orphaning packages, keep the original owner as co-maintainer
- Create an ansible playbook to do the mass-branching
- RFE: Integration of Anitya to Packager Workflow
- Fix tokens for ftbfs_weekly_reminder script
- Update bootloader components assignee to "Bootloader Engineering Team" for improved collaboration
- Closed:
- Error 500 when creating branch
- repo creation process for python-xnat4tests failed
- untag rust-1.87.0-1.fc43 from rawhide
- Unretire rust-migrations_macros
- untag meson-1.8.0-1.fc43 please
- Conflicting OpenH264 package issues
- Please rescue partially-initialized repo rpms/rust-postcard from releng-bot
- Remove branch epel10 in testcloud repository
- Side tag for Python 3.14
- Please add lua-cqueues and lua-basexx to epel10 tag
- Stalled EPEL package: python-google-cloud-storage
- Epel branch for credcheck is not working with koji
- Hyprlock package is blocked in koji
- Stalled EPEL package: python-hatch-requirements-txt
- Unretire datanommer-commands and python-datanommer-models packages
- Fedora Kinoite Rawhide aarch64 builds failing since 2025-01-31
If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.
The post Infra and RelEng Update - Week 22 2025 appeared first on Fedora Community Blog.
30 May 2025 10:00am GMT
Maxim Burgerhout: How I manage SSL certificates for my homelab with Letsencrypt and Ansible
How I manage SSL certificates for my homelab with Letsencrypt and Ansible
I have a fairly sizable homelab, consisting of some Raspberry Pi 4s, some Intel NUCs, a Synology NAS with a VM running on it, and a number of free VMs in Oracle Cloud. All these machines run RHEL 9 or RHEL 10, and all of them are managed from an instance of Red Hat Ansible Automation Platform that runs on the VM on my NAS.
On most of these machines, I run podman containers behind Caddy (which takes care of any SSL certificate management automatically). But for some services, I really needed an automated way of managing SSL certificates that didn't involve Caddy. An example of this is Cockpit, which I use on some occasions. I hate those "your connection is not secure" messages, so I needed real SSL certificates that my whole network would trust, without my having to load custom CA certificates into every single device.
I also use this method for securing my internal Postfix relay, and (in a slightly different way) for setting up certificates for containers running on my NAS.
So. Ansible to the rescue. It turns out, there is a surprisingly easy way to do this with Ansible. I found some code floating around the internet. To be honest, I forgot where I got it, it was probably a GitHub gist, but I really don't remember: I wrote this playbook months and months ago - I would love to attribute credit for this, but I simply can't :(
The point of the playbook is that it takes a list of certificates that should exist on a machine, and it makes sure those certificates exist on the target machine. Because this is for machines that are not reachable from the internet, it's not possible to use the standard HTTP validation. Instead, it creates temporary DNS records to verify my ownership of the domain.
Let's break down how the playbook works. I'll link to the full playbook at the end.
Keep in mind that all tasks below are meant to be run as a playbook looping over a list of dictionaries that are structured as follows:
le_certificates:
- common_name: "mymachine.example.com"
basedir: "/etc/letsencrypt"
domain: ".example.com"
email: security-team@example.com
First, we make sure a directory exists to store the certificate. We check for the existence of a Letsencrypt account key and if that does not exist, we create it and copy it over to the client:
- name: Create directory to store certificate information
ansible.builtin.file:
path: "{{ item.basedir }}"
state: directory
mode: "0710"
owner: "{{ cert_directory_user }}"
group: "{{ cert_directory_group }}"
- name: Check if account private key exists
ansible.builtin.stat:
path: "{{ item.basedir }}/account_{{ item.common_name }}.key"
register: account_key
- name: Generate and copy over the acme account private key
when: not account_key.stat.exists | bool
block:
- name: Generate private account key for letsencrypt
community.crypto.openssl_privatekey:
path: /tmp/account_{{ item.common_name }}.key
type: RSA
delegate_to: localhost
become: false
when: not account_key.stat.exists | bool
- name: Copy over private account key to client
ansible.builtin.copy:
src: /tmp/account_{{ item.common_name }}.key
dest: "{{ item.basedir }}/account_{{ item.common_name }}.key"
mode: "0640"
owner: root
group: root
The next step is to check for the existence of a private key for the domain we are handling, and create it and copy it to the client if it doesn't exist:
- name: Check if certificate private key exists
ansible.builtin.stat:
path: "{{ item.basedir }}/{{ item.common_name }}.key"
register: cert_key
- name: Generate and copy over the acme cert private key
when: not cert_key.stat.exists | bool
block:
- name: Generate private acme key for letsencrypt
community.crypto.openssl_privatekey:
path: /tmp/{{ item.common_name }}.key
type: RSA
delegate_to: localhost
become: false
when: not cert_key.stat.exists | bool
- name: Copy over private acme key to client
ansible.builtin.copy:
src: /tmp/{{ item.common_name }}.key
dest: "{{ item.basedir }}/{{ item.common_name }}.key"
mode: "0640"
owner: root
group: root
Then, we create a certificate signing request (CSR) based on the private key, and copy that to the client:
- name: Generate and copy over the csr
block:
- name: Grab the private key from the host
ansible.builtin.slurp:
src: "{{ item.basedir }}/{{ item.common_name }}.key"
register: remote_cert_key
- name: Generate the csr
community.crypto.openssl_csr:
path: /tmp/{{ item.common_name }}.csr
privatekey_content: "{{ remote_cert_key['content'] | b64decode }}"
common_name: "{{ item.common_name }}"
delegate_to: localhost
become: false
- name: Copy over csr to client
ansible.builtin.copy:
src: /tmp/{{ item.common_name }}.csr
dest: "{{ item.basedir }}/{{ item.common_name }}.csr"
mode: "0640"
owner: root
group: root
Now the slightly more complicated stuff starts. This next task contacts the Letsencrypt API and requests a certificate. It specifies a dns-01 challenge, which means that Letsencrypt will respond with a challenge that we can validate by creating a special DNS record. Everything we need is in the response, which we'll store as cert_challenge.
- name: Create a challenge using an account key file.
community.crypto.acme_certificate:
account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
account_email: "{{ item.email }}"
src: "{{ item.basedir }}/{{ item.common_name }}.csr"
cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
challenge: dns-01
acme_version: 2
acme_directory: "{{ acme_dir }}"
# Renew if the certificate is at least 30 days old
remaining_days: 60
terms_agreed: true
register: cert_challenge
Now, I'll be using DigitalOcean's API to create the temporary DNS records, but you can use whatever DNS service you want, as long as it's publicly available for Letsencrypt to query. The following block will only run if two things are true:
1. cert_challenge is changed, which is only the case if we need to renew the certificate. Letsencrypt certificates are valid for 90 days only. We specified remaining_days: 60, so if we run this playbook 30 or more days after its previous run, cert_challenge will be changed and the certificate will be renewed.
2. item.common_name (which is a variable that holds the requested DNS record) is part of the challenge_data structure in cert_challenge. This is to verify we actually got the correct data from the Letsencrypt API, and not just some metadata change.
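To make the renewal window concrete: with 90-day certificates and remaining_days: 60, a run renews the certificate once fewer than 60 days of validity remain, i.e. roughly 30 days after issuance. Here is a small Python sketch of that check (illustrative only; the real comparison happens inside the acme_certificate module):

```python
from datetime import datetime, timedelta

# Illustrative sketch of the logic behind `remaining_days: 60`:
# renew when fewer than `remaining_days` days of validity are left.
def needs_renewal(not_after: datetime, now: datetime, remaining_days: int = 60) -> bool:
    return (not_after - now) < timedelta(days=remaining_days)

issued = datetime(2025, 5, 1)
expiry = issued + timedelta(days=90)  # Letsencrypt certs are valid for 90 days

print(needs_renewal(expiry, issued + timedelta(days=29)))  # False: 61 days left
print(needs_renewal(expiry, issued + timedelta(days=31)))  # True: 59 days left
```

Run the playbook at least monthly and the window is never missed.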
The block looks like this:
- name: Actual certificate creation
when: cert_challenge is changed and item.common_name in cert_challenge.challenge_data
block:
- name: Create DNS challenge record on DO
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
domain: "{{ item.domain[1:] }}"
type: TXT
ttl: 60
name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
delegate_to: localhost
become: false
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
community.crypto.acme_certificate:
account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
account_email: "{{ item.email }}"
src: "{{ item.basedir }}/{{ item.common_name }}.csr"
cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
fullchain: "{{ item.basedir }}/{{ item.domain[1:] }}-fullchain.crt"
chain: "{{ item.basedir }}/{{ item.domain[1:] }}-intermediate.crt"
challenge: dns-01
acme_version: 2
acme_directory: "{{ acme_dir }}"
remaining_days: 60
terms_agreed: true
data: "{{ cert_challenge }}"
- name: Remove DNS challenge record on DO
community.digitalocean.digital_ocean_domain_record:
state: absent
oauth_token: "{{ do_api_token }}"
domain: "{{ item.domain[1:] }}"
type: TXT
name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
delegate_to: localhost
become: false
You'll notice that the TTL for this record is intentionally very low, because we only need it for validation of the challenge, and we'll remove it after verification. If you do not use DigitalOcean as a DNS provider, the first task in the block above will look different, obviously.
The second task in the block reruns the acme_certificate task, and this time we pass the contents of the cert_challenge variable as the data parameter. Upon successful validation, we retrieve the new certificate, full chain, and intermediate chain and store them on disk. Basically, at this point, we are done, without having to use certbot :)
Of course, in the third task, we clean up the temporary DNS record again.
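The record name handed to DigitalOcean is built by stripping the zone suffix off the full challenge record with the replace filter. The same transformation in plain Python, using hypothetical values shaped like the playbook's variables:

```python
# Mimic the Jinja2 `replace(item.domain, '')` filter used for the DO record name.
def relative_record_name(challenge_record: str, domain_with_dot: str) -> str:
    return challenge_record.replace(domain_with_dot, "")

# Hypothetical values matching the playbook's variable shapes:
record = "_acme-challenge.mymachine.example.com"  # from challenge_data
domain = ".example.com"                            # item.domain, leading dot included
print(relative_record_name(record, domain))        # _acme-challenge.mymachine
```

This also hints at why the leading dot in item.domain matters: without it, "example.com" would leave a trailing dot on the relative name.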
I have a slightly different playbook to manage certificates on my NAS, and some additional tasks that configure Postfix to use this certificate, too, but those are probably useful for me only.
TL;DR: if you want to create a (set of) certificate(s) for a (group of) machine(s), running this playbook from AAP every month makes that really easy.
The main playbook looks like this:
---
# file: letsencrypt.yml
- name: Configure letsencrypt certificates
hosts: rhel_machines
gather_facts: false
become: true
vars:
debug: false
acme_dir: https://acme-v02.api.letsencrypt.org/directory
pre_tasks:
- name: Gather facts subset
ansible.builtin.setup:
gather_subset:
- "!all"
- default_ipv4
- default_ipv6
tasks:
- name: Include letsencrypt tasks for each certificate
ansible.builtin.include_tasks: letsencrypt_tasks.yml
loop: "{{ le_certificates }}"
The letsencrypt_tasks.yml file is all of the above tasks combined into a single task file:
---
# file: letsencrypt_tasks.yml
- name: Create directory to store certificate information
ansible.builtin.file:
path: "{{ item.basedir }}"
state: directory
mode: "0710"
owner: "{{ cert_directory_user }}"
group: "{{ cert_directory_group }}"
- name: Check if account private key exists
ansible.builtin.stat:
path: "{{ item.basedir }}/account_{{ item.common_name }}.key"
register: account_key
- name: Generate and copy over the acme account private key
when: not account_key.stat.exists | bool
block:
- name: Generate private account key for letsencrypt
community.crypto.openssl_privatekey:
path: /tmp/account_{{ item.common_name }}.key
type: RSA
delegate_to: localhost
become: false
when: not account_key.stat.exists | bool
- name: Copy over private account key to client
ansible.builtin.copy:
src: /tmp/account_{{ item.common_name }}.key
dest: "{{ item.basedir }}/account_{{ item.common_name }}.key"
mode: "0640"
owner: root
group: root
- name: Check if certificate private key exists
ansible.builtin.stat:
path: "{{ item.basedir }}/{{ item.common_name }}.key"
register: cert_key
- name: Generate and copy over the acme cert private key
when: not cert_key.stat.exists | bool
block:
- name: Generate private acme key for letsencrypt
community.crypto.openssl_privatekey:
path: /tmp/{{ item.common_name }}.key
type: RSA
delegate_to: localhost
become: false
when: not cert_key.stat.exists | bool
- name: Copy over private acme key to client
ansible.builtin.copy:
src: /tmp/{{ item.common_name }}.key
dest: "{{ item.basedir }}/{{ item.common_name }}.key"
mode: "0640"
owner: root
group: root
- name: Generate and copy over the csr
block:
- name: Grab the private key from the host
ansible.builtin.slurp:
src: "{{ item.basedir }}/{{ item.common_name }}.key"
register: remote_cert_key
- name: Generate the csr
community.crypto.openssl_csr:
path: /tmp/{{ item.common_name }}.csr
privatekey_content: "{{ remote_cert_key['content'] | b64decode }}"
common_name: "{{ item.common_name }}"
delegate_to: localhost
become: false
- name: Copy over csr to client
ansible.builtin.copy:
src: /tmp/{{ item.common_name }}.csr
dest: "{{ item.basedir }}/{{ item.common_name }}.csr"
mode: "0640"
owner: root
group: root
- name: Create a challenge using an account key file.
community.crypto.acme_certificate:
account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
account_email: "{{ item.email }}"
src: "{{ item.basedir }}/{{ item.common_name }}.csr"
cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
challenge: dns-01
acme_version: 2
acme_directory: "{{ acme_dir }}"
# Renew if the certificate is at least 30 days old
remaining_days: 60
terms_agreed: true
register: cert_challenge
- name: Actual certificate creation
when: cert_challenge is changed and item.common_name in cert_challenge.challenge_data
block:
- name: Create DNS challenge record on DO
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
domain: "{{ item.domain[1:] }}"
type: TXT
ttl: 60
name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
delegate_to: localhost
become: false
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
community.crypto.acme_certificate:
account_key_src: "{{ item.basedir }}/account_{{ item.common_name }}.key"
account_email: "{{ item.email }}"
src: "{{ item.basedir }}/{{ item.common_name }}.csr"
cert: "{{ item.basedir }}/{{ item.common_name }}.crt"
fullchain: "{{ item.basedir }}/{{ item.domain[1:] }}-fullchain.crt"
chain: "{{ item.basedir }}/{{ item.domain[1:] }}-intermediate.crt"
challenge: dns-01
acme_version: 2
acme_directory: "{{ acme_dir }}"
remaining_days: 60
terms_agreed: true
data: "{{ cert_challenge }}"
- name: Remove DNS challenge record on DO
community.digitalocean.digital_ocean_domain_record:
state: absent
oauth_token: "{{ do_api_token }}"
domain: "{{ item.domain[1:] }}"
type: TXT
name: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].record | replace(item.domain, '') }}"
data: "{{ cert_challenge.challenge_data[item.common_name]['dns-01'].resource_value }}"
delegate_to: localhost
become: false
And finally, as part of host_vars, for each of my hosts a letsencrypt.yml
file exists containing:
---
le_certificates:
- common_name: "myhost.example.com"
basedir: "/etc/letsencrypt"
domain: ".example.com"
email: security-team@example.com
To be fair, there could probably be a lot of optimization done in that playbook, and I can't remember why I did it with .example.com
(with the leading dot) and then use item.domain[1:]
in so many places. But, I'm a lazy IT person, and I'm not fixing what isn't inherently broken :)
Hope this helps!
M
30 May 2025 7:12am GMT
29 May 2025
Fedora People
Fedora Community Blog: Another update on the Fedoraproject Datacenter Move
Here's another update on the upcoming fedoraproject Datacenter move.
Summary: there have been some delays; the current target switch week to the new datacenter is now the week of 2025-06-30 (formerly 2025-05-16).
The plans we mentioned last month are all still in our plan, just moved out two weeks.
Why the delay? Well, there were some delays in getting networking set up in the new datacenter, but that's now been overcome and we are back on track, just later than planned.
Here's a rundown of the current plan:
- We now have access to all the new hardware; its firmware has been updated and configured.
- We have a small number of servers installed, and this week we are installing the OS on more servers as well as building out VMs for various services.
- Next week is Flock, so we will probably not make too much progress, but we might do some more installs/configuration if time permits.
- The week after Flock we hope to get the OpenShift clusters all set up and configured.
- The week after that we will start moving some applications that aren't closely tied to the old datacenter. If they don't have storage or databases, they are good candidates to move.
- The next week will be any other applications we can move.
- The week before the switch will be spent getting things ready (making sure data is synced, plans are reviewed, etc.).
- Finally, the switch week (week of June 30th): Fedora Project users should not notice much during this change. Mirrorlists, mirrors, docs, and other user-facing applications should continue working as always. Update pushes may be delayed a few days while the switch happens. Our goal is to keep any end-user impact to a minimum.
- For Fedora contributors: on Monday and Tuesday we plan to "move" the bulk of applications and services. Contributors should avoid doing much on those days, as services may be moving around or syncing in various ways. Starting Wednesday, we will make sure everything is switched and fix problems or issues as they are found. Thursday and Friday will continue stabilization work.
- The week after the switch, some newer hardware in our old datacenter will be shipped down to the new one. This hardware will be added to increase capacity (more builders, more OpenQA workers, etc.). This move should get us in a nicer place with faster/newer/better hardware.
The post Another update on the Fedoraproject Datacenter Move appeared first on Fedora Community Blog.
29 May 2025 11:07am GMT
28 May 2025
Fedora People
Ben Cotton: It’s okay to be partial to your work
I often see leaders in open source projects not wanting to promote their own work in the interest of fairness. That's a noble idea, but it's unnecessary. It's okay to be partial to - and promote - your own work, so long as you follow the community's process.
Real world examples
What does this look like in practice? You may be a member of a steering committee that approves feature proposals. You didn't earn that spot just because you're good at meetings; you most likely earned it on sustained technical and interpersonal merit. This, in turn, means you're probably still writing new feature proposals sometimes. That doesn't mean you have to recuse yourself when one comes up for a vote. Everyone knows you wrote it, and you're a member of the committee, not an independent judge presiding over a criminal trial.
Or you might be leading a project and have a tool that would help the project meet its goals. You can propose that the project adopt your tool. Again, it's going to be clear that you wrote it, so go ahead and make the proposal.
The need for process
As I alluded to in the opening paragraph, your community needs a process for these sorts of proposals. It doesn't have to be elaborate. Something as simple as "a majority of the steering committee must approve the proposal" counts as a process. Following the process is what keeps the decision fair, even when you have a predisposition to like what you're proposing. If your proposal gets the same treatment as everyone else's, that's all that matters.
When to recuse yourself
Of course, there are times when it's appropriate to recuse yourself. If your proposal is particularly contentious (let's say a roughly 50-50 split, not a 75-25 split in favor), it's best that you're not the deciding vote. If you can't amend your proposal in a way that wins some more support, then it may be better to not vote.
If the community's policy and processes require the author of a proposal to recuse themselves, then that's obviously a good reason to do so. "But Ben said I shouldn't!" won't win you any points, even if the policy is misguided (and it may or may not be!).
Also, if the context is a pull request, you should not vote to approve it to get it over the approval requirement threshold. That is a separate case, and one that most forges will prohibit anyway.
This post's featured photo by Piret Ilver on Unsplash.
The post It's okay to be partial to your work appeared first on Duck Alignment Academy.
28 May 2025 12:00pm GMT
Fedora Badges: New badge: Rock the Boat !
28 May 2025 9:42am GMT
Fedora Badges: New badge: Flock 2025 Attendee !
28 May 2025 9:37am GMT
Fedora Magazine: How to use Authselect to configure PAM in Fedora Linux
Authselect is a utility tool that manages PAM configurations using profiles. Starting with Fedora 36, Authselect became a hard requirement for configuring PAM. In this article, you will learn how to configure PAM using Authselect.
Introduction.
Unauthorized access is a critical risk factor in computer security. Cybercriminals engage in data theft, cyber-jacking, crypto-jacking, phishing, and ransomware attacks once they gain unauthorized access. A common vulnerability exploited for unauthorized access is poor authentication configuration. Pluggable Authentication Modules (PAM) play a critical role in mitigating this vulnerability by acting as a middleware layer between your application and authentication mechanisms. For instance, you can use PAM to configure a server to deny logins after 6pm, with any login attempts afterwards requiring a token. PAM does not carry out authentication itself; instead, it forwards requests to the authentication module you specified in its configuration file.
This article will cover the following three topics:
- PAM
- Authselect, and authselect profiles
- How to configure PAM
Prerequisites:
- A Fedora, CentOS, or RHEL server. This guide uses Fedora 41 Server edition; the steps are interchangeable across Fedora, CentOS, and RHEL.
- A user account with sudo privileges on the server.
- Command line familiarity.
What is PAM?
The Pluggable Authentication Modules (PAM) framework provides a modular way of authenticating users, systems, and applications on Fedora Linux. Before PAM, file-based authentication was the prevalent authentication scheme. File-based authentication stores usernames, passwords, IDs, names, and other optional information in one file. This was simple, and everyone was happy until security requirements changed or new authentication mechanisms were adopted.
Here's an excerpt from Red Hat's PAM documentation:
Pluggable Authentication Modules (PAMs) provide a centralized authentication mechanism which system applications can use to relay authentication to a centrally configured framework. PAM is pluggable because there is a PAM module for different types of authentication sources (such as Kerberos, SSSD, NIS, or the local file system). Different authentication sources can be prioritized.
PAM acts as a middleware between applications and authentication modules. It receives authentication requests, looks at its configuration files and forwards the request to the appropriate authentication module. If any module detects that the credentials do not meet the required configuration, PAM denies the request and prevents unauthorized access. PAM guarantees that every request is consistently validated before it denies or grants access.
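As a concrete illustration, a PAM service file stacks modules per management group, and PAM walks each stack in order. The following is a simplified, hypothetical example (module choices and options are illustrative, not a recommended configuration):

```
# /etc/pam.d/myapp -- simplified, illustrative example only
auth     required   pam_env.so
auth     sufficient pam_unix.so try_first_pass
auth     required   pam_deny.so
account  required   pam_unix.so
password required   pam_unix.so shadow
session  required   pam_unix.so
```

Here a successful pam_unix.so check satisfies authentication ("sufficient"), while pam_deny.so rejects anything that falls through the stack.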
Why PAM?
- Support for various authentication schemes using pluggable modules. These may include two-factor authentication (2FA), password authentication (LDAP), tokens (OAuth, Kerberos), biometrics (fingerprint, facial), Hardware (YubiKey), and much more.
- Support for stacked authentication. PAM can combine one or more authentication schemes.
- Flexibility to support new or future authentication technology with minimal friction.
- High performance, and stability under significant load.
- Support for granular/custom configuration across users and applications. For example, PAM can disallow access to an application from 5pm to 5am if an authenticated user does not possess a role.
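Time-of-day restrictions like the one mentioned above are typically handled by the pam_time module, configured in /etc/security/time.conf. An illustrative entry (hypothetical service and user names; check the time.conf man page for the exact day/time syntax) might look like:

```
# /etc/security/time.conf -- illustrative sketch only
# Format: services;ttys;users;times
# Allow "appuser" to use sshd only on weekdays between 08:00 and 18:00:
sshd;*;appuser;Wk0800-1800
```

For the entry to take effect, the service's PAM file must also include an `account required pam_time.so` line.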
Authselect replaces Authconfig
Authselect was introduced in Fedora 28 to replace Authconfig. By Fedora 35, Authconfig had been removed from the distribution. In Fedora 36, Authselect became a hard dependency, making it a requirement for configuring PAM in subsequent Fedora versions.
This tool does not configure your applications (LDAP, AD, SSH); it is a configuration management tool designed to set up and maintain PAM. Authselect selects and applies pre-tested authentication profiles that determine which PAM modules are active and how they are configured.
Here's an excerpt from the Fedora 27 changeset which announced Authselect as a replacement for Authconfig
Authselect is a tool to select system authentication and identity sources from a list of supported profiles.
It is designed to be a replacement for authconfig but it takes a different approach to configure the system. Instead of letting the administrator build the pam stack with a tool (which may potentially end up with a broken configuration), it would ship several tested stacks (profiles) that solve a use-case and are well tested and supported.
From the same changeset, the authors report that Authconfig was error prone, hard to maintain due to technical debt, caused system regressions after updates, and was hard to test.
Authconfig does its best to always generate a valid pam stack but it is not possible to test every combination of options and identity and authentication daemons configuration. It is also quite regression prone since those daemons are in active development on their own and independent on authconfig. When a new feature is implemented in an authentication daemon it takes some time to propagate this feature into authconfig. It also may require a drastic change to the pam stack which may easily introduce regressions since it is hard to test properly with so many possible different setups.
Authselect profiles, and what they do.
As mentioned above, Authselect manages PAM configuration using ready-made profiles. A profile is a set of features and functions that describe how the resulting system configuration will look. One selects a profile and Authselect applies the configuration to PAM.
In Fedora, Authselect ships with four profiles:
$ authselect list
- local      Local users only
- nis        Enable NIS for system authentication
- sssd       Enable SSSD for system authentication (also for local users only)
- winbind    Enable winbind for system authentication
For descriptions of each profile, visit Authselect's readme page for profiles, and the wiki, available on GitHub.
You can view the current profile with:
$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-fingerprint
- with-mdns4
You can change the current profile with:
$ sudo authselect select local
Profile "local" was selected.
List the features in any profile with:
$ authselect list-features local
with-altfiles
with-ecryptfs
with-faillock
with-fingerprint
with-libvirt
with-mdns4
with-mdns6
with-mkhomedir
with-pam-gnome-keyring
with-pam-u2f
with-pam-u2f-2fa
with-pamaccess
with-pwhistory
with-silent-lastlog
with-systemd-homed
without-lastlog-showfailed
without-nullok
without-pam-u2f-nouserok
You can enable or disable features in a profile with:
$ sudo authselect enable-feature with-fingerprint
$ sudo authselect disable-feature with-fingerprint
Configure PAM with Authselect.
Scenario: You have noticed a high number of failed login attempts on your Fedora Linux server. As a preemptive measure, you want to configure PAM to enforce lockouts: any user with 3 failed login attempts gets locked out of your server for 24 hours.
The pam_faillock.so module maintains a list of failed authentication attempts per user during a specified interval and locks the account when the number of consecutive failures exceeds the configured limit.
The Authselect "with-faillock" profile feature handles failed-authentication lockouts.
Step 1. Check whether the current profile on the server has with-faillock enabled:
$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint
As you can see, with-faillock is not enabled in this profile.
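If you want to script this check, a grep over the profile output is enough. Below is a minimal sketch using the sample "authselect current" output from Step 1 as a shell string; on a real system you would capture it with current=$(authselect current) instead.

```shell
# Sample "authselect current" output from Step 1 (hard-coded here so the
# sketch is self-contained; normally: current=$(authselect current))
current='Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint'

# Check whether with-faillock appears among the enabled features
if printf '%s\n' "$current" | grep -q -- '- with-faillock'; then
  echo "with-faillock already enabled"
else
  echo "with-faillock not enabled"
fi
```

With the Step 1 output above, this prints "with-faillock not enabled", which is why Step 2 is needed.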
Step 2. Enable with-faillock
$ sudo authselect enable-feature with-faillock
$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint
- with-faillock
Authselect has now configured PAM to support lockouts. Check the /etc/pam.d/system-auth and /etc/pam.d/password-auth files to see how authselect updated them.
From the vimdiff image below you can see the changes authselect added to /etc/pam.d/system-auth.

Authselect added the following lines:
auth     required pam_faillock.so preauth silent
auth     required pam_faillock.so authfail
account  required pam_faillock.so
Step 3. Check whether the current configuration is valid.
$ authselect check
Current configuration is valid.
Step 4. Apply changes
$ sudo authselect apply-changes
Changes were successfully applied.
Step 5. Configure faillock
$ sudo vi /etc/security/faillock.conf

Uncomment lines in the file to match the following parameters:
silent
audit
deny=3
unlock_time=86400
dir = /var/run/faillock
Step 6. Test PAM configuration
6.1 Attempt to log in several consecutive times with the wrong password to trigger a lockout.

6.2 Check failure records
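As a sanity check on the configuration values, the sketch below confirms that unlock_time=86400 really is 24 hours, and notes (in comments) the faillock commands for inspecting and clearing records; the user name "alice" is a hypothetical example.

```shell
# deny=3 with a 24-hour lockout, expressed in seconds, matching the
# faillock.conf values above
unlock_seconds=$((24 * 60 * 60))
echo "deny=3 unlock_time=${unlock_seconds}"   # → deny=3 unlock_time=86400

# On the server (as root) you can then inspect and clear records:
#   faillock --user alice           # show failed attempts for alice
#   faillock --user alice --reset   # clear the lockout immediately
```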

Important: As a best practice, always back up your current Authselect profile before making any change.
Back up the current Authselect profile as follows:
$ authselect select local -b
Backup stored at /var/lib/authselect/backups/2025-05-23-22-41-33.UyM1lJ
Profile "local" was selected.
To list backed-up profiles:
$ authselect backup-list
2025-05-22-15-17-41.fe92T8 (created at Thu 22 May 2025 11:17:41 AM EDT)
2025-05-23-22-41-33.UyM1lJ (created at Fri 23 May 2025 06:41:33 PM EDT)
Restore a profile from backup with:
$ authselect backup-restore 2025-05-23-22-41-33.UyM1lJ
28 May 2025 8:00am GMT
27 May 2025
Fedora People
Fedora Magazine: Don’t Panic! There’s an F42 Release Party on Thursday!
On Thursday, May 29 (yes, two days away!) we will host the F42 release party on Matrix.
We would love for you to join us to celebrate all things F42 in a private event room from 1300 - 1600 UTC. You will hear from our new FPL Jef Spaleta, learn about the design process for each release, and hear about some of the great new features in Fedora Workstation, Fedora KDE, and our installer. There's also a git forge update, a mentor summit update, and lots more.
You can see the schedule on the event page wiki, and how to attend is simple: please register for the event, for free, in advance. Using your Matrix ID, you will receive an invitation to a private event room where we will be streaming presentations via ReStream.
Events will be a mixture of live and pre-recorded. All will be available after the event on the Fedora YouTube channel.
27 May 2025 4:24pm GMT
Peter Czanik: Testing the new syslog-ng wildcard-file() source options on Linux
27 May 2025 1:20pm GMT
Avi Alkalay: Andy Warhol at FAAP and a Contemporary Art retrospective
So I went to the Andy Warhol exhibition.
It's always like this: I drag my feet about going to see contemporary art, but when I do go I leave excited, thrilled, amazed, inspired, intrigued.
It was like that with the Portuguese artist Joana Vasconcelos at the MaAM, with her monumental, stunning Valkyrie Mumbet. It was like that with the geometric illusions of Julio Le Parc at Tomie Ohtake. Or with the inventiveness of Anish Kapoor, which I saw at Corpartes. Or the indispensable art parks like Inhotim, the unforgettable De Hoge Veluwe, the deCordova. Or the ICA in Boston. Richard Serra at the Guggenheim in Bilbao. And many others. I can be exhaustive with this list because I kept photos of all those visits, which left a deep mark on me.
This Warhol show at FAAP is a must-see. It has hundreds of original works, many of them extremely well known, all very well curated. As a portraitist, Warhol depicted countless celebrities such as Marilyn, Michael Jackson, Joan Collins (photo, and one of the most impressive compositions in the exhibition), Sylvester Stallone, Jacqueline Kennedy, etc. But there is also his political vein, in which he addressed the spectacularization of death, a theme I find still very current.
One of the intriguing characteristics of his work is his technique. Anyone who watched Warhol work for two days with photography, silkscreen, and paint could produce something similar. I say this not to diminish him, but because it is sensational how far you can go with so little. Warhol's real differentiator, I believe, was the milieu he moved in, the parties he attended, the people he surrounded himself with. And courage. A lot of courage to make art in that simple, new way, and at that scale.
I am happy that friends of mine are also active in the visual arts, including with recent exhibitions. Marcia Cymbalista, who clearly conveys the gentleness of her character into her paintings. Rogério Pasqua, who impressed me with the semi-abstract drawings he has been producing. Taly Cohen, who is already heading toward international fame. Babak Fakhamzadeh, who ventures into several kinds of artistic expression. It's an honor to have you all around.
27 May 2025 11:54am GMT
Fedora Badges: New badge: Let's have a party (Fedora 42) !
27 May 2025 7:48am GMT
24 May 2025
Fedora People
Kevin Fenzi: Third week of May 2025 fedora infra bits
Oh look, it's Saturday already. Another busy week here with lots going on, so without further ado, let's discuss some things!
Datacenter Move
Due to delays in getting network to the new servers and various logistics, we are going to be moving the switcharoo week to the week of June 30th. It was set for June 16th, but that's just too close timing-wise, so we are moving it out two weeks. Look for a community blog post and devel-announce post next week on this. I realize that means that Friday is July 4th (a holiday in the US), but we hope to do the bulk of switching things on Monday and Tuesday of that week, and leave only fixing things for Wednesday and Thursday.
We did finally get network for the new servers last week. Many thanks to all the networking folks who worked hard to get things up and running. With some networking in place, I was able to start bootstrapping infrastructure. We now have a bastion host, a dhcp/tftp host, and a dns server all up and managed via our existing ansible control host, like all the rest of our hosts.
Friday was a recharge day at Red Hat, and Monday is the US Memorial Day holiday, but I should be back at deploying things on Tuesday. Hopefully next week I will get an initial proxy setup and can then look at doing openshift cluster installs.
Flock
The week after next is Flock! It came up so fast. I do plan on being there (I get into Prague late morning on the 3rd). Hope to see many folks there; happy to talk about most anything. I'm really looking forward to the good energy that comes from being around so many awesome open source folks!
Of course that means I may well not be online as much as normal (when traveling, in talks, etc), so Please plan accordingly if you need my help with something.
Laptop
So, I got this Lenovo Slim 7x Snapdragon X laptop quite a long time ago, and finally I decided I should see if I can use it day to day, and if so, use it for the Flock trip, so I don't have to bring my frame.work laptop.
So, I hacked up an aarch64 Rawhide live image with a dtb for it and was able to do an encrypted install and then upgrade the kernel. I did have to downgrade linux-firmware for the ath12k firmware bug, but that's fine.
So far it's looking tenable (I am typing this blog post on it now). I did have to add another kernel patch to get bluetooth working, but it seems to be fine with the patch. The OLED screen on this thing is wonderful. Battery life seems OK, although it's hard to tell without a 'real life' test.
Known things not working: the camera (there are patches, but it's really early, so I will wait for them) and sound (there are also patches, but it has the same issue the Mac laptops had: there are no safeguards, so you can easily destroy your speakers if you turn the volume up too loud).
Amusing things: no Discord flatpak available (the one on Flathub is x86_64 only), but the web version works fine, although amusingly it tells you to install the app (which doesn't exist).
Also, no Chrome, but there is Chromium, which should be fine for sites that Firefox doesn't work with.
I'll see if I can get through the weekend and upcoming week and decide what laptop I will take traveling.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114564927864167658
24 May 2025 9:01pm GMT
23 May 2025
Fedora People
Hans de Goede: IPU6 cameras with ov02c10 / ov02e10 now supported in Fedora
23 May 2025 4:09pm GMT
Hans de Goede: IPU6 FOSS and proprietary stack co-existence
23 May 2025 3:42pm GMT
Piju 9M2PJU: DockFlare: Securely Expose Docker Services with Cloudflare Tunnels
Introduction: What Is DockFlare?
Self-hosting applications has become increasingly popular among developers, tech enthusiasts, and homelabbers. However, securely exposing internal services to the internet is often a complicated task. It involves:
- Opening firewall ports
- Dealing with dynamic IPs
- Managing TLS certificates
- Handling reverse proxies
- Setting up access control
This is where DockFlare comes in.
DockFlare is a lightweight, self-hosted Cloudflare Tunnel automation tool for Docker users. It simplifies the process of publishing your internal Docker services to the public internet through Cloudflare Tunnels, while providing optional Zero Trust security, DNS record automation, and a sleek web interface for real-time management.
Objectives of DockFlare
DockFlare was created to solve three key problems:
- Simplicity: Configure secure public access to your Docker containers using just labels; no reverse proxy, SSL setup, or manual DNS records needed.
- Security: Protect your services behind Cloudflare's Zero Trust Access, supporting identity-based authentication (Google, GitHub, OTP, and more).
- Automation: Automatically create tunnels, subdomains, and security policies based on your Docker service metadata. No scripting. No re-deploys.
Why Use DockFlare?
Here's how DockFlare benefits its users:
Quick Setup: Set up secure tunnels and expose services in seconds with Docker labels.
Zero Trust Security: Enforce authentication for any service using Cloudflare Access.
No Public IP Required: No need to forward ports or expose your home IP; perfect for CG-NAT and mobile ISPs.
Safe by Default: TLS encryption, no open ports, and access rules built-in.
User-Friendly UI: Visualize tunnels, view logs, and manage configurations in a web dashboard.
DevOps Ready: Works seamlessly in CI/CD pipelines or home labs.
How to Install DockFlare
Requirements
- Docker and Docker Compose
- A Cloudflare account
- A domain connected to Cloudflare
- A Cloudflare API Token with:
- Zone DNS edit
- Zero Trust policy management
- Tunnel management
Step 1: Create Your Project Directory
mkdir dockflare && cd dockflare
Step 2: Create the .env File
Create a file named .env with the following contents:
CLOUDFLARE_API_TOKEN=your_token_here
CLOUDFLARE_ACCOUNT_ID=your_account_id
CLOUDFLARE_ZONE_ID=your_zone_id
TZ=Asia/Kuala_Lumpur
Keep this file private!
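Before running compose, it can help to sanity-check that the file loads. The sketch below writes a placeholder .env (using the placeholder values from above, not real credentials) and sources it; on your machine you would skip the write step and just source your existing file.

```shell
# Write a placeholder .env (values are the article's placeholders;
# replace them with your real Cloudflare credentials)
cat > .env <<'EOF'
CLOUDFLARE_API_TOKEN=your_token_here
CLOUDFLARE_ACCOUNT_ID=your_account_id
CLOUDFLARE_ZONE_ID=your_zone_id
TZ=Asia/Kuala_Lumpur
EOF

# Source it and fail loudly if a required variable is missing
set -a          # export every variable sourced below
. ./.env
set +a
echo "zone: ${CLOUDFLARE_ZONE_ID:?must be set}"
```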
Step 3: Create docker-compose.yml
version: '3.8'
services:
  dockflare:
    image: alplat/dockflare:stable
    container_name: dockflare
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - "5000:5000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - dockflare_data:/app/data
    labels:
      - cloudflare.tunnel.enable=true
      - cloudflare.tunnel.hostname=dockflare.yourdomain.com
      - cloudflare.tunnel.service=http://dockflare:5000
volumes:
  dockflare_data:
Step 4: Deploy DockFlare
docker compose up -d
Access the UI: http://localhost:5000
Exposing a Docker Service
Here's an example of exposing a service like myapp running on port 8080:
services:
  myapp:
    image: myapp:latest
    labels:
      cloudflare.tunnel.enable: "true"
      cloudflare.tunnel.hostname: "app.yourdomain.com"
      cloudflare.tunnel.service: "http://myapp:8080"
      cloudflare.tunnel.access.policy: "authenticate"
      cloudflare.tunnel.access.allowed_idps: "your-idp-uuid"
This will automatically:
- Create a Cloudflare Tunnel
- Point your subdomain to it
- Enforce secure login
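Conversely, for a service you want publicly reachable without a login prompt, the same labels can be used with the "bypass" access policy mentioned later in this article. The sketch below assumes a hypothetical status-page service and hostname; only the label names come from the examples above.

```yaml
services:
  status:
    image: myapp:latest   # hypothetical public status page
    labels:
      cloudflare.tunnel.enable: "true"
      cloudflare.tunnel.hostname: "status.yourdomain.com"
      cloudflare.tunnel.service: "http://status:8080"
      cloudflare.tunnel.access.policy: "bypass"   # public, no authentication
```

Use bypass only for services that are genuinely safe to expose; everything else should keep the authenticate policy.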
Add Non-Docker Services
Want to expose your home router or NAS?
- Go to DockFlare UI.
- Click "Add Hostname".
- Enter:
- Hostname (e.g., nas.yourdomain.com)
- Internal IP/port
- Access policy (bypass/authenticate)
- Done!
This works for any service, not just Docker.
Configuring Zero Trust Access
To secure your services:
- Go to Cloudflare Zero Trust dashboard
- Add an identity provider (Google, GitHub, etc.)
- Use the IDP UUID in your container labels
- Example:
cloudflare.tunnel.access.policy: authenticate
cloudflare.tunnel.access.allowed_idps: abc123-def456
cloudflare.tunnel.access.session_duration: 8h
Advanced Tips
- Expose multiple hostnames:
cloudflare.tunnel.hostname=api.domain.com,admin.domain.com
- Customize session duration:
cloudflare.tunnel.access.session_duration=12h
- Monitor logs via the web UI or docker logs dockflare
Resources
- GitHub: ChrispyBacon-dev/DockFlare
- Docker Compose Docs: docker.com/compose
- Cloudflare Tunnels Guide: developers.cloudflare.com
Conclusion
DockFlare is a game-changer for developers, sysadmins, and homelabbers who want an easy, secure, and automated way to expose their applications to the web. With support for Cloudflare Tunnels, Zero Trust Access, DNS automation, and a clean UI, it's the only tool you'll need to publish your services safely.
No more port forwarding. No more SSL headaches.
Just Docker + DockFlare + Cloudflare = Done.
The post DockFlare: Securely Expose Docker Services with Cloudflare Tunnels appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
23 May 2025 6:55am GMT