13 Feb 2026

feedFedora People

Peter Czanik: The syslog-ng Insider 2026-02: stats-exporter; blank filter; Kafka source

13 Feb 2026 10:34am GMT

Fedora Community Blog: Community Update – Week 07 2026


This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team is also moving forward some initiatives inside the Fedora Project.

Week: 09 - 13 February 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It's responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It's responsible for releases, retirement process of packages and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

UX

This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 07 2026 appeared first on Fedora Community Blog.

13 Feb 2026 10:00am GMT

Remi Collet: ⚙️ PHP version 8.4.18 and 8.5.3


RPMs of PHP version 8.5.3 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.18 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noticed:

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

13 Feb 2026 5:42am GMT

12 Feb 2026


Brian (bex) Exelbierd: Building a tiny ephemeral draft sharing system on Hedgedoc


This yak is now shaved!

me

I've been working on two submissions I want to put into the CFP for installfest.cz and had them at a "man it'd be nice to have someone else read and comment on this" level of done. Normally when this happens I have to psych myself up for it, both because receiving feedback can be hard and because I have to do a format conversion. I tend to write in markdown in "all the places" and sharing a document for edits has typically meant pasting it into something like Google Docs or Office 365, where even if it still looks like markdown … it isn't.

And that's when the yak walked into the room. Instead of just pasting my drafts into Google Docs and getting on with the reviews, I decided I needed to delay getting feedback and build the markdown collaborative editing system of my dreams. Classic yak shaving - solving a problem you don't actually need to solve in order to eventually do the thing you originally set out to do. What is Yak Shaving - a video by Matthew Miller if you're unfamiliar.

When I am done, I then have to take this text back to where it was originally going, often in good clean markdown (this blog post is in markdown!). This rigmarole is tiring. I also dislike that the go-to tools for this had, for me, turned into an exercise in ensuring guests could access a document or collecting someone's login IDs for yet another system.

I knew there had to be a better way. Then it hit me. When markdown started to take off, a slew of markdown collaborative editing sites sprang up, often modeled on the older Etherpad. Well, several are still around. I looked at online options, as I tend to prefer using a service when I can so I don't get more sysadmin work to do.

I hit three snags in picking one:

  1. I don't like being on a free tier when I don't understand how it is supported. While I don't know that anyone in this space is nefarious, the world is trending in a specific direction. I don't mind paying, but this was also not going to generate enough value to warrant serious payments.
  2. The project that first came to mind for markdown collaboration went open core back in 2019. Open source business models are hard, and doing open core well is even harder. As you'll see below I had specific needs and I had a feeling I might run into the open core wall.
  3. One of the CFPs would actually benefit from implementing this as my example … bonus!

After examining a bunch of options, I settled on building something out of Hedgedoc. This was not an easy choice and the likelihood of entering analysis paralysis was super high. So I decided to try to force this to fit on a free tier Google GCP instance I have been running for years. It is the tiny e2-micro burstable instance, a literal thimble of compute.

This ruled out a lot of options. Privacy-first options need more compute just to do encryption work. A bunch of options want a server database (Postgres and friends), and a single-person instance should be fine on SQLite, in my opinion. All roads now ran to Hedgedoc. It was the only option that could run on SQLite, tolerate my tiny VM, still give me collaborative markdown, and seemed to have every feature required if I could make it work.

It wasn't all sunshine and happiness though. Hedgedoc is in the middle of writing version 2.0, which means 1.0 is frozen for anything except critical fixes and all efforts are focused on the future. Therefore, the documentation being a bit rough in places was something I was going to have to live with.

My core requirements were:

  1. Only I am allowed to create new notes
  2. Anyone with the "unguessable" url can edit and should not require an account to do so
  3. This should require next to zero system administration work and be easy to start and stop
  4. When I need more features, I should be able to extend this with a plugin for tools like Obsidian or Visual Studio Code.

And while it took longer than I'd hoped, it works. Here's how:

  1. Write yourself a configuration file for Hedgedoc

config.json:

{
  "production": {
    "sourceURL": "https://github.com/bexelbie/hedgedoc",
    "domain": "<url>",
    "host": "localhost",
    "protocolUseSSL": true,
    "loglevel": "info",
    "db": {
      "dialect": "sqlite",
      "storage": "/data/db/hedgedoc.sqlite"
    },
    "email": true,
    "allowEmailRegister": false,
    "allowAnonymous": false,
    "allowAnonymousEdits": true,
    "requireFreeURLAuthentication": true,
    "disableNoteCreation": false,
    "allowFreeURL": false,
    "enableStatsApi": false,
    "defaultPermission": "limited",
    "imageUploadType": "filesystem",
    "hsts": {
      "enable": true,
      "maxAgeSeconds": 31536000,
      "includeSubdomains": true,
      "preload": true
    }
  }
}

This sets a custom source URL for the fork I have made (more below), enables SSL, disables new account registration, and allows edits via unguessable URLs without requiring logins.

  2. Decide how you want to launch the container (I am using a quadlet) and provide some environment variables:
CMD_SESSION_SECRET="<secret>"
CMD_CONFIG_FILE=/hedgedoc/config.json
NODE_ENV=production

These just put it in production mode, point it at the config, and provide the only secret required.
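As a sketch, a rootless user quadlet wiring these pieces together might look something like this (the image reference and host paths are illustrative assumptions, not copied from my deployment; adjust to taste):

```ini
# ~/.config/containers/systemd/hedgedoc.container -- rootless user quadlet
[Unit]
Description=HedgeDoc collaborative markdown editor

[Container]
# Image reference is an assumption; pin whatever build you actually run.
Image=quay.io/hedgedoc/hedgedoc:latest
# The env file holds CMD_SESSION_SECRET, CMD_CONFIG_FILE, NODE_ENV from above.
EnvironmentFile=%h/hedgedoc/env
Volume=%h/hedgedoc/config.json:/hedgedoc/config.json:ro,Z
Volume=%h/hedgedoc/data:/data:Z
PublishPort=127.0.0.1:3000:3000

[Install]
WantedBy=default.target
```

With a file like that in place, `systemctl --user daemon-reload` followed by `systemctl --user start hedgedoc` should bring it up.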

  3. You're basically done. I happen to have put mine behind a Cloudflare tunnel and updated the main page of the site, but those are pretty straightforward.

More Yak Shaving

Naturally I planned to launch it, create my user id via the cli, and share my CFP submissions with the folks I wanted reviews from. Narrator: Naturally, that's not what happened.

I decided to push YAGNI1 out of the way and NEED IT! Specifically I forked the v1 code into a repository to add some features. The upstream is unlikely to want any of these so I will have to carry these patches. What I did:

  1. Hedgedoc will do color highlighting and gutter indicators so you can see which author added what text. Unfortunately, it didn't seem to be working. I was getting weak indicators (underlines instead of highlighting) and often nothing. So I fixed that.
  2. The colors for authorship are chosen randomly. I am a bit past my prime in the seeing department and it was hard to see the colors against the dark editor background, so I restricted color choices to those that are contrasting. It isn't perfect, but it is better.
  3. My particular setup involves a lot of guest editors. Normally I share to just a few folks, but sometimes to many. They'll all be anonymous. Hedgedoc doesn't track authorship colors for guests, so I patched in a system to generate color markings for anonymous editors.
  4. A feature I always loved in Etherpad was that you could temporarily hide the authorship colors when you just wanted to "read the document." So I added a button for that. While I was doing that I discovered that there is a separate toggle to switch the editor into light mode, but I couldn't see it because the status bar was black and it was set to .2 opacity!! I fixed that too. Also, now the status bar switches when the editor switches.
  5. Comments, it turns out, are needed. So I coded in rudimentary support for CriticMarkup comments.

I have other ideas, but instead I am going to stop and let YAGNI win for a while. Besides, hopefully 2.0 will ship soon and render all of this unneeded.

So there you go, now if you want to offer your assistance to help me write something, I'll send you a link and you can go to town on our shared work. If you want to see more about this, well, let's see if Installfest.cz thinks you should or not :D - and whether this yak decides to grow its hair back.

  1. YAGNI: You Ain't Gonna Need It - a philosophy that reminds us that features we dream up aren't needed until an actual use comes along (or a paying customer). This also applies to engineering for future ideas when those ideas aren't committed to yet.

12 Feb 2026 12:00pm GMT

Fedora Magazine: Save the Date: Fedora Council Video Meeting on 2026 Strategy Summit

Fedora Magazine's avatar The Fedora Project community

The Fedora Council is hosting a public video meeting to discuss the outcomes of the recent Fedora Council 2026 Strategy Summit. Fedora Project Leader Jef Spaleta will present a summary of the Summit, outlining the strategic direction for Fedora in 2026. Following the presentation, there will be an opportunity for the community to ask questions live to the Fedora Council during the call.

How to Participate

We look forward to seeing you there!

12 Feb 2026 8:00am GMT

11 Feb 2026


Ben Cotton: What does it mean to be welcoming?


On episode 101 of The Community Pulse, the hosts discussed what it means to be welcoming. Not all "welcoming" is the same. Mary Thengvall described what she called "country-club-type welcoming." In this style of welcoming, you're welcome so long as you adapt to the community. It promotes homogeneity. Those who can't or won't become like the group are no longer welcome. The community is welcoming…sort of.

To contrast that style, another host (whose voice I did not recognize among the other three hosts, sorry!) suggested that communities should evolve and change when new people join. I'll go a step further: the whole point of adding community members is to change the community in some way.

Of course, there are certain non-negotiable behavior boundaries. These are in service of protecting the vibrancy and well-being of the individuals and the community as a whole, not to promote homogeneous behavior. You don't want to add people to the community who will make it worse; you want to change it for the better.

Diversifying your community's skills, experiences, and culture produces a more resilient community and better technical outcomes. Welcoming communities keep people around longer, too. What's not to love?

Most communities are not intentionally unwelcoming, or even country-club-style welcoming. But intending to be welcoming isn't the same as being welcoming in practice. You can't make one decision and be done - welcoming takes active, ongoing effort.

It starts with having - and enforcing - a code of conduct. The goal of enforcement is encouraging behavior within the bounds of what's acceptable, not punishing bad behavior. Prospective newcomers need to see that the community is a welcoming space, and that often begins before you ever hear from them. If your mailing lists/forums/chats/whatever don't reflect the kinds of behavior people want to be around, they'll stay away.

Being actively welcoming includes mentoring and guiding members not just when they join, but as they gain experience in the community. This doesn't have to be a huge, formalized process, but everyone should feel supported in making contributions and learning in breadth or depth. This relationship building is what makes communities "sticky", which is key for retaining contributors who have plenty of other things they could be doing with the time they're choosing to give you. Recognize the value that people bring and acknowledge the positive ways they've changed your community.

Of course, you need to make sure what you're doing is working. Do people stick around or do they show up briefly and leave? Do they never come in the first place? If you find that your community isn't growing or sustaining in a way that you expect, check to see if you're being as welcoming as you think you are.

This post's featured photo by Christopher Alvarenga on Unsplash.

The post What does it mean to be welcoming? appeared first on Duck Alignment Academy.

11 Feb 2026 12:00pm GMT

10 Feb 2026


Simon de Vlieger: Bootable containers on the Raspberry Pi

10 Feb 2026 6:00am GMT

07 Feb 2026


Kevin Fenzi: misc fedora bits 1st week of feb 2026

Scrye into the crystal ball

Welcome to a bit of recap of the first week of February. It will be a shorter one today...

Fedora 44 Branching

The big news this week was the Fedora 44 branching off rawhide. This is by far the most complicated part of the release. There's updates that have to happen in a ton of places all in the right order and with the right content.

Things didn't start when they were supposed to (tuesday morning), because we had some last minute mass rebuilds (golang and ghc). Then, they didn't start wed morning because we were trying to get the gnome 50 update to pass gating. Finally on thursday we just ended up unpushing that update and starting the process.

This time the releng side was run by Patrik. It's the first time he's done this process, but he did a great job! He asked questions at each step and we were able to clarify and reorder the documentation so I hope things will be even more clear and easy next cycle.

You can see the current SOP on it (before changes from this cycle): https://docs.fedoraproject.org/en-US/infra/release_guide/sop_mass_branching/ Look at all those steps!

This was also a bit of a long week because I am in PST and Patrik is in CET, so I had to get up early and he had to stay late. Timezones are annoying. :)

Anyhow, I think things went quite smoothly. We got rawhide and branched composes right away, and only a few minor items to clean up and figure out how to do better.

Sprint planning meeting again monday

We had our last sprint planning meeting almost two weeks ago, so on monday it's time for another one. We did manage to run it in matrix, and although we did run over time I think it went not too badly.

I'll probably do some prep work on things this weekend for it.

But if anyone wants to join in/read back it will be in #meeting-3:fedoraproject.org at 15:00 UTC on matrix.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116030844840004998

07 Feb 2026 6:34pm GMT

06 Feb 2026


Brian (bex) Exelbierd: op-secret-manager: A SUID Tool for Secret Distribution


Getting secrets from 1Password to applications running on Linux keeps forcing a choice I don't want to make. Manual retrieval works until you get more than a couple of things … then you need something more. There are lots of options, but they all felt awkward or heavy, so I wrote op-secret-manager to fill the gap: a single-binary tool that fetches secrets from 1Password and writes them to per-user directories. No daemon, no persistent state, no ceremony.

The Problem: Secret Zero on Multi-User Systems

The "secret zero" problem is fundamental: you need a first credential to unlock everything else. On a multi-user Linux system, this creates friction. Different users (application accounts like postgres, redis, or human operators) need different secrets. You want to centralize management (1Password) but local distribution without exposing credentials across user boundaries. You also don't want to solve the "secret zero" problem multiple times or have a bunch of first credentials saved in random places all over the disk.

Existing approaches each carry costs:

What I wanted: the postgres user runs a command, secrets appear in /run/user/1001/secrets/, done.

How It Works

The tool uses a mapfile to define which secrets go where:

postgres   op://vault/db/password         db_password
postgres   op://vault/db/connection       connection_string
redis      op://vault/redis/auth          redis_password

Each line maps a username, a 1Password secret reference, and an output path. Relative paths expand to /run/user/<uid>/secrets/. Absolute paths work if the user has write permission.
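That mapping and path expansion can be sketched in a few lines (illustrative Python, not the tool's actual Go code; `parse_mapfile_line` and `resolve_output` are hypothetical names):

```python
def parse_mapfile_line(line: str) -> tuple[str, str, str]:
    """Split one whitespace-separated mapfile line into
    (username, 1Password secret reference, output path)."""
    user, ref, out = line.split()
    return user, ref, out

def resolve_output(uid: int, out_path: str) -> str:
    """Expand a relative output path into the user's runtime secrets
    directory; absolute paths pass through unchanged."""
    if out_path.startswith("/"):
        return out_path
    return f"/run/user/{uid}/secrets/{out_path}"
```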

The "secret zero" challenge is now centralized through the use of a single API key file that all users can access. But the API key needs protection from unprivileged reads and ideally from the users themselves. This is where SUID comes in … carefully.

Privilege Separation Design

The security model uses SUID elevation to a service account (not root), reads protected configuration, then immediately drops privileges before touching the network or filesystem.

This has not been independently security audited. Treat it as you would any custom SUID program: read the source, understand the threat model, and test it in your environment before deploying broadly.

The flow:

  1. Binary is SUID+SGID to op:op (an unprivileged service account)
  2. Process starts with elevated privileges, reads:
    • API key from /etc/op-secret-manager/api (mode 600, owned by op)
    • Mapfile from /etc/op-secret-manager/mapfile (typically mode 640, owned by op:op or root:op)
  3. Drops all privileges to the real calling user
  4. Validates that the calling user appears in the mapfile
  5. Fetches secrets from 1Password
  6. Writes secrets as the real user to /run/user/<uid>/secrets/

Because the network calls and writes happen after the privilege drop, the filesystem automatically enforces isolation. User postgres cannot write to redis's directory. The secrets land with the correct ownership without additional chown operations.
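The ordering of the drop is the load-bearing detail: supplementary groups and the gid must go before the uid, because once the uid is dropped the process loses the right to change the others. A concept sketch (Python for illustration; the tool itself is written in Go):

```python
import os

def drop_privileges() -> None:
    """Drop from the SUID/SGID identity back to the real calling user.
    Groups first, then gid, then uid; the uid drop comes last and is
    irreversible."""
    real_uid, real_gid = os.getuid(), os.getgid()
    if os.geteuid() != real_uid:
        os.setgroups([real_gid])  # shed the service account's groups
    os.setgid(real_gid)           # drop effective gid
    os.setuid(real_uid)           # drop effective uid
```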

Why SUID to a Service Account?

Elevating to root would be excessive. Elevating to a dedicated, unprivileged service account constrains the blast radius. If someone compromises the binary, they get the privileges of op (which can read one API key) rather than full system access.

Alternatives considered:

The mapfile provides access control: it defines which users can request which secrets. The filesystem enforces it: even if you bypass the mapfile check, you can't write to another user's runtime directory. While you would theoretically be able to harvest a secret, you won't be able to modify what the other user uses. This is key because a secret may not actually be "secret." I have found it useful to centralize some configuration management, like API endpoint addresses, with this tool.

Root Execution

Allowing root to use the tool required special handling. The risk is mapfile poisoning: an attacker modifies the mapfile to make root write secrets to dangerous locations.

The mitigation: root execution is only permitted if the mapfile is owned by root:op with no group or world write bits. If you can create a root-owned, properly-permissioned file, you already have root access and don't need this tool for privilege escalation. The SGID bit on the binary lets the service account, op, read the mapfile even though it is owned by root.

Practical Integration: Podman Quadlets

My primary use case is systemd-managed containers. Podman Quadlets make this concise. This example is of a rootless user Quadlet (managed via systemctl --user), not a system service.

[Unit]
Description=Application Container
After=network-online.target

[Container]
Image=docker.io/myapp:latest
Volume=/run/user/%U/secrets:/run/secrets:ro,Z
Environment=DB_PASSWORD_FILE=/run/secrets/db_password
ExecStartPre=/usr/local/bin/op-secret-manager
ExecStopPost=/usr/local/bin/op-secret-manager --cleanup

[Service]
Restart=always

[Install]
WantedBy=default.target

ExecStartPre fetches secrets before the container starts. The container sees them at /run/secrets/ (read-only). ExecStopPost removes them on shutdown. The application reads secrets from files (not environment variables), avoiding the "secrets in env" problem where env or a log dump leaks credentials.
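The `*_FILE` convention the container relies on is trivial to consume from the application side; a sketch (hypothetical helper name, any language works the same way):

```python
import os
from pathlib import Path

def read_secret(env_var: str = "DB_PASSWORD_FILE") -> str:
    """Read a secret from the file named by a *_FILE environment
    variable, as wired up in the quadlet above."""
    return Path(os.environ[env_var]).read_text().strip()
```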

The secrets directory is a tmpfs (memory-backed /run), so nothing touches disk. If lingering is enabled for the user (loginctl enable-linger), the directory persists across logins.

Trade-offs and Constraints

This design makes specific compromises for simplicity:

No automatic rotation. The tool runs, fetches, writes, exits. If a secret changes in 1Password, you need to re-run the tool (or restart the service). For scenarios requiring frequent rotation, a persistent agent might be better. For most use cases, rotation happens infrequently enough that ExecReload or a manual re-fetch works fine.

Filesystem permissions are the security boundary. If an attacker bypasses Unix file permissions (kernel exploit, root compromise), the API key is exposed. This is consistent with how /etc/shadow or SSH host keys are protected. File permissions are the Unix-standard mechanism. Encrypting the API key on disk would require storing the decryption key somewhere accessible to the SUID binary, recreating the same problem with added complexity.

Scope managed by 1Password service account. The shared API key is the critical boundary. If it's compromised, every secret it can access is exposed. Proper 1Password service account scoping (separate vaults, least-privilege grants, regular audits) is essential.

Mapfile poisoning risk for non-root. If an attacker can modify the mapfile, they can make users write secrets to unintended locations. This is mitigated by restrictive mapfile permissions (typically root:op with mode 640). The filesystem still prevents writes to directories the user doesn't own, but absolute paths could overwrite user-owned files.

No cross-machine coordination. This is a single-host tool. Distributing secrets to a cluster requires running the tool on each node or using a different solution.

Implementation Details Worth Noting

The Go implementation uses the 1Password SDK rather than shelling out to op CLI. This avoids parsing CLI output and handles authentication internally.

Path sanitization prevents directory traversal (.. is rejected). Absolute paths are allowed but subject to the user's own filesystem permissions after privilege drop.
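The traversal check amounts to rejecting any `..` path component before the path is ever used (illustrative Python; the real check lives in the Go source):

```python
from pathlib import PurePosixPath

def sanitize(out_path: str) -> str:
    """Reject directory traversal in a mapfile output path; absolute
    paths are allowed, `..` components are not."""
    if ".." in PurePosixPath(out_path).parts:
        raise ValueError(f"traversal rejected: {out_path}")
    return out_path
```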

The cleanup mode (--cleanup) removes files based on the mapfile. It only deletes files, not directories, and only if they match entries for the current user. This prevents accidental removal of shared directories.

A verbose flag (-v) exists primarily for debugging integration issues. Most production usage doesn't need it.

Availability

The project is on GitHub under GPLv3. Pre-built binaries for Linux amd64 and arm64 are available in releases.

This isn't the right tool for every scenario. If you need dynamic rotation, audit trails beyond what 1Password provides, or distributed coordination, look at Vault or a cloud provider's secret manager. If you're running Kubernetes, use native secret integration.

But for the specific case of "I have a few Linux boxes, some containers, and a 1Password account; I want secrets distributed without adding persistent infrastructure," this does the job.

06 Feb 2026 11:40am GMT

Fedora Community Blog: Community Update – Week 6


This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team is also moving forward some initiatives inside the Fedora Project.

Week: 02 Feb - 05 Feb 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It's responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It's responsible for releases, retirement process of packages and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

UX

This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 6 appeared first on Fedora Community Blog.

06 Feb 2026 10:00am GMT

Fedora Magazine: How to make a local open source AI chatbot who has access to Fedora documentation


If you followed along with my blog, you'd have a chatbot running on your local Fedora machine. (And if not, no worries as the scripts below implement this chatbot!) Our chatbot talks, and has a refined personality, but does it know anything about the topics we're interested in? Unless it has been trained on those topics, the answer is "no".

I think it would be great if our chatbot could answer questions about Fedora. I'd like to give it access to all of the Fedora documentation.

How does an AI know things it wasn't trained on?

A powerful and popular technique to give a body of knowledge to an AI is known as RAG, Retrieval Augmented Generation. It works like this:

If you just ask an AI "what color is my ball?" it will hallucinate an answer. But if instead you say "I have a green box with a red ball in it. What color is my ball?" it will answer that your ball is red. RAG uses a system external to the LLM to insert that "I have a green box with a red ball in it" part into the question you are asking the LLM. We do this with a special database of knowledge that takes a prompt like "what color is my ball?" and finds records that match that query. If the database contains a document with the text "I have a green box with a red ball in it", it will return that text, which can then be included along with your original question.

ex:

"What color is my ball?"

"Your ball is the color of a sunny day, perhaps yellow? Does that sound right to you?"

"I have a green box with a red ball in it. What color is my ball?"

"Your ball is red. Would you like to know more about it?"

The question we'll ask for this demonstration is "What is the recommended tool for upgrading between major releases on Fedora Silverblue?"

The answer I'd be looking for is "ostree", but when I ask this of our chatbot now, I get answers like:

Red Hat Subscription Manager (RHSM) is recommended for managing subscriptions and upgrades between major Fedora releases.

You can use the Fedora Silver Blue Upgrade Tool for a smooth transition between major releases.

You can use the `dnf distro-sync` command to upgrade between major releases in Fedora Silver Blue. This command compares your installed packages to the latest packages from the Fedora Silver Blue repository and updates them as needed.

These answers are all very wrong, and spoken with great confidence. Here's hoping our RAG upgrade fixes this!

Docs2DB - An open source tool for RAG

We are going to use the Docs2DB RAG database application to give our AI knowledge. (note, I am the creator of Docs2DB!)

A RAG tool consists of three main parts: the part that creates the database, ingesting the source data the database holds; the database itself, which holds the data; and the part that queries the database, finding the text that is relevant to the query at hand. Docs2DB addresses all of these needs.

Gathering source data

This section describes how to use Docs2DB to build a RAG database from Fedora Documentation. If you would like to skip this section and just download a pre-built database, here is how you do it:

cd ~/chatbot
curl -LO https://github.com/Lifto/FedoraDocsRAG/releases/download/v1.1.1/fedora-docs.sql
sudo dnf install -y uv podman podman-compose postgresql
uv python install 3.12
uvx --python 3.12 docs2db db-start
uvx --python 3.12 docs2db db-restore fedora-docs.sql

If you do download the pre-made database then skip ahead to the next section.

Now we are going to see how to make a RAG database from source documentation. Note that the pre-built database, downloaded in the curl command above, uses all of the Fedora documentation, whereas in this example we only ingest the "quick docs" portion. FedoraDocsRAG, on GitHub, is the project that builds the complete database.

To populate its database, Docs2DB ingests a folder of documents. Let's get that folder together.

There are about twenty different Fedora document repositories, but we will only be using the "quick docs" for this demo. Get the repo:

git clone https://pagure.io/fedora-docs/quick-docs.git

Fedora docs are written in AsciiDoc. Docs2DB can't read AsciiDoc, but it can read HTML. (The convert.sh script is available at the end of this article.) Copy convert.sh into the quick-docs repo and run it; it creates an adjacent quick-docs-html folder.

sudo dnf install podman podman-compose
cd quick-docs
curl -LO https://gist.githubusercontent.com/Lifto/73d3cf4bfc22ac4d9e493ac44fe97402/raw/convert.sh
chmod +x convert.sh
./convert.sh
cd ..

Now let's ingest the folder with Docs2DB. The common way to use Docs2DB is to install it from PyPI and use it as a command-line tool.

A word about uv

For this demo we're going to use uv for our Python environment. The use of uv has been catching on, but because not everybody I know has heard of it, I want to introduce it. Think of uv as a replacement for venv and pip. With venv you first create a virtual environment, then "activate" it on each use so that, magically, calling Python gets you the environment's Python rather than the system Python. With uv there is no magic: you call uv explicitly each time. We use uv here in a way that creates a temporary environment for each invocation.
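To make the contrast concrete, here is a minimal sketch. The /tmp path is an arbitrary example, and the uvx line assumes uv is already installed, so it is shown commented out.

```shell
# venv workflow: activation "magically" rebinds python to the env's copy.
python3 -m venv /tmp/uv-demo-venv
source /tmp/uv-demo-venv/bin/activate
command -v python    # now resolves inside /tmp/uv-demo-venv/bin
deactivate

# uv workflow: no activation; the interpreter and tool are named
# explicitly on every call, in a throwaway cached environment.
# uvx --python 3.12 docs2db --help
```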

Install uv and Podman on your system:

sudo dnf install -y uv podman podman-compose
# These examples require the more robust Python 3.12
uv python install 3.12
# This will run Docs2DB without making a permanent installation on your system
uvx --python 3.12 docs2db ingest quick-docs-html/

Only if you are curious! What Docs2DB is doing

If you are curious, you may note that Docs2DB made a docs2db_content folder. In there you will find json files of the ingested source documents. To build the database, Docs2DB ingests the source data using Docling, which generates json files from the text it reads in. The files are then "chunked" into the small pieces that can be inserted into an LLM prompt. The chunks then have "embeddings" calculated for them so that during the query phase the chunks can be looked up by "semantic similarity" (e.g.: "computer", "laptop" and "cloud instance" can all map to a related concept even if their exact words don't match). Finally, the chunks and embeddings are loaded into the database.

Build the database

The following commands complete the database build process:

uv tool run --python 3.12 docs2db chunk --skip-context
uv tool run --python 3.12 docs2db embed
uv tool run --python 3.12 docs2db db-start
uv tool run --python 3.12 docs2db load

Now let's do a test query and see what we get back:

uvx --python 3.12 docs2db-api query "What is the recommended tool for upgrading between major releases on Fedora Silverblue" --format text --max-chars 2000 --no-refine

On my terminal I see several chunks of text, separated by lines of -. One of those chunks says:

"Silverblue can be upgraded between major versions using the ostree command."

Note that this is not an answer to our question yet! It is just a quote from the Fedora docs, and it is precisely the sort of quote we want to supply to the LLM so that it can answer our question. Recall the earlier example about "I have a green box with a red ball in it"? The statement the RAG engine found about ostree plays the same role for this question about upgrading Fedora Silverblue. We must now pass it on to the LLM so the LLM can use it to answer our question.

Hooking it in: Connecting the RAG database to the AI

Later in this article you'll find talk.sh: our local, open-source, LLM-based, verbally communicating AI, implemented as a single bash script. To run it yourself you need to install a few components; this blog walks you through the whole process. The talk.sh script captures voice input, turns it into text, splices that text into a prompt that is sent to the LLM, and finally speaks the response back.

To plug the RAG results into the LLM we edit the prompt. Look at step 3 in talk.sh and you see we are injecting the RAG results using the variable $CONTEXT. This way when we ask the LLM a question, it will respond to a prompt that basically says "You are a helper. The Fedora Docs says ostree is how you upgrade Fedora Silverblue. Answer this question: How do you upgrade Fedora Silverblue?"
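In miniature, with hypothetical stand-in values for the variables that talk.sh computes at runtime, the splice looks like this:

```shell
# Stand-ins for the values talk.sh obtains from the RAG query and the mic.
CONTEXT="Silverblue can be upgraded between major versions using the ostree command."
TRANSCRIPT="How do you upgrade Fedora Silverblue?"

# The retrieved chunk is spliced into the prompt ahead of the question.
PROMPT="You are a helper.
Relevant Fedora Documentation:
$CONTEXT
User: $TRANSCRIPT
Assistant:"

echo "$PROMPT"
```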

Note: talk.sh is also available here:
https://gist.github.com/Lifto/2fcaa2d0ebbd8d5c681ab33e7c7a6239

Testing it

Run talk.sh and ask:

"What is the recommended tool for upgrading between major releases on Fedora Silverblue"

And we get:

"Ostree command is recommended for upgrading Fedora Silver Blue between major releases. Do you need guidance on using it?"

Sounds good to me!

Knowing things

Our AI can now draw on the knowledge contained in documents. This technique, RAG (Retrieval Augmented Generation), adds relevant data from an ingested source to a prompt before sending that prompt to the LLM, so the LLM generates its response in light of that data.

Try it yourself! Ingest a library of documents and have your AI answer questions with its newfound knowledge!


AI Attribution: The convert.sh and talk.sh scripts in this article were written by ChatGPT 5.2 under my direction and review. The featured image was generated using Google Gemini.

convert.sh

#!/usr/bin/env bash

OUT_DIR="$PWD/../quick-docs-html"
mkdir -p "$OUT_DIR"

podman run --rm \
  -v "$PWD:/work:Z" \
  -v "$OUT_DIR:/out:Z" \
  -w /work \
  docker.io/asciidoctor/docker-asciidoctor \
  bash -lc '
    set -u
    ok=0
    fail=0
    while IFS= read -r -d "" f; do
      rel="${f#./}"
      out="/out/${rel%.adoc}.html"
      mkdir -p "$(dirname "$out")"
      echo "Converting: $rel"
      if asciidoctor -o "$out" "$rel"; then
        ok=$((ok+1))
      else
        echo "FAILED: $rel" >&2
        fail=$((fail+1))
      fi
    done < <(find modules -type f -path "*/pages/*.adoc" -print0)

    echo
    echo "Done. OK=$ok FAIL=$fail"
  '

talk.sh

#!/usr/bin/env bash

set -e

# Path to audio input
AUDIO=input.wav

# Step 1: Record from mic
echo "🎙 Speak now..."
arecord -f S16_LE -r 16000 -d 5 -q "$AUDIO"

# Step 2: Transcribe using whisper.cpp
TRANSCRIPT=$(./whisper.cpp/build/bin/whisper-cli \
  -m ./whisper.cpp/models/ggml-base.en.bin \
  -f "$AUDIO" \
  | grep '^\[' \
  | sed -E 's/^\[[^]]+\][[:space:]]*//' \
  | tr -d '\n')
echo "🗣 $TRANSCRIPT"

# Step 3: Get relevant context from RAG database
echo "📚 Searching documentation..."
CONTEXT=$(uv tool run --python 3.12 docs2db-api query "$TRANSCRIPT" \
  --format text \
  --max-chars 2000 \
  --no-refine \
  2>/dev/null || echo "")

if [ -n "$CONTEXT" ]; then
  echo "📄 Found relevant documentation:"
  echo "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
  echo "$CONTEXT"
  echo "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
else
  echo "📄 No relevant documentation found"
fi

# Step 4: Build prompt with RAG context
PROMPT="You are Brim, a steadfast butler-like advisor created by Ellis. 
Your pronouns are they/them. You are deeply caring, supportive, and empathetic, but never effusive. 
You speak in a calm, friendly, casual tone suitable for text-to-speech. 
Rules: 
- Reply with only ONE short message directly to Ellis. 
- Do not write any dialogue labels (User:, Assistant:, Q:, A:), or invent more turns.
- ≤100 words.
- If the documentation below is relevant, use it to inform your answer.
- End with a gentle question, then write <eor> and stop.
Relevant Fedora Documentation:
$CONTEXT
User: $TRANSCRIPT
Assistant:"

# Step 5: Get LLM response using llama.cpp
RESPONSE=$(
  LLAMA_LOG_VERBOSITY=1 ./llama.cpp/build/bin/llama-completion \
    -m ./llama.cpp/models/microsoft_Phi-4-mini-instruct-Q4_K_M.gguf \
    -p "$PROMPT" \
    -n 150 \
    -c 4096 \
    -no-cnv \
    -r "<eor>" \
    --simple-io \
    --color off \
    --no-display-prompt
)

# Step 6: Clean up response
RESPONSE_CLEAN=$(echo "$RESPONSE" | sed -E 's/<eor>.*//I')
RESPONSE_CLEAN=$(echo "$RESPONSE_CLEAN" | sed -E 's/^[[:space:]]*Assistant:[[:space:]]*//I')

echo ""
echo "🤖 $RESPONSE_CLEAN"

# Step 7: Speak the response
echo "$RESPONSE_CLEAN" | espeak

06 Feb 2026 8:00am GMT

Remi Collet: 💎 PHPUnit 13

Remi Collet's avatar

RPMs of PHPUnit version 13 are available in the remi repository for Fedora ≥ 42 and Enterprise Linux (CentOS, RHEL, Alma, Rocky...).

Documentation:

ℹ️ This new major version requires PHP ≥ 8.4 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 8, 9, 10, 11, and 12.

Installation:

dnf --enablerepo=remi install phpunit13

Notice: This tool is an essential component of PHP QA in Fedora. This version should be available soon in the Fedora ≥ 43 official repository (19 new packages).

06 Feb 2026 7:59am GMT

05 Feb 2026

feedFedora People

Christof Damian: Friday Links 26-05

05 Feb 2026 11:00pm GMT

Fedora Infrastructure Status: Fedora 44 Mass Branching

05 Feb 2026 6:45pm GMT

Vedran Miletić: On having leverage and using it for pushing open-source software adoption

05 Feb 2026 2:00pm GMT

Vedran Miletić: Free to know: Open access and open source

05 Feb 2026 2:00pm GMT