07 Feb 2026

Fedora People

Kevin Fenzi: misc fedora bits 1st week of feb 2026

Scrye into the crystal ball

Welcome to a bit of a recap of the first week of February. It will be a shorter one today...

Fedora 44 Branching

The big news this week was Fedora 44 branching off rawhide. This is by far the most complicated part of the release: there are updates that have to happen in a ton of places, all in the right order and with the right content.

Things didn't start when they were supposed to (Tuesday morning), because we had some last-minute mass rebuilds (golang and ghc). Then they didn't start Wednesday morning because we were trying to get the GNOME 50 update to pass gating. Finally, on Thursday, we just ended up unpushing that update and starting the process.

This time the releng side was run by Patrik. It was the first time he'd done this process, but he did a great job! He asked questions at each step, and we were able to clarify and reorder the documentation, so I hope things will be even clearer and easier next cycle.

You can see the current SOP (before changes from this cycle) at https://docs.fedoraproject.org/en-US/infra/release_guide/sop_mass_branching/. Look at all those steps!

This was also a bit of a long week because I am in PST and Patrik is in CET, so I had to get up early and he had to stay late. Timezones are annoying. :)

Anyhow, I think things went quite smoothly. We got rawhide and branched composes right away, with only a few minor items to clean up and figure out how to do better next time.

Sprint planning meeting again monday

We had our last sprint planning meeting almost two weeks ago, so on Monday it's time for another one. We did manage to run the last one in Matrix, and although we ran over time, I think it didn't go too badly.

I'll probably do some prep work on things this weekend for it.

But if anyone wants to join in or read back, it will be in #meeting-3:fedoraproject.org at 15:00 UTC on Matrix.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116030844840004998

07 Feb 2026 6:34pm GMT

06 Feb 2026

Fedora People

Brian (bex) Exelbierd: op-secret-manager: A SUID Tool for Secret Distribution


Getting secrets from 1Password to applications running on Linux keeps forcing a choice I don't want to make. Manual retrieval works until you get more than a couple of things … then you need something more. There are lots of options, but they all felt awkward or heavy, so I wrote op-secret-manager to fill the gap: a single-binary tool that fetches secrets from 1Password and writes them to per-user directories. No daemon, no persistent state, no ceremony.

The Problem: Secret Zero on Multi-User Systems

The "secret zero" problem is fundamental: you need a first credential to unlock everything else. On a multi-user Linux system, this creates friction. Different users (application accounts like postgres, redis, or human operators) need different secrets. You want to centralize management (1Password) but local distribution without exposing credentials across user boundaries. You also don't want to solve the "secret zero" problem multiple times or have a bunch of first credentials saved in random places all over the disk.

Existing approaches each carry costs:

What I wanted: the postgres user runs a command, secrets appear in /run/user/1001/secrets/, done.

How It Works

The tool uses a mapfile to define which secrets go where:

postgres   op://vault/db/password         db_password
postgres   op://vault/db/connection       connection_string
redis      op://vault/redis/auth          redis_password

Each line maps a username, a 1Password secret reference, and an output path. Relative paths expand to /run/user/<uid>/secrets/. Absolute paths work if the user has write permission.
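As an illustration, running the tool as the postgres user with the mapfile above leaves the two relative-path secrets under that user's runtime directory. This is just a sketch: the binary path matches the Quadlet example later in the post, and UID 1001 is assumed.

# Run as the postgres user (UID 1001 assumed for this sketch)
sudo -u postgres /usr/local/bin/op-secret-manager

# Relative paths from the mapfile expand under the caller's runtime directory
ls /run/user/1001/secrets/
# db_password  connection_string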

The "secret zero" challenge is now centralized through the use of a single API key file that all users can access. But the API key needs protection from unprivileged reads and ideally from the users themselves. This is where SUID comes in … carefully.

Privilege Separation Design

The security model uses SUID elevation to a service account (not root), reads protected configuration, then immediately drops privileges before touching the network or filesystem.

This has not been independently security audited. Treat it as you would any custom SUID program: read the source, understand the threat model, and test it in your environment before deploying broadly.

The flow:

  1. Binary is SUID+SGID to op:op (an unprivileged service account)
  2. Process starts with elevated privileges, reads:
    • API key from /etc/op-secret-manager/api (mode 600, owned by op)
    • Mapfile from /etc/op-secret-manager/mapfile (typically mode 640, owned by op:op or root:op)
  3. Drops all privileges to the real calling user
  4. Validates that the calling user appears in the mapfile
  5. Fetches secrets from 1Password
  6. Writes secrets as the real user to /run/user/<uid>/secrets/

Because the network calls and writes happen after the privilege drop, the filesystem automatically enforces isolation. User postgres cannot write to redis's directory. The secrets land with the correct ownership without additional chown operations.
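To make the ownership and mode requirements concrete, here is a minimal installation sketch. The op account, the paths, and the modes come from the description above; the exact commands are my assumption, not the project's documented installer, and api-key-file and mapfile stand for local files you prepared beforehand.

# Dedicated, unprivileged service account
sudo useradd --system --shell /usr/sbin/nologin op

# Binary owned by op:op with SUID+SGID (6755 = setuid + setgid + rwxr-xr-x)
sudo install -o op -g op -m 6755 op-secret-manager /usr/local/bin/op-secret-manager

# Protected configuration: API key readable only by op, mapfile root:op with mode 640
sudo install -d -o op   -g op -m 750 /etc/op-secret-manager
sudo install    -o op   -g op -m 600 api-key-file /etc/op-secret-manager/api
sudo install    -o root -g op -m 640 mapfile      /etc/op-secret-manager/mapfile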

Why SUID to a Service Account?

Elevating to root would be excessive. Elevating to a dedicated, unprivileged service account constrains the blast radius. If someone compromises the binary, they get the privileges of op (which can read one API key) rather than full system access.

Alternatives considered:

The mapfile provides access control: it defines which users can request which secrets. The filesystem enforces it: even if you bypass the mapfile check, you can't write to another user's runtime directory. While you would theoretically be able to harvest a secret, you won't be able to modify what the other user uses. This is key because a secret may not actually be "secret." I have found it useful to centralize some configuration management, like API endpoint addresses, with this tool.

Root Execution

Allowing root to use the tool required special handling. The risk is mapfile poisoning: an attacker modifies the mapfile to make root write secrets to dangerous locations.

The mitigation: root execution is only permitted if the mapfile is owned by root:op with no group or world write bits. If you can create a root-owned, properly-permissioned file, you already have root access and don't need this tool for privilege escalation. The SGID bit on the binary lets the service account, op, read the mapfile even though it is owned by root.
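A quick sanity check for that precondition, using standard tools (a sketch; the tool itself decides based on the actual ownership and mode bits):

# Expect owner=root group=op mode=640, i.e. no group or world write bits
stat -c 'owner=%U group=%G mode=%a' /etc/op-secret-manager/mapfile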

Practical Integration: Podman Quadlets

My primary use case is systemd-managed containers. Podman Quadlets make this concise. This example is of a rootless user Quadlet (managed via systemctl --user), not a system service.

[Unit]
Description=Application Container
After=network-online.target

[Container]
Image=docker.io/myapp:latest
Volume=/run/user/%U/secrets:/run/secrets:ro,Z
Environment=DB_PASSWORD_FILE=/run/secrets/db_password

[Service]
ExecStartPre=/usr/local/bin/op-secret-manager
ExecStopPost=/usr/local/bin/op-secret-manager --cleanup
Restart=always

[Install]
WantedBy=default.target

ExecStartPre fetches secrets before the container starts. The container sees them at /run/secrets/ (read-only). ExecStopPost removes them on shutdown. The application reads secrets from files (not environment variables), avoiding the "secrets in env" problem where env or a log dump leaks credentials.
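Inside the container, the application (or a small entrypoint wrapper) reads the secret from the mounted file rather than from the environment. A minimal sketch, assuming the DB_PASSWORD_FILE convention from the unit above; myapp and its --password-stdin flag are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Read the secret from the file mounted under /run/secrets; it never shows up in `env`
db_password="$(cat "${DB_PASSWORD_FILE:?DB_PASSWORD_FILE not set}")"

# Hand the value to the real application however it expects it (placeholder flags)
exec /usr/local/bin/myapp --password-stdin <<<"$db_password"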

The secrets directory is a tmpfs (memory-backed /run), so nothing touches disk. If lingering is enabled for the user (loginctl enable-linger), the directory persists across logins.
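If the account only ever runs services and never logs in interactively, enable lingering once so the runtime directory and user units stay up without a session:

# Keep /run/user/<uid> and this user's systemd units alive without a login session
sudo loginctl enable-linger postgres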

Trade-offs and Constraints

This design makes specific compromises for simplicity:

No automatic rotation. The tool runs, fetches, writes, exits. If a secret changes in 1Password, you need to re-run the tool (or restart the service). For scenarios requiring frequent rotation, a persistent agent might be better. For most use cases, rotation happens infrequently enough that ExecReload or a manual re-fetch works fine.
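In the Quadlet setup above, a restart of the unit is enough to pick up a rotated secret, because ExecStartPre runs the tool again; a manual re-fetch is just as simple. A sketch, assuming the unit is named myapp:

# Re-fetch secrets by hand...
/usr/local/bin/op-secret-manager

# ...or restart the unit; ExecStartPre fetches fresh copies before the container starts
systemctl --user restart myapp.service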

Filesystem permissions are the security boundary. If an attacker bypasses Unix file permissions (kernel exploit, root compromise), the API key is exposed. This is consistent with how /etc/shadow or SSH host keys are protected. File permissions are the Unix-standard mechanism. Encrypting the API key on disk would require storing the decryption key somewhere accessible to the SUID binary, recreating the same problem with added complexity.

Scope managed by 1Password service account. The shared API key is the critical boundary. If it's compromised, every secret it can access is exposed. Proper 1Password service account scoping (separate vaults, least-privilege grants, regular audits) is essential.

Mapfile poisoning risk for non-root. If an attacker can modify the mapfile, they can make users write secrets to unintended locations. This is mitigated by restrictive mapfile permissions (typically root:op with mode 640). The filesystem still prevents writes to directories the user doesn't own, but absolute paths could overwrite user-owned files.

No cross-machine coordination. This is a single-host tool. Distributing secrets to a cluster requires running the tool on each node or using a different solution.

Implementation Details Worth Noting

The Go implementation uses the 1Password SDK rather than shelling out to op CLI. This avoids parsing CLI output and handles authentication internally.

Path sanitization prevents directory traversal (.. is rejected). Absolute paths are allowed but subject to the user's own filesystem permissions after privilege drop.

The cleanup mode (--cleanup) removes files based on the mapfile. It only deletes files, not directories, and only if they match entries for the current user. This prevents accidental removal of shared directories.

A verbose flag (-v) exists primarily for debugging integration issues. Most production usage doesn't need it.
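For completeness, the two auxiliary modes mentioned above, as they would be invoked by hand:

# Verbose run for debugging integration issues
/usr/local/bin/op-secret-manager -v

# Remove this user's secret files listed in the mapfile
/usr/local/bin/op-secret-manager --cleanup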

Availability

The project is on GitHub under GPLv3. Pre-built binaries for Linux amd64 and arm64 are available in releases.

This isn't the right tool for every scenario. If you need dynamic rotation, audit trails beyond what 1Password provides, or distributed coordination, look at Vault or a cloud provider's secret manager. If you're running Kubernetes, use native secret integration.

But for the specific case of "I have a few Linux boxes, some containers, and a 1Password account; I want secrets distributed without adding persistent infrastructure," this does the job.

06 Feb 2026 11:40am GMT

Fedora Community Blog: Community Update – Week 6


This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team is also moving forward some initiatives inside the Fedora Project.

Week: 02 Feb - 05 Feb 2026

Fedora Infrastructure

This team is taking care of the day-to-day business regarding Fedora Infrastructure.
It's responsible for services running in the Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of the day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of the day-to-day business regarding Fedora releases.
It's responsible for releases, the package retirement process, and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

UX

This team is working on improving user experience, providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 6 appeared first on Fedora Community Blog.

06 Feb 2026 10:00am GMT

Fedora Magazine: How to make a local open source AI chatbot who has access to Fedora documentation


If you followed along with my blog, you'd have a chatbot running on your local Fedora machine. (And if not, no worries, as the scripts below implement this chatbot!) Our chatbot talks, and has a refined personality, but does it know anything about the topics we're interested in? Unless it has been trained on those topics, the answer is "no".

I think it would be great if our chatbot could answer questions about Fedora. I'd like to give it access to all of the Fedora documentation.

How does an AI know things it wasn't trained on?

A powerful and popular technique to give a body of knowledge to an AI is known as RAG, Retrieval Augmented Generation. It works like this:

If you just ask an AI "what color is my ball?" it will hallucinate an answer. But if instead you say "I have a green box with a red ball in it. What color is my ball?" it will answer that your ball is red. RAG is about using a system external to the LLM to insert that "I have a green box with a red ball in it" part into the question you are asking the LLM. We do this with a special database of knowledge that takes a prompt like "what color is my ball?" and finds records that match that query. If the database contains a document with the text "I have a green box with a red ball in it", it will return that text, which can then be included along with your original question.

For example:

"What color is my ball?"

"Your ball is the color of a sunny day, perhaps yellow? Does that sound right to you?"

"I have a green box with a red ball in it. What color is my ball?"

"Your ball is red. Would you like to know more about it?"

The question we'll ask for this demonstration is "What is the recommended tool for upgrading between major releases on Fedora Silverblue"

The answer I'd be looking for is "ostree", but when I ask this of our chatbot now, I get answers like:

Red Hat Subscription Manager (RHSM) is recommended for managing subscriptions and upgrades between major Fedora releases.

You can use the Fedora Silver Blue Upgrade Tool for a smooth transition between major releases.

You can use the `dnf distro-sync` command to upgrade between major releases in Fedora Silver Blue. This command compares your installed packages to the latest packages from the Fedora Silver Blue repository and updates them as needed.

These answers are all very wrong, and spoken with great confidence. Here's hoping our RAG upgrade fixes this!

Docs2DB - An open source tool for RAG

We are going to use the Docs2DB RAG database application to give our AI knowledge. (note, I am the creator of Docs2DB!)

A RAG tool consists of three main parts. There is the part that creates the database, ingesting the source data that the database holds. There is the database itself, which holds the data. And there is the part that queries the database, finding the text that is relevant to the query at hand. Docs2DB addresses all of these needs.

Gathering source data

This section describes how to use Docs2DB to build a RAG database from Fedora Documentation. If you would like to skip this section and just download a pre-built database, here is how you do it:

cd ~/chatbot
curl -LO https://github.com/Lifto/FedoraDocsRAG/releases/download/v1.1.1/fedora-docs.sql
sudo dnf install -y uv podman podman-compose postgresql
uv python install 3.12
uvx --python 3.12 docs2db db-start
uvx --python 3.12 docs2db db-restore fedora-docs.sql

If you do download the pre-made database then skip ahead to the next section.

Now we are going to see how to make a RAG database from source documentation. Note that the pre-built database, downloaded in the curl command above, uses all of the Fedora documentation, whereas in this example we only ingest the "quick docs" portion. FedoraDocsRAG, on GitHub, is the project that builds the complete database.

To populate its database, Docs2DB ingests a folder of documents. Let's get that folder together.

There are about twenty different Fedora document repositories, but we will only be using the "quick docs" for this demo. Get the repo:

git clone https://pagure.io/fedora-docs/quick-docs.git

Fedora docs are written in AsciiDoc. Docs2DB can't read AsciiDoc, but it can read HTML. (The convert.sh script is available at the end of this article.) Copy the convert.sh script into the quick-docs repo and run it; it creates an adjacent quick-docs-html folder.

sudo dnf install podman podman-compose
cd quick-docs
curl -LO https://gist.githubusercontent.com/Lifto/73d3cf4bfc22ac4d9e493ac44fe97402/raw/convert.sh
chmod +x convert.sh
./convert.sh
cd ..

Now let's ingest the folder with Docs2DB. The common way to use Docs2DB is to install it from PyPI and use it as a command-line tool.

A word about uv

For this demo we're going to use uv for our Python environment. The use of uv has been catching on, but because not everybody I know has heard of it, I want to introduce it. Think of uv as a replacement for venv and pip. When you use venv you first create a new virtual environment. Then, and on subsequent uses, you "activate" that virtual environment so that magically, when you call Python, you get the Python that is installed in the virtual environment you activated and not the system Python. The difference with uv is that you call uv explicitly each time. There is no "magic". We use uv here in a way that uses a temporary environment for each invocation.
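To make the contrast concrete, here is the same ingest step done both ways. The uvx invocation and the docs2db package name come from this article; the venv commands are the standard workflow, shown only for comparison, and assume the PyPI package installs a docs2db command.

# Classic venv + pip workflow: create, activate, install, then run
python3 -m venv .venv
source .venv/bin/activate
pip install docs2db
docs2db ingest quick-docs-html/
deactivate

# uv equivalent: no environment to create or activate, nothing left behind
uvx --python 3.12 docs2db ingest quick-docs-html/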

Install uv and Podman on your system:

sudo dnf install -y uv podman podman-compose
# These examples require the more robust Python 3.12
uv python install 3.12
# This will run Docs2DB without making a permanent installation on your system
uvx --python 3.12 docs2db ingest quick-docs-html/

Only if you are curious: what Docs2DB is doing

If you are curious, you may note that Docs2DB made a docs2db_content folder. In there you will find JSON files of the ingested source documents. To build the database, Docs2DB ingests the source data using Docling, which generates JSON files from the text it reads in. The files are then "chunked" into the small pieces that can be inserted into an LLM prompt. The chunks then have "embeddings" calculated for them so that during the query phase the chunks can be looked up by "semantic similarity" (e.g. "computer", "laptop", and "cloud instance" can all map to a related concept even if their exact words don't match). Finally, the chunks and embeddings are loaded into the database.
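If you want to peek at that intermediate output, something like the following works; the docs2db_content folder name comes from the paragraph above, while the exact file layout inside it is an assumption on my part:

# List a few of the ingested documents
find docs2db_content -name '*.json' | head -n 5

# Pretty-print one of them to see what Docling extracted
python3 -m json.tool "$(find docs2db_content -name '*.json' | head -n 1)" | head -n 40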

Build the database

The following commands complete the database build process:

uv tool run --python 3.12 docs2db chunk --skip-context
uv tool run --python 3.12 docs2db embed
uv tool run --python 3.12 docs2db db-start
uv tool run --python 3.12 docs2db load

Now let's do a test query and see what we get back:

uvx --python 3.12 docs2db-api query "What is the recommended tool for upgrading between major releases on Fedora Silverblue" --format text --max-chars 2000 --no-refine

On my terminal I see several chunks of text, separated by lines of -. One of those chunks says:

"Silverblue can be upgraded between major versions using the ostree command."

Note that this is not an answer to our question yet! This is just a quote from the Fedora docs. And this is precisely the sort of quote we want to supply to the LLM so that it can answer our question. Recall the example above about "I have a green box with a red ball in it"? The statement the RAG engine found about ostree is the equivalent for this question about upgrading Fedora Silverblue. We must now pass it on to the LLM so the LLM can use it to answer our question.

Hooking it in: Connecting the RAG database to the AI

Later in this article you'll find talk.sh. talk.sh is our local, open source, LLM-based, verbally communicating AI, and it is just a bash script. To run it yourself you need to install a few components; this blog walks you through the whole process. The talk.sh script gets voice input, turns that into text, splices that text into a prompt which is then sent to the LLM, and finally speaks back the response.

To plug the RAG results into the LLM we edit the prompt. Look at step 3 in talk.sh and you see we are injecting the RAG results using the variable $CONTEXT. This way when we ask the LLM a question, it will respond to a prompt that basically says "You are a helper. The Fedora Docs says ostree is how you upgrade Fedora Silverblue. Answer this question: How do you upgrade Fedora Silverblue?"
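Stripped of the audio handling, the splice in steps 3 and 4 of talk.sh boils down to plain string interpolation. A reduced sketch using the same docs2db-api invocation:

QUESTION="What is the recommended tool for upgrading between major releases on Fedora Silverblue"

# Step 3 (reduced): ask the RAG database for chunks relevant to the question
CONTEXT=$(uvx --python 3.12 docs2db-api query "$QUESTION" \
  --format text --max-chars 2000 --no-refine)

# Step 4 (reduced): splice the chunks into the prompt that goes to the LLM
PROMPT="You are a helper.
Relevant Fedora Documentation:
$CONTEXT
Answer this question: $QUESTION"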

Note: talk.sh is also available here:
https://gist.github.com/Lifto/2fcaa2d0ebbd8d5c681ab33e7c7a6239

Testing it

Run talk.sh and ask:

"What is the recommended tool for upgrading between major releases on Fedora Silverblue"

And we get:

"Ostree command is recommended for upgrading Fedora Silver Blue between major releases. Do you need guidance on using it?"

Sounds good to me!

Knowing things

Our AI can now know the knowledge contained in documents. This particular technique, RAG (Retrieval Augmented Generation), adds relevant data from an ingested source to a prompt before sending that prompt to the LLM. The result of this is that the LLM generates its response in consideration of this data.

Try it yourself! Ingest a library of documents and have your AI answer questions with its newfound knowledge!


AI Attribution: The convert.sh and talk.sh scripts in this article were written by ChatGPT 5.2 under my direction and review. The featured image was generated using Google Gemini.

convert.sh

#!/usr/bin/env bash
# Convert the AsciiDoc sources under modules/ to HTML in an adjacent quick-docs-html folder
OUT_DIR="$PWD/../quick-docs-html"
mkdir -p "$OUT_DIR"

podman run --rm \
  -v "$PWD:/work:Z" \
  -v "$OUT_DIR:/out:Z" \
  -w /work \
  docker.io/asciidoctor/docker-asciidoctor \
  bash -lc '
    set -u
    ok=0
    fail=0
    while IFS= read -r -d "" f; do
      rel="${f#./}"
      out="/out/${rel%.adoc}.html"
      mkdir -p "$(dirname "$out")"
      echo "Converting: $rel"
      if asciidoctor -o "$out" "$rel"; then
        ok=$((ok+1))
      else
        echo "FAILED: $rel" >&2
        fail=$((fail+1))
      fi
    done < <(find modules -type f -path "*/pages/*.adoc" -print0)

    echo
    echo "Done. OK=$ok FAIL=$fail"
  '

talk.sh

#!/usr/bin/env bash

set -e

# Path to audio input
AUDIO=input.wav

# Step 1: Record from mic
echo "🎙 Speak now..."
arecord -f S16_LE -r 16000 -d 5 -q "$AUDIO"

# Step 2: Transcribe using whisper.cpp
TRANSCRIPT=$(./whisper.cpp/build/bin/whisper-cli \
  -m ./whisper.cpp/models/ggml-base.en.bin \
  -f "$AUDIO" \
  | grep '^\[' \
  | sed -E 's/^\[[^]]+\][[:space:]]*//' \
  | tr -d '\n')
echo "🗣 $TRANSCRIPT"

# Step 3: Get relevant context from RAG database
echo "📚 Searching documentation..."
CONTEXT=$(uv tool run --python 3.12 docs2db-api query "$TRANSCRIPT" \
  --format text \
  --max-chars 2000 \
  --no-refine \
  2>/dev/null || echo "")

if [ -n "$CONTEXT" ]; then
  echo "📄 Found relevant documentation:"
  echo "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
  echo "$CONTEXT"
  echo "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
else
  echo "📄 No relevant documentation found"
fi

# Step 4: Build prompt with RAG context
PROMPT="You are Brim, a steadfast butler-like advisor created by Ellis. 
Your pronouns are they/them. You are deeply caring, supportive, and empathetic, but never effusive. 
You speak in a calm, friendly, casual tone suitable for text-to-speech. 
Rules: 
- Reply with only ONE short message directly to Ellis. 
- Do not write any dialogue labels (User:, Assistant:, Q:, A:), or invent more turns.
- ≤100 words.
- If the documentation below is relevant, use it to inform your answer.
- End with a gentle question, then write <eor> and stop.
Relevant Fedora Documentation:
$CONTEXT
User: $TRANSCRIPT
Assistant:"

# Step 5: Get LLM response using llama.cpp
RESPONSE=$(
  LLAMA_LOG_VERBOSITY=1 ./llama.cpp/build/bin/llama-completion \
    -m ./llama.cpp/models/microsoft_Phi-4-mini-instruct-Q4_K_M.gguf \
    -p "$PROMPT" \
    -n 150 \
    -c 4096 \
    -no-cnv \
    -r "<eor>" \
    --simple-io \
    --color off \
    --no-display-prompt
)

# Step 6: Clean up response
RESPONSE_CLEAN=$(echo "$RESPONSE" | sed -E 's/<eor>.*//I')
RESPONSE_CLEAN=$(echo "$RESPONSE_CLEAN" | sed -E 's/^[[:space:]]*Assistant:[[:space:]]*//I')

echo ""
echo "🤖 $RESPONSE_CLEAN"

# Step 7: Speak the response
echo "$RESPONSE_CLEAN" | espeak

06 Feb 2026 8:00am GMT

Remi Collet: 💎 PHPUnit 13


RPMs of PHPUnit version 13 are available in the remi repository for Fedora ≥ 42 and Enterprise Linux (CentOS, RHEL, Alma, Rocky...).

Documentation:

ℹ️ This new major version requires PHP ≥ 8.4 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 8, 9, 10, 11, and 12.

Installation:

dnf --enablerepo=remi install phpunit13

Notice: This tool is an essential component of PHP QA in Fedora. This version should be available soon in the Fedora ≥ 43 official repository (19 new packages).

06 Feb 2026 7:59am GMT

05 Feb 2026

Fedora People

Christof Damian: Friday Links 26-05

05 Feb 2026 11:00pm GMT

Fedora Infrastructure Status: Fedora 44 Mass Branching

05 Feb 2026 6:45pm GMT

Vedran Miletić: The follow-up

05 Feb 2026 2:00pm GMT

Vedran Miletić: The academic and the free software community ideals

05 Feb 2026 2:00pm GMT

Vedran Miletić: Should I do a Ph.D.?

05 Feb 2026 2:00pm GMT

Vedran Miletić: Open-source magic all around the world

05 Feb 2026 2:00pm GMT

Vedran Miletić: Markdown vs reStructuredText for teaching materials

05 Feb 2026 2:00pm GMT

Vedran Miletić: Joys and pains of interdisciplinary research

05 Feb 2026 2:00pm GMT

Vedran Miletić: Free to know: Open access and open source

05 Feb 2026 2:00pm GMT

Vedran Miletić: Fly away, little bird

05 Feb 2026 2:00pm GMT

Vedran Miletić: Celebrating Graphics and Compute Freedom Day

05 Feb 2026 2:00pm GMT