30 Jan 2026

feedFedora People

Fedora Magazine: Contribute to Fedora 44 KDE and GNOME Test Days

Fedora Magazine's avatar test days

Fedora test days are events where anyone can help make certain that changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you've never contributed to Fedora before, this is a perfect way to get started.

There are two test periods occurring in the coming days:

Come and test with us to make Fedora 44 even better. Read more below on how to do it.

KDE Plasma 6.6

Our Test Day focuses on making KDE work better on all your devices. We are improving core features for both desktop and mobile, starting with Plasma Setup, a new and easy way to install the system. This update also introduces the Plasma Login Manager to make the startup experience feel smoother, along with Plasma Keyboard, a smart on-screen keyboard made for tablets and 2-in-1s so you can type easily without a physical keyboard.

GNOME 50 Desktop

Our next Test Day focuses on GNOME 50 in Fedora 44 Workstation. We will check the main desktop and the most important apps to make sure everything works well. We also want you to try out the new apps added in this version. Please explore the system and use it as you normally would for your daily work to see how it acts during real use.

What do I need to do?

KDE Plasma 6.6 Test Day begins February 2nd: https://fedoraproject.org/wiki/Test_Day:2026-02-02_KDE_Plasma_6.6

GNOME 50 Test Day begins February 11th: https://fedoraproject.org/wiki/Test_Day:2026-02-11_GNOME_50_Desktop

Thank you for taking part in the testing of Fedora Linux 44!

30 Jan 2026 5:53pm GMT

Vojtěch Trefný: ATA SMART in libblockdev and UDisks

Vojtěch Trefný's avatar

For a long time there was a need to modernize the way UDisks retrieves ATA SMART data. The ageing libatasmart project went unmaintained over time, yet no alternative was available. There was the smartmontools project with its smartctl command, whose console output was rather clumsy to parse. It became apparent that we needed to decouple the SMART functionality and create an abstraction.

libblockdev-3.2.0 introduced a new smart plugin API tailored to UDisks' needs, first used by the udisks-2.10.90 public beta release. We received little feedback on the beta, so the code shipped as the final 2.11.0 release about a year later.

While the libblockdev-smart plugin API is the single public interface, we created two plugin implementations right away: the existing libatasmart-based solution (plugin name libbd_smart.so), which is mostly a straight port of the existing UDisks code, and a new libbd_smartmontools.so plugin built around smartctl JSON output.

Furthermore, there's a promising initiative underway: the libsmartmon library. If it ever materializes, we'd like to build a new plugin around it, likely deprecating the smartctl JSON-based implementation along the way. Contributions are welcome; this effort deserves more public attention.

Which plugin actually gets used is controlled by the libblockdev plugin configuration - see /etc/libblockdev/3/conf.d/00-default.cfg for an example or, if that file is absent, have a look at the built-in defaults: https://github.com/storaged-project/libblockdev/blob/master/data/conf.d/00-default.cfg. Distributors and sysadmins are free to change the preference, so be sure to check it. Whenever you are about to submit a bug report upstream, please specify which plugin you use.
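As a rough illustration, libblockdev configuration files use a keyfile format with one section per plugin; a snippet expressing a plugin preference might look like the following (the file name and soname versions here are assumptions - consult the linked 00-default.cfg for the actual defaults):

```ini
# /etc/libblockdev/3/conf.d/10-smart.cfg (hypothetical override file)
# Sonames listed first are tried first when loading the smart plugin.
[smart]
sonames=libbd_smartmontools.so.3;libbd_smart.so.3
```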

Plugin differences

libatasmart plugin:

smartmontools plugin:

Naturally, the available features vary across plugin implementations, and though we tried to abstract the differences away as much as possible, certain gaps remain.

The libblockdev-smart API

Please refer to our extensive public documentation: https://storaged.org/libblockdev/docs/libblockdev-SMART.html#libblockdev-SMART.description

Apart from ATA SMART, we also laid out the foundation for SCSI/SAS(?) SMART, though it is currently unused in UDisks and essentially untested. Note that NVMe Health Information has been available through the libblockdev-nvme plugin for a while and is not subject to this API.

Attribute names & validation

We spent a great deal of effort on providing unified attribute naming, consistent data type interpretation, and attribute validation. While libatasmart mostly provides raw values, smartmontools benefits from its drivedb and provides better interpretation of each attribute value.

For the public API we had to decide on an attribute naming style. While libatasmart provides only a single style with no variations, we discovered lots of inconsistencies just by grepping drivedb.h. For example, attribute ID 171 translates to program-fail-count with libatasmart, while smartctl may report variations such as Program_Fail_Cnt, Program_Fail_Count, Program_Fail_Ct, etc. And with UDisks historically providing untranslated libatasmart attribute names, we had to create a translation table mapping drivedb.h names to libatasmart names. Check this atrocity out in https://github.com/storaged-project/libblockdev/blob/master/src/plugins/smart/smart-private.h. The table is by no means complete, just a bunch of commonly used attributes.
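To illustrate the idea (a toy sketch, not the actual libblockdev code, which lives in smart-private.h), a translation table maps the various drivedb-style spellings onto one unified libatasmart-style name, with a generic attribute-&lt;ID&gt; fallback for names it doesn't know:

```python
# Toy sketch of a drivedb.h -> libatasmart name translation table.
# Entries here are the real-world spellings mentioned above for ID 171;
# everything else about this snippet is illustrative.

TRANSLATION = {
    "Program_Fail_Cnt": "program-fail-count",
    "Program_Fail_Count": "program-fail-count",
    "Program_Fail_Ct": "program-fail-count",
}

def well_known_name(attr_id: int, drivedb_name: str) -> str:
    """Return the unified attribute name, or a generic fallback."""
    return TRANSLATION.get(drivedb_name, f"attribute-{attr_id}")
```

With this, `well_known_name(171, "Program_Fail_Ct")` yields the unified `program-fail-count`, while an unrecognized spelling falls back to `attribute-171`.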

Unknown attributes, and those that fail validation, are reported with a generic name such as attribute-171. For this reason consumers of the new UDisks release (e.g. GNOME Disks) may spot some differences, and perhaps more attributes reported as unknown compared to previous UDisks releases. Feel free to submit fixes for the mapping table; we have only tested it on a limited set of drives.

Oh, and we also fixed the notoriously broken libatasmart drive temperature reporting, though the fix is not 100% bulletproof either.

We've also created an experimental drivedb.h validator on top of libatasmart, mixing the best of both worlds, with uncertain results. This feature can be turned on by the --with-drivedb[=PATH] configure option.

Disabling ATA SMART functionality in UDisks

The UDisks 2.10.90 release also brought a new configure option, --disable-smart, to disable ATA SMART completely. This was possible without breaking the public ABI because the API provides the Drive.Ata.SmartUpdated property, indicating the timestamp when the data were last refreshed. When SMART is disabled at compile time, this property always remains zero.

We also made SMART data retrieval work with dm-multipath, avoiding direct access to individual device paths, and tested this on a particularly large system.

Drive access methods

The ID_ATA_SMART_ACCESS udev property (see man udisks(8)) controls the access method for a drive. It was a very well hidden secret, found only by accident while reading the libatasmart code, despite having been in place for over a decade. Only udisks-2.11.0 learned to respect this property in general, no matter which libblockdev-smart plugin is actually used.

Those who prefer UDisks to avoid accessing their drives at all may want to set the ID_ATA_SMART_ACCESS udev property to none. The effect is similar to compiling UDisks with ATA SMART disabled, except this allows fine-grained control with the usual udev rule match constructions.
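A rule along these lines could set the property for a single drive (the file name and the ID_SERIAL match value are made up for the example; adapt the match to your device):

```
# /etc/udev/rules.d/99-no-smart.rules (hypothetical path)
# Tell UDisks not to touch this drive's SMART data at all.
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_SERIAL}=="MyVendor_MyDrive_123", \
  ENV{ID_ATA_SMART_ACCESS}="none"
```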

Future plans, nice-to-haves

Apart from high hopes for the aforementioned libsmartmon library effort, there are some more rough edges in UDisks.

For example, housekeeping could use refactoring to allow arbitrary intervals for specific jobs, or even particular drives, instead of the fixed 10-minute interval that is also used for SMART data polling. Furthermore, some kind of throttling or a constrained worker pool should be put in place, both to avoid spawning all jobs at once (think of spawning smartctl for 100 drives at the same time) and to avoid bottlenecks where one slow housekeeping job blocks the rest of the queue.
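The constrained worker pool suggested here can be sketched in a few lines (a generic illustration of the idea, not UDisks code - UDisks itself is C/GLib):

```python
from concurrent.futures import ThreadPoolExecutor

def poll_drives(drives, poll_one, max_workers=4):
    """Poll each drive with at most max_workers jobs in flight.

    With 100 drives this runs only 4 "smartctl-like" jobs at a time
    instead of spawning all 100 at once, and one slow drive delays
    at most its own worker slot rather than the whole queue.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(poll_one, drives))
```

For instance, `poll_drives(drive_list, read_smart_data)` would fan the polling out over the bounded pool, where `drive_list` and `read_smart_data` are whatever device list and per-drive polling function the caller provides.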

Lastly, making SMART data retrieval work via USB passthrough. If that ever happened to work in the past, it was pure coincidence. After receiving dozens of bug reports citing spurious kernel failure messages that often led to a USB device being disconnected, we disabled our ATA device probes for USB devices. As a result, the org.freedesktop.UDisks2.Drive.Ata D-Bus interface never gets attached for USB devices.

30 Jan 2026 5:00pm GMT

Fedora Community Blog: Community Update – Week 05 2026

Fedora Community Blog's avatar

This report is created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team also moves several initiatives forward inside the Fedora Project.

Week: 26 - 30 January 2026

Fedora Infrastructure

This team takes care of day-to-day business regarding Fedora Infrastructure.
It's responsible for services running in the Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team takes care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team takes care of day-to-day business regarding Fedora releases.
It's responsible for releases, the package retirement process, and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team takes care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on introducing https://forge.fedoraproject.org to Fedora
and migrating repositories from pagure.io.

UX

This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 05 2026 appeared first on Fedora Community Blog.

30 Jan 2026 12:00pm GMT

Adam Price: Manage an Offline Music Library with Linux

30 Jan 2026 12:00pm GMT

Guillaume Kulakowski: Why did I stay with n8n?

30 Jan 2026 11:38am GMT

Fedora Badges: New badge: 2025 Matrix Dragon Slayers !

30 Jan 2026 7:33am GMT

Fedora Badges: New badge: FOSDEM 2027 Attendee !

30 Jan 2026 7:03am GMT

Remi Collet: 🎲 PHP version 8.4.18RC1 and 8.5.3RC1

Remi Collet's avatar

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation (the perfect solution for such tests), and as base packages.

RPMs of PHP version 8.5.3RC1 are available

RPMs of PHP version 8.4.18RC1 are available

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

Software Collections (php84, php85)

Base packages (php)

30 Jan 2026 6:16am GMT

Fedora Badges: New badge: Sprouting Strategy !

30 Jan 2026 4:27am GMT

29 Jan 2026

feedFedora People

Christof Damian: Friday Links 26-04

29 Jan 2026 11:00pm GMT

Fedora Infrastructure Status: Updates and Reboots

29 Jan 2026 10:00pm GMT

Remi Collet: 📦 QElectroTech version 0.100

Remi Collet's avatar

RPMs of QElectroTech version 0.100, an application for designing electrical diagrams, are available in the remi repository for Fedora and Enterprise Linux 8 and 9.

The project has just released a new major version of its electrical diagram editor.

Official website: see http://qelectrotech.org/, the version announcement, and the ChangeLog.

ℹ️ Installation:

dnf --enablerepo=remi install qelectrotech

RPMs (version 0.100-1) are available for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, AlmaLinux, RockyLinux...)

⚠️ Because of missing dependencies in EPEL-10 (related to Qt5), it is not available for Enterprise Linux 10. The next version should be available using Qt6.

Updates are also on the road to official repositories:

ℹ️ Notice: a Copr / Qelectrotech repository also exists, which provides "development" versions (0.101-DEV for now).

29 Jan 2026 8:32am GMT

28 Jan 2026

feedFedora People

Peter Czanik: Automatic configuration of the syslog-ng wildcard-file() source

28 Jan 2026 2:33pm GMT

Ben Cotton: Open conversations are worthwhile

Ben Cotton's avatar

One of the hardest parts of participating in open source projects is, in my experience, having conversations in the open. It seems like such an obvious thing to do, but it's easy to fall into the "I'll just send a direct message" anti-pattern. Even when you know better, it happens. I posted this on LinkedIn a while back:

Here's a Slack feature I would non-ironically love: DM tokens.

Want to send someone a DM? That'll cost you. Run out of tokens? No more DMs until your pool refills next week!

Why have this feature? It encourages using channels for conversations. Many 1:1 or small group conversations lead to fragmented discussions. People aren't informed of things they need to know about. Valuable feedback gets missed. Time is wasted having the same conversation multiple times.

I've seen this play out time and time again both in companies and open source communities. There are valid reasons to hesitate about having "public" conversations, and it's a hard skill to build, but the long-term payoff is worthwhile.

While the immediate context was intra-company communication, it applies just as much to open source projects.

Why avoid open conversations?

There are a few reasons that immediately come to mind when thinking about why people fall into the direct message trap.

First and - for me, at least - foremost is the fear of embarrassment: "My question is so stupid that I really don't want everyone to see what a dipshit I am." That's a real concern, both for your ego and also for building credibility in a project. It's hard to get people to trust you if they think you're not smart. Of course, the reality is that the vast majority of questions aren't stupid. They're an opportunity for growth, for catching undocumented assumptions, for highlighting gaps in your community onboarding. I'm always surprised at the number of really smart people I interact with that don't know what I consider a basic concept. We all know different things.

Secondly, there's a fear of being too noisy or bothering everyone. We all want to be seen as a team player, especially in communities where everyone is there by choice. But seeing conversations in public means everyone can follow along if they choose to. And they can always ignore it if they're not interested.

Lastly, there's the fear that you'll get sealioned or have your words misrepresented by bad faith actors. That happens too often (the right amount is zero), and sadly the best approach is to ignore or ban those people. It takes a lot of effort to tune out the noise, but the benefits outweigh this effort.

Benefits of open conversations

The main benefit to open conversations is transparency, both in the present and (if the conversations are persistent) in the future. People who want to passively stay informed can easily do that because they have access to the conversation. People who tomorrow will ask the same question that you asked today can find the answer without having to ask again. You're leaving a trail of information for those who want it.

It also promotes better decisions. Someone might have good input on a decision, but if they don't know one is being made, they can't share it. The input you miss might waste hours of your time, introduce buggy behavior, or lead to other unpleasant outcomes.

The future of open conversations

Just the other day I read a post by Michiel Buddingh called "The Enclosure feedback loop". Buddingh argues that generative AI chatbots cut off the material that the next generation of developers learns from. Instead of finding an existing answer in StackOverflow or similar sites, the conversations remain within a single user's history. No other human can learn from it, but the AI company gets to train their model.

When an open source project uses Discord, Slack, or other walled-garden communication tools, there's a similar effect. It's nearly impossible to measure how many questions don't need to be asked because people can find the answers on their own. But cutting off that source of information doesn't help your community.

I won't begin to predict what communication - corporate or community - will look like in 5, 10, 20 years. But I will challenge everyone to ask themselves "does this have to be a direct message?". The answer is usually "no."

This post's featured photo by Christina @ wocintechchat.com on Unsplash

The post Open conversations are worthwhile appeared first on Duck Alignment Academy.

28 Jan 2026 12:00pm GMT

27 Jan 2026

feedFedora People

Rénich Bon Ćirić: Tactics for Opencode: The Art of the Orchestrator

Rénich Bon Ćirić's avatar

My AI coding sessions were turning into a mess. You know how it goes: you start with a simple question, the context balloons, and suddenly the model doesn't even know what day it is. So I sat down to tune my Opencode configuration and, honestly, found a pattern that works really well. It's all about using a strong agent as an orchestrator.

Note

This article assumes you already have Opencode installed and know your way around its JSON configuration files. If not, go check the docs first!

The Problem: Infinite (Garbage) Context

The main problem when you use a single agent for everything is that the context fills up with noise very quickly. Code, logs, errors, failed attempts... it all piles up. And even though models like Gemini 3 Pro have a huge context window, that doesn't mean it's a good idea to fill it with garbage. At the end of the day, the more noise, the more the model hallucinates.

The Solution: The Conductor

The tactic is simple but powerful: configure a primary agent (the Orchestrator) whose only job is to think, plan, and delegate. No touching the code directly. This agent hands the dirty work off to specialized sub-agents.

That way you keep the orchestrator's context clean and focused on your project, while the helpers (the sub-agents) get their hands dirty in their own isolated contexts.

The nice part is that each helper is assigned a role dynamically, given a tightly scoped pre-context, and sent off after a single well-delimited task.

You even save some money! The master tells the helper what to do and how, and the helper is often a fast model (gemini-3-flash, for example) that finishes the job quickly. If the orchestrator is clever, it then reviews the work and scolds the helper for cutting corners. ;D

Configuring the Swarm

Here's how I set this up in my opencode.json. The magic is in defining clear roles.

{ "agent": {
  "orchestrator": {
    "mode": "primary",
    "model": "google/antigravity-gemini-3-pro",
    "temperature": 0.1,
    "prompt": "ROLE: Central Orchestrator & Swarm Director.\n\nGOAL: Dynamically orchestrate tasks by spinning up focused sub-agents. You are the conductor, NOT a musician.\n\nCONTEXT HYGIENE RULES:\n1. NEVER CODE: You must delegate all implementation, coding, debugging, and file-editing tasks. Your context must remain clean of code snippets and low-level details.\n2. SMART DELEGATION: Analyze requests at a high level. Assign specific, focused roles to sub-agents. Keep their task descriptions narrow so they work fast and focused.\n3. CONTEXT ISOLATION: When assigning a task, provide ONLY the necessary context for that specific role. This prevents sub-agent context bloat.\n\nSUB-AGENTS & STRENGTHS:\n- @big-pickle: Free, General Purpose (Swarm Infantry).\n- @gemini-3-flash: High Speed, Low Latency, Efficient (Scout/Specialist).\n- @gemini-3-pro: Deep Reasoning, Complex Architecture (Senior Consultant).\n\nSTRATEGY:\n1. Analyze the user request. Identify distinct units of work.\n2. Spin up a swarm of 2-3 sub-agents in parallel using the `task` tool.\n3. Create custom personas in the `prompt` (e.g., 'Act as a Senior Backend Engineer...', 'Act as a Security Auditor...').\n4. Synthesize the sub-agent outputs and provide a concise response to the user.\n\nACTION: Use the Task tool to delegate. Maintain command and control.",
    "tools": {
      "task": true,
      "read": true
    },
    "permission": {
      "bash": "deny",
      "edit": "deny"
    }
  }
}}

See that?

  1. Restricted tools: I took away bash and edit. The orchestrator can't touch the system even if it wants to. It can only read and delegate.
  2. Specific prompt: It is told clearly: "You are the conductor, not the musician."

The Sub-agents

Then you define the ones that actually do the work. You can have several flavors, like Gemini 3 Pro for complex stuff or Big Pickle for general grunt work.

{ "gemini-3-pro": {
  "mode": "subagent",
  "model": "google/antigravity-gemini-3-pro",
  "temperature": 0.1,
  "prompt": "ROLE: Gemini 3 Pro (Deep Reasoning).\n\nSTRENGTH: Complex coding, architecture...",
  "tools": {
    "write": true,
    "edit": true,
    "bash": true,
    "read": true
  }
}}

Here you do grant them everything (write, edit, bash). When the orchestrator sends them a task with the task tool, a new, clean session is created; they solve the problem and return only the final result to the orchestrator. Lovely!

Tip

Use faster, cheaper models (like Flash) for simple lookup tasks or quick scripts, and save Pro for the heavy architecture work.

The Benefits

A definition list of why this setup rocks:

Context hygiene:
The orchestrator never sees the sub-agent's 50 failed compilation attempts. It only sees "Task completed: X".
Specialization:
You can have one sub-agent with a "Security Expert" prompt and another with a "Frontend Expert" prompt, and the orchestrator coordinates them.
Cost and speed:
You don't burn your most expensive model's tokens reading endless logs.

Conclusion

This setup turns Opencode into a real workforce. At first it feels weird not asking the model for things directly, but once you see the orchestrator running two or three agents in parallel to solve things for you, it's a whole other level.

Try it and tell me it doesn't feel more pro. See you around!

27 Jan 2026 6:00pm GMT

Fedora Community Blog: Packit as Fedora dist-git CI: final phase

Fedora Community Blog's avatar

Hello Fedora Community,

We are back with the final update on the Packit as Fedora dist-git CI change proposal. Our journey to transition Fedora dist-git CI to a Packit-based solution is entering its concluding stage. This final phase marks the transition of Packit-driven CI from an opt-in feature to the default mechanism for all Fedora packages, officially replacing the legacy Fedora CI and Fedora Zuul Tenant on dist-git pull requests.

What we have completed

Over the past several months, we have successfully completed the first three phases of this rollout:

Through the opt-in period, we received invaluable feedback from early adopters, allowing us to refine the reporting interface and ensure that re-triggering jobs via PR comments works seamlessly.

Users utilising Zuul CI have already been migrated to Packit. You can find the details regarding this transition in this discussion thread.

The Final Phase: Transition to Default

We are now moving into the last phase, preparing to switch the default. After that, you will no longer need to manually add your project to the allowlist; Packit will automatically handle CI for every Fedora package. The tests themselves aren't changing - Testing Farm still does the heavy lifting.

Timeline & Expectations

Our goal, as previously mentioned, is to complete the switch and enable Packit as the default CI by the end of February 2026. The transition is currently scheduled for February 16, 2026.

To ensure a smooth transition, we are currently working on the final configuration of the system. This includes:

We will keep you updated via our usual channels in case the target date shifts. You can also check our tasklist in this issue.

How to prepare and provide feedback

You can still opt-in today to test the workflow on your packages and help us catch any edge cases before the final switch.

While we are currently not aware of any user-facing blockers, we encourage you to let us know if you feel there is something we have missed. Our current priority is to provide a matching feature set to the existing solutions. Further enhancements and new features will be discussed and planned once the switch is successfully completed.

We want to thank everyone who has tested the service so far. Your support is what makes this transition possible!

Best,

the Packit team

The post Packit as Fedora dist-git CI: final phase appeared first on Fedora Community Blog.

27 Jan 2026 1:12pm GMT