29 Jan 2026
Fedora People
Fedora Infrastructure Status: Updates and Reboots
29 Jan 2026 10:00pm GMT
28 Jan 2026
Fedora People
Ben Cotton: Open conversations are worthwhile
One of the hardest parts of participating in open source projects is, in my experience, having conversations in the open. It seems like such an obvious thing to do, but it's easy to fall into the "I'll just send a direct message" anti-pattern. Even when you know better, it happens. I posted this on LinkedIn a while back:
Here's a Slack feature I would non-ironically love: DM tokens.
Want to send someone a DM? That'll cost you. Run out of tokens? No more DMs until your pool refills next week!
Why have this feature? It encourages using channels for conversations. Lots of 1:1 or small-group conversations lead to fragmented discussions. People aren't informed of things they need to know about. Valuable feedback gets missed. Time is wasted having the same conversation multiple times.
I've seen this play out time and time again both in companies and open source communities. There are valid reasons to hesitate about having "public" conversations, and it's a hard skill to build, but the long-term payoff is worthwhile.
While the immediate context was intra-company communication, it applies just as much to open source projects.
Why avoid open conversations?
There are a few reasons that immediately come to mind when thinking about why people fall into the direct message trap.
First and - for me, at least - foremost is the fear of embarrassment: "My question is so stupid that I really don't want everyone to see what a dipshit I am." That's a real concern, both for your ego and for your credibility in a project. It's hard to get people to trust you if they think you're not smart. Of course, the reality is that the vast majority of questions aren't stupid. They're an opportunity for growth, for catching undocumented assumptions, for highlighting gaps in your community onboarding. I'm always surprised at the number of really smart people I interact with who don't know what I consider a basic concept. We all know different things.
Second, there's a fear of being too noisy or of bothering everyone. We all want to be seen as team players, especially in communities where everyone is there by choice. But having conversations in public means everyone can follow along if they choose to. And they can always ignore it if they're not interested.
Lastly, there's the fear that you'll get sealioned or have your words misrepresented by bad faith actors. That happens too often (the right amount is zero), and sadly the best approach is to ignore or ban those people. It takes a lot of effort to tune out the noise, but the benefits outweigh this effort.
Benefits of open conversations
The main benefit to open conversations is transparency, both in the present and (if the conversations are persistent) in the future. People who want to passively stay informed can easily do that because they have access to the conversation. People who tomorrow will ask the same question that you asked today can find the answer without having to ask again. You're leaving a trail of information for those who want it.
It also promotes better decisions. Someone might have good input on a decision, but if they don't know a decision is being made, they can't share it. The input you miss might waste hours of your time, introduce buggy behavior, or lead to other unpleasant outcomes.
The future of open conversations
Just the other day I read a post by Michiel Buddingh called "The Enclosure feedback loop". Buddingh argues that generative AI chatbots cut off the material that the next generation of developers learns from. Instead of answers accumulating on StackOverflow or similar sites where others can find them, the conversations remain within a single user's chat history. No other human can learn from them, but the AI company gets to train its model.
When an open source project uses Discord, Slack, or other walled-garden communication tools, there's a similar effect. It's nearly impossible to measure how many questions don't need to be asked because people can find the answers on their own. But cutting off that source of information doesn't help your community.
I won't begin to predict what communication - corporate or community - will look like in 5, 10, 20 years. But I will challenge everyone to ask themselves "does this have to be a direct message?". The answer is usually "no."
This post's featured photo by Christina @ wocintechchat.com on Unsplash
The post Open conversations are worthwhile appeared first on Duck Alignment Academy.
28 Jan 2026 12:00pm GMT
27 Jan 2026
Fedora People
Rénich Bon Ćirić: Tactics for Opencode: The Art of the Orchestrator
My AI coding sessions were turning into a mess. You know how it goes: you start with a simple question, the context balloons, and suddenly the model doesn't even know what day it is. So I set about tuning my Opencode configuration and, honestly, found a pattern that works ridiculously well. It comes down to using a strong agent as an orchestrator.
Note
This article assumes you already have Opencode installed and know your way around its JSON configuration files. If not, go take a look at the docs first!
The Problem: Infinite (Garbage) Context
The main problem when you work with a single agent for everything is that the context fills up with noise very quickly. Code, logs, errors, failed attempts... it all piles up. And even though models like Gemini 3 Pro have an enormous context window, that doesn't mean it's a good idea to fill it with garbage. At the end of the day, the more noise, the more the model hallucinates.
The Solution: The Orchestra Conductor
The tactic is simple but powerful: configure a main agent (the Orchestrator) whose only job is to think, plan, and direct. No poking at the code directly. This agent delegates the dirty work to specialized sub-agents.
That way you keep the orchestrator's context clean and focused on your project, while the helpers (sub-agents) get their hands dirty in their own isolated contexts.
The cool part is that each helper gets a dynamically assigned role and a tightly scoped pre-context, and then goes off after a single, well-delimited task.
You even save some cash! The master tells the helper what to do and how to do it, and since the helper is usually a fast model (gemini-3-flash, for example), it finishes the job quickly. If the orchestrator is feeling sharp, it then reviews the work and scolds the helper for doing a sloppy job. ;D
Configuring the Swarm
Check out how I set this up in my opencode.json. The magic is in defining clear roles.
{ "agent": {
"orchestrator": {
"mode": "primary",
"model": "google/antigravity-gemini-3-pro",
"temperature": 0.1,
"prompt": "ROLE: Central Orchestrator & Swarm Director.\n\nGOAL: Dynamically orchestrate tasks by spinning up focused sub-agents. You are the conductor, NOT a musician.\n\nCONTEXT HYGIENE RULES:\n1. NEVER CODE: You must delegate all implementation, coding, debugging, and file-editing tasks. Your context must remain clean of code snippets and low-level details.\n2. SMART DELEGATION: Analyze requests at a high level. Assign specific, focused roles to sub-agents. Keep their task descriptions narrow so they work fast and focused.\n3. CONTEXT ISOLATION: When assigning a task, provide ONLY the necessary context for that specific role. This prevents sub-agent context bloat.\n\nSUB-AGENTS & STRENGTHS:\n- @big-pickle: Free, General Purpose (Swarm Infantry).\n- @gemini-3-flash: High Speed, Low Latency, Efficient (Scout/Specialist).\n- @gemini-3-pro: Deep Reasoning, Complex Architecture (Senior Consultant).\n\nSTRATEGY:\n1. Analyze the user request. Identify distinct units of work.\n2. Spin up a swarm of 2-3 sub-agents in parallel using the `task` tool.\n3. Create custom personas in the `prompt` (e.g., 'Act as a Senior Backend Engineer...', 'Act as a Security Auditor...').\n4. Synthesize the sub-agent outputs and provide a concise response to the user.\n\nACTION: Use the Task tool to delegate. Maintain command and control.",
"tools": {
"task": true,
"read": true
},
"permission": {
"bash": "deny",
"edit": "deny"
}
}
}}
See what happened there?
- Restricted tools: I took away bash and edit. The orchestrator can't touch the system even if it wants to. It can only read and delegate.
- Specific prompt: It's told in no uncertain terms: "You are the conductor, not the musician."
The Sub-agents
Then you define the ones that will actually do the work. You can have several flavors, like Gemini 3 Pro for complex stuff or Big Pickle for general grunt work.
{ "gemini-3-pro": {
"mode": "subagent",
"model": "google/antigravity-gemini-3-pro",
"temperature": 0.1,
"prompt": "ROLE: Gemini 3 Pro (Deep Reasoning).\n\nSTRENGTH: Complex coding, architecture...",
"tools": {
"write": true,
"edit": true,
"bash": true,
"read": true
}
}}
Here you do give them permission for everything (write, edit, bash). When the orchestrator sends them a task via the task tool, a fresh, clean session is created; they solve the problem and return only the final result to the orchestrator. A thing of beauty!
Tip
Use faster, cheaper models (like Flash) for simple lookup tasks or quick scripts, and save Pro for the heavy architecture work.
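For example, a fast "scout" sub-agent could look like the sketch below. Fair warning: the model id google/antigravity-gemini-3-flash is a guess that simply mirrors the pro entry above, so adjust it to whatever your provider actually exposes.
{
  "gemini-3-flash": {
    "mode": "subagent",
    "model": "google/antigravity-gemini-3-flash",
    "temperature": 0.1,
    "prompt": "ROLE: Gemini 3 Flash (Scout).\n\nSTRENGTH: Fast lookups, small scripts, narrow and well-scoped tasks.",
    "tools": {
      "read": true,
      "bash": true
    }
  }
}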
The Benefits, My Friend
A quick rundown of why this rules:
- Context hygiene: The orchestrator never sees the sub-agent's 50 failed compile attempts. It only sees "Task completed: X".
- Specialization: You can have one sub-agent with a "Security Expert" prompt and another with a "Frontend Expert" prompt, and the orchestrator coordinates them.
- Cost and speed: You don't burn your most expensive model's tokens on reading endless logs.
Conclusion
This setup turns Opencode into a real workforce. At first it feels weird not to ask the model for things directly, but once you see the orchestrator running 2 or 3 agents in parallel to sort things out for you, wow, it's a whole other level.
Give it a try and tell me it doesn't feel more pro. Catch you later!
27 Jan 2026 6:00pm GMT
Fedora Community Blog: Packit as Fedora dist-git CI: final phase

Hello Fedora Community,
We are back with the final update on the Packit as Fedora dist-git CI change proposal. Our journey to transition Fedora dist-git CI to a Packit-based solution is entering its concluding stage. This final phase marks the transition of Packit-driven CI from an opt-in feature to the default mechanism for all Fedora packages, officially replacing the legacy Fedora CI and Fedora Zuul Tenant on dist-git pull requests.
What we have completed
Over the past several months, we have successfully completed the first three phases of this rollout:
- Phase 1: Introduced Koji scratch builds.
- Phase 2: Implemented standard installability checks.
- Phase 3: Enabled support for user-defined TMT tests via Testing Farm.
Through the opt-in period, we received invaluable feedback from early adopters, allowing us to refine the reporting interface and ensure that re-triggering jobs via PR comments works seamlessly.
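For instance, re-running checks is typically just a matter of posting a comment on the pull request. The line below is illustrative only; see the Packit documentation for the exact command set supported on dist-git:
/packit test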
Users utilising Zuul CI have already been migrated to Packit. You can find the details regarding this transition in this discussion thread.
The Final Phase: Transition to Default
We are now moving into the last phase, where we are preparing to switch to the default. After that, you will no longer need to manually add your project to the allowlist. Packit will automatically handle CI for every Fedora package. The tests themselves aren't changing - Testing Farm still does the heavy lifting.
Timeline & Expectations
Our goal, as previously mentioned, is to complete the switch and enable Packit as the default CI by the end of February 2026. The transition is currently scheduled for February 16, 2026.
To ensure a smooth transition, we are currently working on the final configuration of the system. This includes:
- Opt-out mechanism: While Packit will be the default, an opt-out mechanism will be available for packages with specialised requirements. This will be documented at packit.dev/fedora-ci.
- Documentation updates: Following the switch, we will also adjust official documentation in other relevant places, such as docs.fedoraproject.org/en-US/ci/, to reflect the new standard.
We will keep you updated via our usual channels in case the target date shifts. You can also check our tasklist in this issue.
How to prepare and provide feedback
You can still opt-in today to test the workflow on your packages and help us catch any edge cases before the final switch.
While we are currently not aware of any user-facing blockers, we encourage you to let us know if you feel there is something we have missed. Our current priority is to provide a matching feature set to the existing solutions. Further enhancements and new features will be discussed and planned once the switch is successfully completed.
- Bugs/Feature Requests: Please use our issue tracker.
- Discussion: Join the conversation on discussion.fedoraproject.org.
- Chat: Reach out to us in the #packit:fedora.im channel on Matrix.
We want to thank everyone who has tested the service so far. Your support is what makes this transition possible!
Best,
the Packit team
The post Packit as Fedora dist-git CI: final phase appeared first on Fedora Community Blog.
27 Jan 2026 1:12pm GMT
Brian (bex) Exelbierd: On EU Open Source Procurement: A Layered Approach
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
The European Commission has launched a consultation on the EU's future Open Source strategy. That combined with some comments by Joe Brockmeier made me think about this from a procurement perspective. Here's the core of my thinking: treat open source as recurring OpEx, not a box product. That means hiring contributors, contracting external experts, and funding internal IT so the EU participates rather than only purchases.
A lot of the reaction to this request has shown up in the form of suggestions for the EU to fund open source software companies and to pay maintainers. In this Mastodon exchange that I had with Joe, he points out that these comments ignore the realities of how procurement works, and that the processes vendors go through would, if followed by maintainers, be onerous and leave them in the precarious position of living contract to contract.
His prescription is that the EU should participate in communities by literally "rolling up [their] sleeves and getting directly involved." My reaction was to point out that doing these things has an indirect, at best, relationship to bottom-line metrics (profit, efficiency, cost, etc.) and that our government structures are not set up to reward this kind of thinking. In general people want to see their governments not be "wasteful", in a context where one person's waste is another's necessity.
As the exchange continued, Joe pointed out that "it's not FOSS that needs to change, it's the organizational thinking."
In the moment I took the conversation in a slightly different direction, but the core of this conversation stuck with me. I woke up this morning thinking about organizational change. I am sure I am not the first to think this way, but here's my articulation.
An underlying commentary, in my opinion, in many of the responses from the "pay the maintainers / fund open source" crowd is the application of a litmus test to the funded parties. Typically they want to exclude not only all forms of proprietary software, but also SaaS products that don't fully open their infrastructure management, products which rely on cloud services, large companies, companies that have traditionally been open source friendly but have been acquired (even if they are still open source friendly), and so on. These exclusions, no matter which you support, if any, tend to drive the use of open source software by an entity like the EU into a 100% self-executed motion. And, despite the presence of SaaS in that list, these conversations often treat open source software as a "box product" experience that the end-user must self-install in their own (private and presumably all open source) cloud.
A key element of most entities is that they procure the things that aren't uniquely their effort. A government procures email server software (and increasingly email as a service) because sending email isn't their unique effort; the work that email allows to happen is. There is an inherent disconnect between the effort, and therefore the expected cost, of getting email working so you can do your work, versus first becoming an email solution provider and expert and only then beginning the work you wanted to do. (A form of yak shaving, perhaps?)
While I am not sure I will reply to the EU Commission - I am a resident of the EU but not an EU citizen - I wanted to write to organize my thoughts.
Why Procurement Struggles With OSS
Software procurement is effectively the art of getting software:
- written
- packaged into a distributable consumable
- maintained
- advanced with new features as need arises
- installed and working
Over time the industry has become more adept at doing more of these things for their customers. Early software was all custom and then we got software that was reusable. Software companies became more common as entities became willing to pay for standardized solutions and we saw the rise of the "box product." SaaS has supplanted much of the installation and execution last-mile work that was the traditional effort of in-house IT departments. From an organizational perspective, these distinct areas of cost - some one-time and some recurring - have increasingly been rolled into a single, recurring cost. That is easier to budget and operate.
Bundling usually leads to discounting. Proprietary software companies control this whole stack and therefore can capture margin at multiple layers. This also allows them to create a discount when bundling different layers, because they can "rationalize" their layer-based profit calculations. Open source changes this equation. There is effectively no profit built into most layers because any profit-taking is competed away in a deliberate and wanted race to the bottom. When a company commercializes open source software, it has to build all of its profit (and the cost of being a company) into the few layers it controls. We have watched companies struggle to make this model work, in large part because it is hard, and easy to misunderstand. There is a whole aside I could write about how single-company open source makes this even worse, because it buries the cost of layers like writing and maintaining software into the layers that are company-controlled, but I won't, to keep this short. Just keep that context in mind. What this means, in the end, is that I believe procuring open source can sometimes lead, paradoxically, to an increase in cost versus procuring the layers separately … but only if you think broadly about procurement.
Too often we assume procurement == purchasing, but it doesn't have to mean that. Merriam-Webster reminds us that to procure is "to bring about or achieve (something) by care and effort." Therefore we could encourage entities like the EU to procure open source software by using a layered approach, with an outcome identical to procuring the same software in a non-open way at the same or lower cost. Open source doesn't need to save money; it just needs to not "waste" it.
The key is the rise of software as a service. From an accounting perspective, software as a service moves software expenses from a model of large one-time costs with smaller, if any, recurring costs to one of just recurring costs. The "Hotel California"1 reality of software as a service - the idea that recurring costs can be ended at-will - is an exciting one organizationally as it gives flexibility at controllable cost, but in practice exit is often constrained by vendor lock-in, data egress limits, and portability gaps.
The Layered OpEx Model
Here's how the EU can treat open source as a recurring cost:
1. Hire people to participate in the open source project. They are tasked with helping to maintain and advance software to keep it working and changing to meet EU needs. These people are, like most engineers at open source companies, paid to focus on the organization's needs. They differ from our typical view of contributors as people showing up to "scratch their own itch."
2. Enter into contracts with external parties to provide consulting and support beyond the internal team. These folks are there to give you diversity of thought and some guarantees. The internal team is, by definition, focused just on EU problems and has an install base sample size of one. External contractors will have a much larger scope of interest and install base sample size, as they work with multiple customers. Critically, this creates a funding channel for non-employees and speaks to the "pay the maintainers" crowd.
3. Continue to fund internal IT departments to handle the care and feeding of software and make it usable, instead of shifting this expense to a single-software solution vendor. These folks are distinct from the people in #1 above. They are experts in EU needs and understand the intersection of those needs and a multitude of software.
Every one of these expenses is recurring and can be ended at will, but only if ending them is something we are willing to knowingly accept. We already implicitly accept this when we buy from a company. The objections I expect are as follows. Before you read them, though, I want to define at-will. While it denotatively means "as one wishes : as or when it pleases or suits oneself", in our context we can extend this with "in a reasonable time frame" or "with known decision points."
Expected Objections
1. If you can terminate the people hired to participate in open source projects like this, they're living contract to contract. To this I say, yes in the sense that they don't have unlimited contracts, but no in the sense that they are still employees with employee benefits and protections, like notice periods. The big change is that they can be terminated solely due to changes in software needs.
2. But allowing for notice periods is expensive. EU employees are often perceived as more expensive than private sector ones or individual contractors. To this I say, maybe. But isn't that the point? Shouldn't we want to be in a place where we are not creating cost savings by reducing the quality of life for the humans involved?
3. If everything is either an employment agreement with a directed work product (do fixes/maintenance for our use case, or install and manage this software) or a support/consultancy contract, we aren't paying maintainers to be maintainers. To this I say, you're right. The mechanics of project maintenance should be borne by all of the project's participants and not by some special select few paid to do that work. There is a lot of room here to argue about specifics, but rise above it. The key thing this causes is that no one is paid to just "grind out features or maintenance" on a project that isn't used directly by a contributor. A key concept in open source has always been that people are there either to scratch their own itch or because they have a personal motivation to provide a solution to some group of users. This model pays for the first one and leaves the second to be the altruistic endeavor it is. Also, there are EU funds you can get to pay for altruistic endeavors :D.
4. This model doesn't explain how software originates. What happens when there is no open source project (yet)? To this I say, you're also right. This is a huge hole that needs more thought. Today we solve this with VC funding and profit-based funding. VC funding is predicated on ownership and being able to get a return on investment. If this model is successful, there is very little opportunity for what VCs need. However, profit-based funding, where an entity takes some of its profit and invests in new ideas (not features), can still exist, as the consulting agreements can, and likely should, include a profit component. Additionally, the EU and other entities can recognize a shared need through the consensus building and collaborative work of participating in open source software, and fund the creation of teams to go start projects. This relies on everyone giving the EU permission to take risks like this.
5. The cost of administering these three expense streams will exceed what you would pay an external vendor. To this I say, maybe, but it shouldn't matter. While I firmly believe this shouldn't be true, and that it should be possible for the EU to efficiently manage these costs for less than the sum of the profit-costs they would pay a company, I am willing to accept that the "expensive employees" of #2 above may change that. But just like above, I think that's partly the point.
6. Adopting this model will destroy the software industry and create economic disaster. To this I say, take a breath. The EU changing procurement models doesn't have the power to single-handedly destroy an industry. Even if every government adopted this, which they won't, the macro impact would likely be a shift in spend rather than a net loss. This model is practical only for the largest organizations; most entities will still need third-party vendors to bundle and manage solutions. If anything, this strengthens the open source ecosystem by providing a clear monetization path for experts, while leaving ample room for proprietary software where it adds unique value. Finally, the private sector is diverse; many companies and investors will continue to prefer traditional models. The goal here is to increase EU participation in a public good and reduce dependency, not to dismantle the software industry.
What To Ask The Commission
- When choosing software, the budget must include time for EU staff (new or existing reassigned) to contribute to the underlying open source projects.
- Keep strong in-house IT skills to ensure that deployed solutions meet needs and work together.
- Complement your staff with support/consultancy agreements to provide the accountability partnership you get from traditional vendors and to provide access to greater knowledge when needed.
- Make decisions based on your mission and goals, not your current inventory; be prepared to rearrange staffing when required to advance.
This was quickly written this morning to get it out of my head. There are probably holes in this and it may not even be all that original, but I think it works. As an American who has lived in the EU for 13+ years, I have come to trust government more and corporations less for a variety of reasons, but mostly because, broadly speaking, we tend to hold our government to a higher standard than we hold corporations.
I'm posting this in January 2026, just before FOSDEM. I'll be there and open for conversation. Find me on Signal as bexelbie.01.
1. Many software as a service agreements allow you to stop paying but still make true exit difficult due to data gravity, integrations, and proprietary features. In practice, you can "check out," but actually leaving is often costly and slow. ↩
27 Jan 2026 8:10am GMT
Chris Short: Desk Setup, January 2026
27 Jan 2026 5:00am GMT
26 Jan 2026
Fedora People
Kushal Das: replyfast a python module for signal
26 Jan 2026 12:16pm GMT
24 Jan 2026
Fedora People
Kevin Fenzi: misc fedora bits for third week of jan 2026
Another week another recap here in longer form. I started to get all caught up from the holidays this week, but then got derailed later in the week sadly.
Infra tickets migrated to new forgejo forge
On tuesday I migrated our https://pagure.io/fedora-infrastructure (pagure) repo over to https://forge.fedoraproject.org/infra/tickets/ (forgejo).
Things went mostly smoothly; the migration tool is pretty slick, and I borrowed a bunch from the checklist that the quality folks put together (https://forge.fedoraproject.org/quality/tickets/issues/836). Thanks Adam and Kamil!
There are still a few outstanding things I need to do:
- We need to update our docs everywhere they mention the old url; I am working on a pull request for that.
- I cannot seem to get the fedora-messaging hook working right. It might well be something I did wrong, but it is just not working.
- Of course, no private issues were migrated; hopefully someday (soon!) we will be able to just migrate them over once there's support in forgejo.
- We could likely tweak the templates a bit more.
Once I sort out the fedora-messaging hook I should be able to look at moving our ansible repo over, which will be nice. forgejo's pull request reviews are much nicer, and we may be able to leverage lots of other fun features there.
Mass rebuild finished
Even though it started late (it was supposed to start last wed, but didn't really get going until friday morning), it finished over the weekend pretty easily. There was some cleanup and such and then it was tagged in.
I updated my laptop and everything just kept working. I would like to shout out that openqa caught a mozjs bug landing (again) that would have broken gdm, so that got untagged and sorted and I never hit it here.
Scrapers redux
Wed night I noticed that one of our two network links in the datacenter was topping out (it's a 10Gbit link). I looked a bit, but put it down to the mass rebuild landing and causing everyone to sync all of rawhide.
Thursday morning there were more reports of issues with the master mirrors being very slow. The network was still saturated on that link (the other 10Gbit link was only doing about 2-3Gbit/sec).
On investigation, it turned out that scrapers were now scraping our master mirrors. This was bad: all the bandwidth spent downloading every package ever over http was saturating the link. These seemed to mostly be what I am calling "type 1" scrapers.
"type 1" are scrapers coming from clouds or known network blocks. These are mostly known in anubis'es list and it can just DENY them without too much trouble. These could also manually be blocked, but you would have to maintain the list(s).
"type 2" are the worse kind. Those are the browser botnets, where the connections are coming from a vast diverse set of consumer ip's and also since they are just using someone elses computer/browser they don't care too much if they have to do a proof of work challenge. These are much harder to deal with, but if they are hitting specific areas, upping the amount of challenge anubis gives those areas helps if only to slow them down.
First order of business was to set up anubis in front of them. There's no epel9 package for anubis, so I went with the method we used for pagure (el8) and just set it up using a container. There was a bit of tweaking around to get everything right, but I got it in place by mid morning and it definitely cut the load a great deal.
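For anyone wanting to try something similar, a minimal sketch of running anubis from a container looks roughly like this (image tag and values are illustrative, not our exact production config):
podman run -d --name anubis \
  -e BIND=:8923 \
  -e TARGET=http://127.0.0.1:80 \
  -e DIFFICULTY=4 \
  -p 8923:8923 \
  ghcr.io/techarohq/anubis:latest
The frontend httpd then proxies requests to anubis, which challenges suspect clients and passes good traffic through to the real backend.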
Also, at the same time it turned out we still had prefork apache config on the download servers, which we have not used in a while. So I cleaned all that up and updated things so their apache setup could handle a lot more connections.
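For reference, with the event mpm the tuning knobs look something like the following (illustrative values only, not our exact settings):
# event MPM instead of the old prefork; each child handles many
# connections with threads rather than one process per connection
<IfModule mpm_event_module>
    ServerLimit          32
    ThreadsPerChild      64
    MaxRequestWorkers  2048
</IfModule>
MaxRequestWorkers can go up to ServerLimit * ThreadsPerChild, which scales far past what prefork allows.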
The BW used was still high though, and a bit later I figured out why. The websites had been updated to point downloads of CHECKSUM files to the master mirrors. This was to make sure they were all coming from a known location, etc. However, accidentally _all_ artifact download links were pointing to the master mirrors. Luckily we could handle the load, and also luckily there wasn't a release going on, so fewer people were downloading. Switching that back to point to mirrors got things happier.
So, hopefully scrapers handled again... for now.
Infra Sprint planning meeting
So, as many folks may know, our Red Hat teams are all trying to use agile and scrum these days. We have various things in case anyone is interested:
- We have daily standup notes from each team member in matrix. Folks submit them with a bot and it posts to a team room. You can find them all in the #cle-standups:fedora.im space on matrix. This daily is just a quick 'what did you do', 'what do you plan to do', plus any notes or blockers.
- We have been doing retro/planning meetings, but those have been in video calls. However, there's no reason they need to be there, so I suggested we just meet on matrix, for anyone interested. The first of these will be monday in the #meeting-3:fedoraproject.org room at 15UTC. We will talk about the last 2 weeks and plan what we want to try and get done in the next 2.
The forge projects boards are much nicer than the pagure boards were, and we can use them more effectively. Here's how it will work:
Right now the current sprint is in: https://forge.fedoraproject.org/infra/tickets/projects/325 and the next one is in: https://forge.fedoraproject.org/infra/tickets/projects/326
On monday we will review the first, move everything that wasn't completed over to the second, add/tweak the second one, then close the first one, rename 'next' to 'current', and add a new 'next' one. This will allow us to track what was done in which sprint and be able to populate things for the next one.
Additionally, we are going to label incoming tickets that are just 'day-to-day' requests we need to do, and add those to the current sprint to track. That should give us an idea of how much of what we do cannot be planned for.
Mass update/reboot outage
Next week we are also going to be doing a mass update/reboot cycle, with an outage on thursday. This is pretty overdue, as we haven't done one since before the holidays.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/115951447954013009
24 Jan 2026 5:27pm GMT
23 Jan 2026
Fedora People
Christof Damian: Friday Links 26-03
23 Jan 2026 9:00am GMT
Remi Collet: 📝 Redis version 8.6 🎲
RPMs of Redis version 8.6 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
⚠️ Warning: this is a pre-release version not ready for production usage.
1. Installation
Packages are available in the redis:remi-8.6 module stream.
1.1. Using dnf4 on Enterprise Linux
# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.6/common
1.2. Using dnf5 on Fedora
# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset redis
# dnf module enable redis:remi-8.6
# dnf install redis --allowerasing
You may have to remove the valkey-compat-redis compatibility package.
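If dnf does not resolve that automatically, removing it by hand is straightforward (illustrative):
# dnf remove valkey-compat-redis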
2. Modules
Some optional modules are also available:
- RedisBloom as redis-bloom
- RedisJSON as redis-json
- RedisTimeSeries as redis-timeseries
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
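For example, to skip the module packages for a single install, or to confirm afterwards that they were loaded (both commands are illustrative; adjust to your setup):
# dnf install --setopt=install_weak_deps=False redis
# redis-cli MODULE LIST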
The modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a similar set of modules, requiring some packaging changes already applied in the Fedora official repository.
Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, module packages are very far from Packaging Guidelines, so obviously not ready for a review.
4. Statistics
Download statistics charts for the redis, redis-bloom, redis-json, and redis-timeseries packages.
23 Jan 2026 7:28am GMT
22 Jan 2026
Fedora People
Fedora Badges: New badge: CentOS Connect 2026 Attendee !
22 Jan 2026 10:40am GMT
Fedora Badges: New badge: DevConf India 2026 Attendee !
22 Jan 2026 5:58am GMT
21 Jan 2026
Fedora People
Evgeni Golov: Validating cloud-init configs without being root
21 Jan 2026 7:42pm GMT
Fedora Infrastructure Status: dl.fedoraproject.org slow
21 Jan 2026 12:00pm GMT
Ben Cotton: Use your labels
Most modern issue trackers offer a label mechanism (sometimes called "tags" or a similar name) that allows you or your users to set metadata on issues and pull/merge requests. It's fun to set them up and anticipate all of the cool things you'll do. But it turns out that labels you don't use are worse than useless. As I wrote a few years ago, "adding more labels adds cognitive overhead to creating and managing issues, so you don't want to add complexity when you don't have to."
A label that you don't use just complicates the experience and doesn't give you useful information. A label that you're not consistent in using will lead to unreliable analysis data. Use your labels.
Jeff Fortin Tam highlighted one benefit to using labels: after two years of regular use in GNOME, it was easy to see nearly a thousand performance improvements because of the "Performance" label. (As of this writing, the count is over 1,200.)
How to ensure you use your labels
The problem with labels is that they're either present or they're not. If your process requires affirmatively adding labels, then you can't treat the absence of a label as significant. The label might be absent because it doesn't apply, or it might be absent because nobody remembered to apply it. By the same token, you don't want to apply all the labels up front and then remove the ones that don't apply. That's a lot of extra effort.
There are two parts to consistent label usage. The first is having a simple and well-documented label setup. Only have the labels you need. A label that only applies to a small number of issues is probably not necessary. Clearly document what each label is for and under what conditions it should be applied.
The other part of consistent label usage is to automatically apply a "needs triage" label. Many ticket systems support doing this in a template or with an automated action. When someone triages an incoming issue, they can apply the appropriate labels and then remove the "needs triage" label. Any issue that still includes a "needs triage" label should be excluded from any analysis, since you can reasonably infer that it hasn't been appropriately labeled.
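As an illustration of why the "needs triage" convention pays off at analysis time, here is a rough sketch using the GitHub CLI and jq (the label names are whatever your project defines; adjust to taste):
# Count closed issues labeled "performance", excluding anything that
# still carries "needs triage" (i.e., was never actually triaged):
gh issue list --label performance --state closed --limit 1000 \
  --json number,labels |
  jq '[ .[] | select(any(.labels[]; .name == "needs triage") | not) ] | length'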
You'll still miss a few here and there, but that will help you use your labels, and that makes the labels valuable.
This post's featured photo by Angèle Kamp on Unsplash.
The post Use your labels appeared first on Duck Alignment Academy.
21 Jan 2026 12:00pm GMT
20 Jan 2026
Fedora People
Peter Czanik: Call for testing: syslog-ng 4.11 is coming
20 Jan 2026 12:44pm GMT