18 Sep 2025
Fedora People
Jeremy Cline: Fedora Signing Update, September edition
18 Sep 2025 3:34pm GMT
Ben Cotton: Flash sale on all Pragmatic Bookshelf titles
With less than 100 days until Christmas, now is a great time to save 45% on titles from The Pragmatic Bookshelf! Use promo code flashsale at pragprog.com between 1400 UTC (10am Eastern) September 18 and 1400 UTC (still 10am Eastern) on September 20 to save 45% on every title (except The Pragmatic Programmers).
Not sure what you should get? If you don't have a copy of Program Management for Open Source Projects, now's a great time to get one. I've been a technical reviewer for a few other books as well:
- Forge Your Future with Open Source by VM Brasseur
- Designing Data Governance from the Ground Up by Lauren Maffeo
- Guiding Star OKRs by Staffan Nöteberg
- Business Success with Open Source by VM Brasseur
I've read (or have in my stack to read) other books as well:
- Manage It! by Johanna Rothman
- Real-World Kanban by Mattias Skarin
- Hands-On Rust by Herbert Wolverson
- Advanced Hands-On Rust by Herbert Wolverson
- Powerful Command-Line Applications in Go by Ricardo Gerardi
With hundreds of titles to choose from, there's something for you and the techies in your life.
This post's featured photo by Josh Appel on Unsplash.
The post Flash sale on all Pragmatic Bookshelf titles appeared first on Duck Alignment Academy.
18 Sep 2025 2:00pm GMT
Fedora Community Blog: Announcing the Soft Launch of Fedora Forge
We are thrilled to announce the soft launch of Fedora Forge, our new home for Fedora Project subprojects and Special Interest Groups (SIGs)! This marks a significant step forward in modernizing our development and collaboration tools, providing a powerful platform built on Forgejo. For more background on why we chose Forgejo, see the previous community blog post.
A New Home for Fedora Collaboration
We designed Fedora Forge as a dedicated space for official Fedora Project work. Unlike pagure.io, which hosted personal projects alongside Fedora Project work, Fedora Forge focuses on supporting subprojects and SIGs. This structured approach streamlines our efforts and simplifies contribution to official Fedora teams.
We are migrating projects from select teams, including Release Engineering (RelEng), the Council, and the Fedora Engineering Steering Committee (FESCo). This phased approach lets us test the platform thoroughly before opening it to more subprojects and SIGs.
If you are a leader of a team or SIG and would like to request a new organization or team on Fedora Forge, please see our Requesting a New Organization and/or Team guide for detailed instructions.
Seamless Migration with Pagure Migrator
The Pagure Migrator is a key part of this launch. We developed and upstreamed this new Forgejo feature to ensure smooth transitions. This utility moves projects from Pagure-based Git forges seamlessly. It brings over historical data like pull requests, issue tickets, topics, labels, and users. As subprojects and SIGs move over, their valuable history and ongoing work come with them. This ensures continuity and a painless transition for contributors.
Get Ready for What's Next!
This soft launch is just the beginning. As we test the waters by settling these first subprojects and SIGs on Fedora Forge, we will be preparing to open it up in the coming weeks. We are confident that Fedora Forge will become an invaluable tool for the community, providing a robust and modern platform for collaboration.
Please use the #git-forge-future tag on Fedora Discussion to communicate your feedback and the #fedora-forgejo:fedoraproject.org channel on Fedora Chat to collaborate with us.
The post Announcing the Soft Launch of Fedora Forge appeared first on Fedora Community Blog.
18 Sep 2025 10:00am GMT
17 Sep 2025
Fedora People
Brian (bex) Exelbierd: Day 3: Microsoft Hackathon — Thread Heuristics, Importance Signals, and Agent Editing
Today started with a plan: drop my kid at school, head to a coworking space, meet my teammate, and push the importance model forward. Reality: unexpected stuff pulled me back home first. Momentum recovered later, but the change in expectations reinforced how fragile context can be when doing iterative LLM + data work.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
Thread heuristic exporter
I built a first pass "thread stats" exporter: sender count, participant diversity, tokens of note, and other structural hints. Deterministic, fast, and inspectable. This gives a baseline before letting an LLM opine. The goal: reduce the search space without prematurely deciding what's "important."
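The shape of such an exporter can be sketched roughly as follows. This is an illustrative sketch only: the `sender`/`body` field names and the token-length filter are my assumptions, not the post's actual schema.

```python
from collections import Counter

def thread_stats(messages):
    """Deterministic structural stats for one mailing-list thread.

    `messages` is a list of dicts with hypothetical 'sender' and
    'body' keys -- the real exporter's schema may differ.
    """
    senders = Counter(m["sender"] for m in messages)
    # Crude "tokens of note": lowercase words longer than 3 chars
    tokens = Counter(
        tok.lower()
        for m in messages
        for tok in m["body"].split()
        if len(tok) > 3
    )
    return {
        "message_count": len(messages),
        "unique_senders": len(senders),
        # Diversity approaches 1.0 when every message has a new sender
        "sender_diversity": len(senders) / max(len(messages), 1),
        "top_tokens": tokens.most_common(5),
    }
```

Because it is deterministic, the same thread always yields the same numbers, which is what makes the baseline inspectable before any LLM opines.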
Planning importance signals
With that baseline, I worked (with ChatGPT-5) on how to move beyond raw counts. What makes a thread worth surfacing? Some dimensions that matter to me:
- Governance, Policy or consensus decisions
- Direct relevance to Azure or adjacent cloud platform concerns
- Presence of Debian package names (as a proxy for concrete change surface)
- Diversity of participants vs. a back-and-forth between two people
- Emergence of unusual or "quirky" sidebars that might signal cultural or directional shifts (these are often interesting even if not strictly impactful)
I want to avoid letting pure volume masquerade as importance. A long bikeshed is still a bikeshed.
Data enrichment
I experimented with a regexp to match Debian package names. That worked a bit too well. Claude Sonnet was reporting threads moving forward and I couldn't believe the numbers. A little investigation and it turned out the regexp was capturing everything. Sonnet suggested pulling in a Debian package list to tag occurrences inside threads. That, plus spaCy-based entity/token passes, lets me convert unstructured text into a feature layer the LLM can later consume. The MVP now narrows roughly 424 August 2025 threads to ~250 candidates for deeper scoring. Not "good," just narrower. False negatives remain a risk; I'd rather over‑include at this stage.
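The package-tagging fix can be sketched like this: instead of trusting a loose "looks like a package name" regexp, tokenize and check membership in an actual package list. The function name and the idea of passing in a lowercase name set (e.g. from a dump of the archive's package index) are illustrative assumptions.

```python
import re

def tag_packages(text, known_packages):
    """Count occurrences of known Debian package names in a thread.

    A bare regexp over-matches badly; matching tokens against a
    real package list keeps only genuine hits.
    """
    # Debian package names use lowercase letters, digits, '+', '-', '.'
    token_re = re.compile(r"[a-z0-9][a-z0-9+.-]+")
    known = set(known_packages)
    found = {}
    for tok in token_re.findall(text.lower()):
        if tok in known:
            found[tok] = found.get(tok, 0) + 1
    return found
```

The membership test is what turns "capturing everything" into a feature layer worth feeding to a model.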
Why deterministic first
Locking down deterministic extraction reduces some of the noise before LLM scoring. It also provides a dial I can adjust as part of the human-in-the-loop review process I envision.
Next phase: human-in-the-loop LLM
Tomorrow I plan to let the model start proposing which threads look important or just interesting, then review those outputs manually - back and forth until the signals feel reliable. Goal: lightweight human-in-the-loop review, not handing over judgment. Keeping explanations terse will matter, or I (and any hypothetical readers) will drown in synthetic prose.
Agent editing workflow
While agents "compiled," I created an AGENTS.md doc to formalize how I want writing edits to work. This is about editing my prose, not letting a model co-author new ideas. Core rules I laid down include:
- Challenge structure and assumptions when they look shaky - do not invent content
- Preserve hedges; mark uncertainty with [UNCLEAR] instead of guessing
- Keep my voice; I review diff output before accepting anything
Most importantly, I am not an emoji-wielding performative Thought Leader(tm) turned Influencer(tm). The new guidance has already reduced noise. I still refuse to let a model start from a blank page; critique mode is the win. Visual Studio Code diff views make trust-building easier - everything is inspectable.
Closing
Today was half heuristics, half meta-process. The importance problem still feels squishy, but the scaffolding is there. Now I'm going to stop and pick the kid up and let her redeem a grocery-store stamp card for a Smurf doll. Grocery stores are weird and since she speaks Czech she can do the talking.
17 Sep 2025 6:00pm GMT
Ben Cotton: Planning ahead is the most important part of code of conduct enforcement
The goal of a code of conduct is to explicitly define the boundaries of acceptable behavior in a community so that all members can voluntarily abide by it. Unfortunately, people will occasionally violate the code of conduct. When this happens, you have to take some kind of corrective action. This is where many communities struggle.
Too often, the community lacks a well-developed plan for responding to code of conduct violations. The first reason I've seen for this is the belief (hope?) that the community members will behave. This is too optimistic, but totally relatable. We all want to think the best of our communities. The other reason I see is a general reluctance to create policy before it's needed. This makes a lot of sense in almost every other situation, but code of conduct enforcement is different. Developing processes on an as-needed basis is usually my suggested approach, but you cannot build your code of conduct enforcement on the fly. Here's why.
Ad hoc processes seem unfair
Even if the outcome is correct, a process that's made up on the fly will be perceived as unfair. If someone is given a timeout from the project because they were clearly harassing another community member, you won't get much push back. On the other hand, if someone is subtly being a jerk while advancing a controversial opinion, a decision to suspend them could be seen as a punishment for their controversial opinion. People inclined to a bad-faith interpretation can always find a reason to cry foul, but most potential critics will understand if you're following an established process.
Sometimes the on-the-fly process seems unfair because it is unfair. If two similar incidents occur a year apart, they may be handled very differently. This isn't because of malice toward the more harshly dealt with person or favoritism toward the more leniently dealt with person. It's because a year has passed and different people are potentially handling the cases. This is a recipe for inconsistent response.
Code of conduct response can be complicated
The other reason to plan ahead is that code of conduct response can be complicated. In minor cases, someone talks to the offending party, and everyone moves on with life. Those cases are easy. But in more severe cases, like where someone is temporarily (or permanently) suspended from the project, there are more steps to take. You may have to coordinate disabling one or more accounts (including social media accounts). If funding for travel or events is involved, you may need to pause or revoke that. If the person is a maintainer of some component, you need to ensure that someone else is available to handle those responsibilities.
Trying to figure out in the moment what needs to be done almost guarantees missing something. And the people with the ability to make the necessary changes might reasonably hesitate to do it in response to a request out of nowhere. In the case of temporary suspensions, you need to remember to re-enable access. Again, without a defined process, you might forget entirely, or at least forget to re-enable certain privileges.
With severe incidents, the situation is already distressing enough. A pre-defined process doesn't make it easy, but it reduces the strain.
This post's featured photo by Tingey Injury Law Firm on Unsplash.
The post Planning ahead is the most important part of code of conduct enforcement appeared first on Duck Alignment Academy.
17 Sep 2025 12:00pm GMT
Brian (bex) Exelbierd: The Couple Across the Way
From my window and balcony, I can see the balcony of a couple who live across the way. They seem to be about 10 to 15 years older than me, and I gather from their habits that they're both retired. Several times a day, they step out onto their balcony with cups of coffee. One of them is often on the phone, while the other might sit quietly. Their voices are distinctive, so if my window is open, I always hear them.
They say you shouldn't make up stories about people you observe and expect them to be true. It's like assuming someone's Instagram reflects their real life. But because I work from home, I see this couple frequently, and I think it's lovely that they have this time, this space, and this ritual.
I often wonder what they're saying-either to each other or to the person on the phone. My Czech isn't strong enough to understand much, so I can't piece together their conversations. I've never asked a Czech-speaking friend to listen in, either. If I did, I suspect I'd hear complaints, as I've been told that Czechs are prone to complaining. But one of the perks of not speaking the language fluently is that I can imagine otherwise. When I walk down the street, I assume people are talking about how beautiful the day is and how lucky we are to live in this city, rather than airing grievances.
What prompted me to write this today is that, for the first time, I noticed the woman wearing what I can only describe as a silly hat. She's never worn a hat before, at least not that I've noticed. The man occasionally wears a baseball cap, but this was different. It made me wonder: is today silly hat day? If so, I hope it's a happy one.
17 Sep 2025 11:50am GMT
Peter Czanik: Nightly syslog-ng RPM packages for RHEL & Co.
17 Sep 2025 9:36am GMT
Fedora Magazine: Introducing complyctl for Effortless Compliance in Fedora
complyctl is a powerful command-line utility implementing the principles of "ComplianceAsCode" (CaC) with high scalability and adaptability for security compliance.
In today's rapidly evolving digital landscape, maintaining a robust security posture isn't just a best practice - it is a necessity. For Fedora users, system administrators, and developers, ensuring that your systems meet various security and regulatory requirements can often feel like a daunting, manual task. But what if you could standardize and automate much of this process, making compliance checks faster, easier to audit, and seamlessly integrated into your workflows?
This is now a reality, enabled by the ComplyTime projects. Each project focuses on a specific task and is designed to be easily integrated, allowing a robust, flexible, and scalable combination of microservices that communicate in standardized formats. This makes it much easier to adapt to compliance demands and to adopt new technologies quickly. There are multiple exciting projects actively evolving under the umbrella of the ComplyTime organization. In this article I would like to highlight complyctl, the ComplyTime CLI for Fedora, and the main features that make it an excellent option for maintaining a robust security posture on your Fedora systems.
complyctl is a powerful command-line utility available since Fedora 42. Its design uses the principles of "ComplianceAsCode" (CaC) with high scalability and adaptability. It has a technology-agnostic core and is easily extended with plugins, which lets users combine the best of every available underlying technology behind a simple, standardized user interface.
The Power of ComplianceAsCode with complyctl
At its heart, complyctl is a tool for performing compliance assessment activities, scaled by a flexible plugin system that lets users combine the best available assessment technologies for each check.
The complyctl plugin architecture allows quick adoption and combination of different scanner technologies. The core design is technology agnostic, standardizing inputs and outputs in machine-readable formats that make compliance artifacts highly reusable and shareable. It currently leverages the Open Security Controls Assessment Language (OSCAL), and its architecture also allows smooth adoption of future standards, making it a reliable, modern solution for the long term.
This might sound technical, but the benefits are simple:
- Automation and Speed: Traditional compliance audits can be slow, manual, complex, and prone to human error. complyctl relies on standardized, machine-readable formats, allowing automation without technology or vendor lock-in.
- Accuracy and Consistency: Machines are inherently more consistent than human reviewers. complyctl's reliance on OSCAL provides a standardized format for expressing security controls, assessment plans, and results. This standardization is crucial for interoperability. It allows consistent processing and understanding of compliance data across different tools and systems.
- Scalability and Integration: complyctl simplifies the integration of compliance checks into your development and deployment pipelines. An OSCAL Assessment Plan can be created and customized once, then reused across multiple systems. Ultimately, compliance checks can be implemented faster and compliance gaps are caught earlier. This prevents non-compliant configurations from reaching production environments.
- Extensibility with Plugins (including OpenSCAP): The plugin-based architecture of complyctl makes it incredibly versatile. An example is the complyctl-openscap-plugin, which extends complyctl's capabilities to use OpenSCAP Scanner and the rich content provided by scap-security-guide package. This allows an immediate and smooth adoption of complyctl using a well-established assessment engine while providing a modern, OSCAL-driven workflow for managing and executing security compliance checks. It also allows a smooth and gradual transition to other scanner technologies.
By embracing complyctl, Fedora users can more easily maintain a strong security posture.
Getting Started with complyctl: A Practical Tutorial
Ready to put complyctl to work? It is likely simpler than you expect. The following is a step-by-step guide to start using complyctl on your Fedora system.
1. Installation
First, install complyctl, if necessary. It is available as an RPM package in official repositories:
sudo dnf install complyctl
2. Understanding the Workflow
complyctl follows a logical, sequential workflow:
- list: Discover available compliance frameworks.
- plan: Create an OSCAL Assessment Plan based on a chosen framework. This plan acts as your assessment configuration.
- generate: Generate executable policy artifacts for each installed plugin based on the OSCAL Assessment Plan.
- scan: Call the installed plugins to scan the system using their respective policies and finally aggregate the results in a single OSCAL Assessment Results file.
Let's walk through these commands.
3. Step-by-Step Tutorial
Step 1: List Available Frameworks
To begin, you need to know which compliance frameworks complyctl can assess your system against. Currently the complyctl package includes the CUSP Profile out-of-the-box.
Use the list command to show the available frameworks:
complyctl list
This command will output a table, showing the available frameworks. Look for the Framework ID column, as you'll need this for the next step.
Example:

Optionally, you can also include the --plain option for simplified output.
Step 2: Create an Assessment Plan
Once you've identified a Framework ID, you can create an OSCAL Assessment Plan. This plan defines what will be assessed. The plan command will generate an assessment-plan.json file in the complytime directory.
complyctl plan cusp_fedora
This command creates the user workspace in the "complytime" directory:
tree complytime
complytime/
└── assessment-plan.json
The JSON file is a machine-readable representation of your chosen compliance policy.
Step 3: Install a plugin
In this tutorial we will use OpenSCAP Scanner as the underlying technology for compliance checks. So, we also want to install the OpenSCAP plugin for complyctl, as well as the OpenSCAP content delivered by the scap-security-guide package:
sudo dnf install complyctl-openscap-plugin scap-security-guide
Step 4: Generate Policy Artifacts
With your assessment-plan.json in place and the desired plugins installed, the generate command translates this declarative plan into policy artifacts for the installed plugins. These are the actual plugin-specific instructions the plugins will use to perform the checks.
complyctl generate
This command prepares the assessment for execution.
tree complytime/
complytime/
├── assessment-plan.json
└── openscap
    ├── policy
    │   └── tailoring_policy.xml
    ├── remediations
    │   ├── remediation-blueprint.toml
    │   ├── remediation-playbook.yml
    │   └── remediation-script.sh
    └── results
Step 5: Execute the Compliance Scan
Finally, the scan command runs the assessment using the installed plugins. The results appear in the assessment-results.json file by default.
complyctl scan
For human-readable output, which is useful for review and reporting, you can add the --with-md option. This will generate both assessment-results.json and assessment-results.md files.
complyctl scan --with-md
This Markdown file provides a clear, digestible summary of your system's compliance status, making it easy to share with auditors or other stakeholders.
tree complytime/
complytime/
├── assessment-plan.json
├── assessment-results.json
├── assessment-results.md
└── openscap
    ├── policy
    │   └── tailoring_policy.xml
    ├── remediations
    │   ├── remediation-blueprint.toml
    │   ├── remediation-playbook.yml
    │   └── remediation-script.sh
    └── results
        ├── arf.xml
        └── results.xml
Final thoughts
complyctl is an open-source tool built for and by the community. We encourage you to give it a try.
- Find us on GitHub at complyctl repository.
- If you find an issue or have a feature request, please open an issue, propose a PR, or contact the maintainers. Your feedback will help shape the future of this tool.
- Collaboration in the ComplianceAsCode/content community is also welcome, to help us shape Compliance profiles for Fedora.
17 Sep 2025 8:00am GMT
Fedora Infrastructure Status: Fedora Copr outage
17 Sep 2025 8:00am GMT
Avi Alkalay: Hermeto Pascoal vive!
Pra quem quiser conhecer um pouco da genialidade de Hermeto Pascoal (1936-2025), preparei uma playlist com algumas de suas composições interpretadas por outros músicos brilhantes do Brasil e do mundo. Prepare-se para melodias de fora deste planeta, frases de originalidade intrigante, harmonias dissonantes, tudo frequentemente sobrepostas sobre ritmos brasileiros. Hermeto foi um dos maiores compositores de nossa era. Hermeto vive para sempre em sua música.
For anyone who wants to experience a bit of Hermeto Pascoal's genius, I've put together a playlist with some of his compositions interpreted by other brilliant musicians from Brazil and around the world. Get ready for melodies from out of this world, phrases of intriguing originality, dissonant harmonies, all often layered over Brazilian rhythms. Hermeto was one of the greatest composers of our time. Hermeto lives forever in his music.
17 Sep 2025 1:09am GMT
16 Sep 2025
Fedora People
Brian (bex) Exelbierd: Day 2: Microsoft Hackathon — Distractions, Brainstorming, and Infrastructure
Today was a mixed bag. I started with the goal of advancing the central metadata architecture, which is critical for figuring out what's worth surfacing in the notebook. However, distractions and infrastructure challenges dominated the day. The day included a hot project at work, a meeting that couldn't be skipped, and the ever-present TPS reports (or in my case, an expense report from a recent trip). These interruptions made it hard to focus on the metadata work.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
Brainstorming with ChatGPT
Despite the distractions, I spent some time brainstorming with ChatGPT on methods for surfacing important information. We explored various heuristics and LLM concepts. It was a productive session that gave me ideas to refine and potentially turn into something valuable.
One area we explored was how to determine if a thread was "important." This included qualitative factors like the number of unique participants in a thread and, in a future with more data, the history of those participants' involvement in the list. We also discussed keyword surfacing as a way to highlight significant topics and the potential for trend analysis to predict emerging themes over time. While we touched on some additional qualitative measures, I don't recall all the specifics.
Infrastructure Challenges
The day ultimately became about infrastructure. Scaling issues forced me to shift to a database for the MVP, and SQLite came to the rescue. While not ideal, it's a practical solution for now.
The motivation for SQLite was scalability. To ensure thread completion, I had to load more than one month of data into the MVP. Bonus months from testing added even more data, so it made sense to work with fresh data rather than repeatedly processing the same old files. SQLite also provided a way to query the data efficiently without having to read through tons of JSON files. While this approach works for the MVP, it's clear that a more robust solution-like an MCP server-might be needed in the future.
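The kind of querying SQLite enables can be sketched with the standard library's sqlite3 module. The table and column names below are illustrative assumptions, not the MVP's actual schema.

```python
import sqlite3

# In-memory database for the sketch; the MVP would use a file
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        thread_id TEXT,
        sender    TEXT,
        sent_at   TEXT,
        body      TEXT
    )
""")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?, ?, ?)",
    [
        ("t1", "alice", "2025-08-01", "initial proposal"),
        ("t1", "bob",   "2025-08-02", "counter-proposal"),
        ("t2", "carol", "2025-08-03", "bug report"),
    ],
)

# One query replaces a pass over piles of JSON files:
rows = conn.execute("""
    SELECT thread_id,
           COUNT(*)               AS messages,
           COUNT(DISTINCT sender) AS participants
    FROM messages
    GROUP BY thread_id
    ORDER BY participants DESC
""").fetchall()
```

A single aggregate query like this answers "which threads have the most participants" without re-reading every source file, which is exactly the efficiency win over flat JSON.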
Final Takeaway
One thing this day reinforced is how fragile AI-driven development can feel without proper context management. Whether brainstorming with ChatGPT or coding in agent mode using Sonnet in Visual Studio Code, I've noticed that when tools lack memory or context, things can quickly go off the rails. For example, fragile solutions like resorting to regex to parse HTML instead of using a proper library like BeautifulSoup can emerge. This highlights the need for better agent hints or configuration files, though the idea of managing those feels cumbersome.
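To illustrate the regex-vs-parser point: even Python's standard library includes a real HTML parser that handles attribute order, quoting, and nesting that a regex trips over. A small sketch (the class and sample markup are my own, purely for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes -- the kind of job a regex over
    raw HTML gets wrong on quoting and attribute order."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs, already unquoted
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<p>See <a class="x" href="https://example.org">this</a> '
            'and <a href="/local">that</a>.</p>')
```

Third-party libraries like BeautifulSoup go further, but even this stdlib approach beats a regex the moment the markup varies.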
Tomorrow, I hope to meet with a teammate to advance the more interesting parts of the project. With the infrastructure in place, I'm optimistic about making real progress.
This remains a work in progress, but I'm hopeful that the brainstorming and infrastructure work will pay off in the coming days.
16 Sep 2025 7:50pm GMT
Fedora Magazine: Announcing Fedora Linux 43 Beta
On Tuesday, 16 September 2025, it is our pleasure to announce the availability of Fedora Linux 43 Beta! This release comes packed with the latest version upgrades of existing features, plus a few new ones too. As with every beta release, this is your opportunity to test out the upcoming Fedora Linux release and give some feedback to help us fine tune F43 final. We hope you enjoy this latest version of Fedora!
How to get the beta release
You can download F43 Beta, or our pre-release edition versions, from any of the following places:
- Fedora Workstation 43 Beta
- Fedora KDE Plasma Desktop 43 Beta
- Fedora Server 43 Beta
- Fedora IoT 43 Beta
- Fedora Cloud 43 Beta
The Fedora CoreOS "next" stream moves to the beta release one week later, but F43 content is already available now from its branched stream.
You can also update an existing system to the beta using DNF system-upgrade.
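The documented system-upgrade flow looks roughly like this (on systems still running DNF 4, the dnf-plugin-system-upgrade package may need to be installed first; as always with a beta, back up before upgrading):

```shell
# Bring the current release fully up to date first
sudo dnf upgrade --refresh

# Download the Fedora 43 Beta packages, then reboot into the offline upgrade
sudo dnf system-upgrade download --releasever=43
sudo dnf system-upgrade reboot
```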
The F43 Beta release content is also available for Fedora Spins and Labs, with the exception of the following:
- MATE - not currently available on any architecture with F43 content
- i3 - not currently available on aarch64 with F43 content (other architectures are available)
F43 Beta highlights
Installer and desktop Improvements
Anaconda WebUI for Fedora Spins by default: This creates a consistent and modern installation experience across all Fedora desktop variants. It brings us closer to eventually replacing the older GTK installer. This ensures all Fedora users can benefit from the same polished and user-friendly interface.
Switch Anaconda installer to DNF5: This change provides better support and debugging for package-based applications within Anaconda. It is a bigger step towards the eventual deprecation or removal of DNF4, which is now in maintenance mode.
Enable auto-updates by default in Fedora Kinoite: This change ensures that users are consistently running a system with the latest bug fixes and features after a simple reboot. Updates are applied automatically in the background.
Set Default Monospace Fallback Font: This change ensures that when a specified monospace font is missing, a consistent fallback font is used. Font selection also remains stable and predictable, even when the user installs new font packages. No jarring font changes should occur, as sometimes happened in previous versions.
System enhancements
GNU Toolchain Update: The updates to the GNU Toolchain ensure Fedora stays current with the latest features, improvements, and bug and security fixes from the upstream gcc, glibc, binutils, and gdb projects. They guarantee a working system compiler, assembler, static and dynamic linker, core language runtimes, and debugger.
Package-specific RPM Macros For Build Flags: This change provides a consistent, standard way for packages to add to the default list of compiler flags. It also offers a cleaner, simpler method for package maintainers to make per-package adjustments to build flags. This avoids the need to manually edit and re-export environment variables, prevents potential issues that could arise from the old manual method, and ensures the consistent application of flag adjustments.
Build Fedora CoreOS using Containerfile: This change brings the FCOS build process under a standard container image build, moving away from the custom tool, CoreOS Assembler. It also means that anyone with Podman installed can build FCOS. This simplifies the process for both individual users and automated pipelines.
Upgrades and removals
Deprecate the Gold Linker: The binutils-gold subpackage is deprecated. This change simplifies the developer experience by reducing the number of available linkers from four to three. It streamlines choices for projects and safeguards against "bitrot", where a package's quality can decline until it becomes unbuildable or insecure over time.
Retire python-nose: The python-nose package will be removed in F43. This prevents the creation of new packages with a dependency on an unmaintained test runner. Developers are encouraged to migrate to actively maintained testing frameworks such as python3-pytest or python3-nose2.
Retire gtk3-rs, gtk-rs-core v0.18, and gtk4-rs v0.7: This change prevents Fedora from continuing to depend on old, unmaintained versions of these bindings, avoids shipping obsolete software, and reduces the number of unmaintained package versions.
Python 3.14: Updating the Python stack in F43. This means that by building Fedora packages against an in-development version, critical bugs can be identified and reported before the final 3.14.0 release. This helps the entire Python ecosystem. Developers also gain access to the latest features in this release. More information is available at https://docs.python.org/3.14/whatsnew/3.14.html.
Golang 1.25: This change provides Fedora Linux 43 Beta with the latest new features in Go. These include that go build -asan now defaults to leak detection at program exit, the go doc -http option starts a documentation server, and subdirectories of a repository can now be used as a module root. Since Fedora will keep as close to upstream as possible, this means we will continue to provide a reliable development platform for the Go language and projects written in it.
Idris 2: Users gain access to new features in Idris 2. These include Quantitative Type Theory (QTT), which enables type-safe concurrent programming and fine-grained control over resource usage. It also has a new core language, a more minimal prelude library, and a new target to compile to, Chez Scheme.
More information
Details and more information on the many great changes landing in Fedora Linux 43 are available on the Change Set page.
16 Sep 2025 2:05pm GMT
Mat Booth: Configuring Vim Solarized to Follow the System Dark Mode Preference
I like to switch my system between light and dark modes as lighting conditions change, and I want my terminal windows to always respect my current system preference. This article shows how to configure vim to automatically change between light and dark variants of the Solarized colour scheme, to match the current system dark mode preference.
I usually go years and years between OS re-installs, so I've forgotten how to do this. Hopefully this article will be useful for future me. The first thing to do after installing Fedora on a new machine however, is to switch the default editor back to vim, because I have no muscle memory for nano and I refuse to change. 😅
$ sudo dnf swap nano-default-editor vim-default-editor
Now onto the main business of configuring the terminal and vim to use my favourite colour palette, Solarized by Ethan Schoonover.
Solarize The Terminal
Ptyxis, the new default terminal in Fedora Workstation Edition, has an excellent set of colour palette options. From the hamburger menu drop-down, select the Follow System Style button from the three options at the top. This causes Ptyxis to switch between light and dark modes when you change the system dark mode preference, instead of being in dark mode all the time. Then open the Preferences dialog and select the Solarized colour palette from the options listed there:
Solarize Vim
This works well for normal terminal operation, but vim's own default colours can clash terribly with the terminal colour scheme. Sometimes the foreground and background colours are the same or extremely low contrast, which results in impossible-to-read text, as shown here after performing a search for the string "init":
Fortunately Ethan provides a vim-specific implementation of the Solarized colour palette in his vim-colors-solarized repository. This can be installed by downloading the provided vim script into your .vim/colors directory (note the capital -O, which tells wget where to save the file):
$ mkdir -p ~/.vim/colors
$ wget https://raw.githubusercontent.com/altercation/vim-colors-solarized/refs/heads/master/colors/solarized.vim \
    -O ~/.vim/colors/solarized.vim
Then configure the colour scheme in your .vimrc file by adding the following lines:
" Enable Solarized Theme
syntax enable
colorscheme solarized
New vim sessions will now use the correct colours, and are even able to detect whether to use the light or dark variant of Solarized.
Dark Mode Detection
However, vim is only able to detect whether to use the light or dark variant at start-up. This means that if I switch the system dark mode preference while vim is open, I have to close and reopen all my open vim sessions before they will use the correct Solarized variant:
Not even re-sourcing the .vimrc with the :so command corrects the colours. We can, however, edit it so that re-sourcing does fix the colour scheme variant in use, without needing to exit and reload vim.
" Enable Solarized Theme
syntax enable
let sys_colors=system('gsettings get org.gnome.desktop.interface color-scheme')
if sys_colors =~ 'dark'
    set background=dark
else
    set background=light
endif
colorscheme solarized
Expanding upon the previous .vimrc snippet, this explicitly sets vim's background setting depending on the output of a gsettings query for the current system dark mode preference. Now the :so ~/.vimrc command can be used to fix the colours without having to close and reopen vim.
Vimterprocess Communication
It would be even better, of course, to have vim automatically re-source the .vimrc when the system dark mode preference changes.
Vim has a kind of interprocess communication mechanism built in. If it's started with the --servername {NAME} option, vim can accept commands from other vim processes running on your machine. To ensure vim is always started with this option, just add this line to your .bashrc file to create a command alias:
# Always start vim as a server
# with a unique name
alias vi='vi --servername VIM-$(date +%s)'
Now when you run vim (or vi), the session will be named VIM-<SOME_NUMBER>. Commands can be sent to such named sessions using a specially crafted invocation of vim:
$ vim --servername VIM-<SOME_NUMBER> --remote-send ":so ~/.vimrc<CR>"
So all we need to do is write a small shell script to find all running vim processes, determine their session names, and execute the above command for each one. Create the script in your user's local bin directory, e.g. ~/.local/bin/vsignal.sh, and make it executable with chmod +x ~/.local/bin/vsignal.sh:
#!/bin/bash

function signal_vim() {
    # Signal all running instances of vim
    PIDS=$(pgrep -u "$USER" vim)
    for PID in $PIDS ; do
        VIM_ID=$(ps --no-headers -p "$PID" -o args | cut -d' ' -f3)
        vim --servername "$VIM_ID" --remote-send ":so ~/.vimrc<CR>"
    done
}

# Wait for color-scheme changes
gsettings monitor org.gnome.desktop.interface color-scheme | \
while read -r COLOR_SCHEME ; do
    signal_vim
done
Piping the gsettings monitor command into read causes the script to block until the system dark mode preference is changed. When it does, it issues a call to the signal_vim function, performs the magic, and then goes back to blocking. Now, as long as the vsignal.sh script is running, all active vim sessions will immediately switch to the appropriate Solarized colour scheme variant when the system dark mode preference is changed.
A Systemd Theme Sync Service
It's a bit inconvenient to have to start a script whenever you open a terminal, though. The best way to keep this script always running is to let systemd handle it. A new, user-specific service can be created with the following command:
$ systemctl edit --user --force --full theme-sync.service
The following service definition will cause systemd to start the script when you log into your GNOME session:
[Unit]
Description=Dark Mode Sync Service
[Service]
ExecStart=%h/.local/bin/vsignal.sh
[Install]
WantedBy=gnome-session.target
And finally, enable and start the service with the following commands:
$ systemctl --user enable theme-sync.service
$ systemctl --user start theme-sync.service
Now we can switch between light and dark modes to our heart's content, safe in the knowledge that vim will follow suit. 😌
16 Sep 2025 10:00am GMT
15 Sep 2025
Fedora People
Brian (bex) Exelbierd: Day 1: Microsoft Hackathon — Building a Focused Summarizer for Upstream Linux
This week is the Microsoft Hackathon, and I'm using it as a chance to prototype something I've been thinking about for a while: a tool that summarizes what's happening in upstream Linux communities in a way that's actually useful to people who don't have time to follow them day-to-day.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal opinions.
For my MVP, I'm going to try to produce a "What happened in Debian last month" summary from selected mailing lists. It's not a full picture of the community, but it's a solid basis for a proof of concept.
Why this project?
Part of my work at Microsoft involves helping others understand what's going on in upstream Linux communities. That's not always easy - the signal is buried in a lot of noise, and most people don't have time to follow mailing lists or community threads. If this works, I'll have a tool that can generate a newsletter-style summary that's actually useful.
Why Debian?
For this MVP, I chose Debian. It's a community I work with but haven't traditionally followed as closely as Fedora, where I have deeper experience. That makes Debian a good test case - I know enough to judge the output, and I have colleagues who can help validate it. I'm focusing on August 2025 because I already know what happened that month, which gives me a baseline to evaluate the results.
Agentic coding, not vibe coding
Agentic coding, in my view, is when you rely on an LLM to do the heavy lifting - generating code, suggesting structure - but you stay in the loop. You review what's written, check the inputs and outputs, and make sure nothing weird slips in. It's not fire-and-forget, and it's not vibe coding where you just hope for the best. I don't read every line as it's generated, but I do check the architecture and logic. One of my frequent prompt inclusions is "don't assume, ask and challenge my assumptions where appropriate." This helps uncover ideas as I develop, similar to an agile process.
A breakfast pivot
This morning over breakfast with a friend, I walked through the architecture I'd outlined with Copilot on Friday. Originally, I was planning to build a vector database and use retrieval-augmented generation (RAG) to power the summarization. But as we talked, it became clear that this was overkill for the MVP. What I really needed was a simpler memory model - something that could support basic knowledge scaffolding without the complexity of full semantic search.
So I pivoted. Today's work focused on getting the initial data in place: downloading a couple of months of Debian mailing-list emails to ensure I had full threads from August, storing them locally to avoid putting any load on Debian's infrastructure, and building scaffolding to sort and store the data so it supports both metadata generation and LLM access.
Could I have used a vector database or IMAP-backed mail store? Sure. But this was quick, easy, and gave me a chance to practice agentic coding in Python - something I don't get to do much in my day-to-day product management work.
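As a rough illustration of the kind of scaffolding involved (this is my own sketch, not Brian's actual code; the function names and the threading heuristic are assumptions), messages from a locally stored mbox archive can be grouped into threads by walking In-Reply-To chains with Python's standard mail tooling:

```python
import mailbox
from collections import defaultdict
from email.message import Message


def load_messages(mbox_path):
    """Load messages from a locally stored mbox archive."""
    return list(mailbox.mbox(mbox_path))


def build_threads(messages):
    """Group messages into threads by walking each message's
    In-Reply-To chain up to the thread's root message."""
    by_id = {m["Message-ID"]: m for m in messages if m["Message-ID"]}
    threads = defaultdict(list)
    for msg in messages:
        root, seen = msg, set()
        # Follow In-Reply-To until we reach a message with no known
        # parent, guarding against reference cycles in broken headers.
        while root["In-Reply-To"] in by_id and root["Message-ID"] not in seen:
            seen.add(root["Message-ID"])
            root = by_id[root["In-Reply-To"]]
        threads[root["Message-ID"]].append(msg)
    return threads
```

Grouping by root Message-ID like this is enough to spot the truncation problem mentioned below: a thread whose root is missing from the archive shows up keyed by a mid-thread message instead.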
What I'm hoping to learn
This MVP is about testing whether AI-generated insights from community data are actually useful. In OSPO and community spaces, we talk a lot about gathering insights - but we don't always ask whether those insights answer real questions. This is a chance to test that. Can we generate something that's not just interesting, but actionable? It feels a bit like the tail wagging the dog, but sadly that's where we seem to be.
Any surprises?
Nothing major yet, but I appreciated that the LLM caught a pagination issue I missed. I'd assumed a dataset was complete; reconstructing threads revealed that it was oddly truncated. Today's work also reminded me to be deliberate about model selection - not all LLMs are created equal, and the choice matters if you don't arbitrarily default to the latest frontier models.
What's on deck for tomorrow?
Thanks to how some data structures came together, I'm rearchitecting the metadata store. This lets me defer generating the basic, memory-style knowledge passed to the LLM until I'm closer to using it, which should prevent some ugly backtracking.
I keep relearning this: don't build perfect infrastructure for an MVP - ship the smallest thing that answers the question.
15 Sep 2025 6:50pm GMT
13 Sep 2025
Fedora People
Kevin Fenzi: Misc infra bits from 2nd week of sept 2025
Welcome to Saturday! Another week gone by. For some reason, for me at least, this week marked the end of the quiet of the previous few. It seemed like lots of people got back from summer vacations and wanted to discuss their new plans or ideas. That's wonderful to see, but it also makes for a busy week of replying, pondering and discussing.
Next (small) datacenter move planning underway
We have a small number of machines in a datacenter often referred to as the rdu2 community cage. With our big datacenter move earlier this year to rdu3, we are going to move the rdu2 community cage gear over to rdu3 to consolidate it.
There are only a few machines there, but there are two services people will notice: pagure.io and download-cc-rdu01. We are currently trying to see if we can arrange things so we can just migrate pagure.io to a new machine in rdu3 and switch over to it. If we can do that, downtime should be on the order of a few hours or less. If we cannot for some reason, the outage will be on the order of a day or so while the machine it's on is moved. download-cc-rdu01 will likely just have to be down for a day or so, which shouldn't be too bad since there are many other mirrors to pull from.
It's looking tentatively like this might occur in November. So, look for more information as we know it. :)
communishift upgrades
Our communishift openshift cluster ( https://docs.fedoraproject.org/en-US/infra/communishift/ ) has been lagging on updates for a while. Finally we got a notice that the 4.16.x release it was on was going to drop out of support. So, I contacted the openshift dedicated folks about it, and they responded in minutes (awesome!) that the upgrade from 4.16.x required moving from SDN to OVN networking, and that this was a thing customers should do themselves. Fine with me, I just didn't know.
So, after that I:
- Upgraded networking from SDN to OVN
- Upgraded from 4.16.x to 4.17.x
- Upgraded to cgroups v2
- Upgraded to 4.18.x
- Checked that we were not using any deprecated things
- Upgraded to 4.19.x
So, it's all up on the latest version and back on track with regular updates now. Let us know if you see any problems on it.
anubis testing in staging
A bunch more anubis testing in staging this last week. I worked on getting things working with our proxy network and using the native fedora package. Sadly the package I was testing with had a golang compile issue and just didn't work. I lost a fair bit of time trying to figure out what I had done wrong, but it wasn't me or our setup.
Luckily there is a package with a workaround pushed now, and hopefully work to sort it out once and for all. So, if you are testing in fedora, make sure you have the latest package.
Once that was solved, things went much more smoothly. I now have koji.stg, bodhi.stg and lists.stg all using it, and everything seems to work fine. You can see a pretty big drop in bandwidth on those services too.
Early next week I will add koschei and forgejo to testing, and then after the beta is out and we are out of freeze, I am going to propose we enable all those in production along with pagure.io.
f43 beta release next tuesday
Amazingly, we managed to hit the early date again and Fedora 43 Beta will be released next Tuesday. Thanks to everyone who worked so hard on this milestone.
We did run into an issue yesterday with f43 updates, and I thought I would expand on the devel-announce posting about it.
When beta is "go" we do a bunch of prep work: make sure we have a final nightly compose that matches the RC that was declared go, and then unfreeze updates for the f43 release. This means a ton of updates that were in updates-testing and blocked from going stable due to the beta freeze can suddenly go stable.
However, a while back we added, and made more consistent, the checks that bodhi uses when pushing updates stable. Before, it just saw that the update had a request for stable and pushed it. Now it checks all the things that should allow it to be pushed: has it passed gating checks (or have those been waived), has it spent the right amount of time in updates-testing, or has it gotten enough karma to go stable.
Also, when we unfreeze after beta, we switch the updates policy in bodhi to require more days in testing, as we are nearing the release.
What this all meant was that if you had an update submitted during the freeze, it spent 3 days in testing and requested stable. But we changed that value from 3 days to 7 (or 14 for critpath), and so when we tried to process those updates, bodhi kicked them out of the compose saying "sorry, not enough days in testing".
What we will probably end up doing is changing the requirements at the start of the beta freeze instead of at the end of it. That way, updates will need the new, higher number of days in testing from the start of the freeze.
So, if you had an update kicked out, it should be able to go stable after the required number of days (or karma!) is reached.
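To make the failure mode concrete, here is a toy sketch of the kind of eligibility check involved (my own illustration under stated assumptions, not bodhi's actual code): an update may go stable once it has either spent enough days in updates-testing or collected enough positive karma, so raising the required days mid-flight disqualifies updates that met the old threshold.

```python
def can_go_stable(days_in_testing, karma, required_days, stable_karma):
    """Hypothetical sketch of a bodhi-style stable-push check:
    enough time in updates-testing OR enough karma."""
    return days_in_testing >= required_days or karma >= stable_karma


# Met the 3-day requirement that applied during the freeze...
assert can_go_stable(3, 0, required_days=3, stable_karma=3)
# ...but no longer qualifies once the policy requires 7 days,
# which is why bodhi kicked such updates out of the compose.
assert not can_go_stable(3, 0, required_days=7, stable_karma=3)
# Enough karma still lets an update through regardless of days.
assert can_go_stable(3, 3, required_days=7, stable_karma=3)
```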
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/115198218305728577
13 Sep 2025 5:05pm GMT