20 Feb 2026

Fedora People

Fedora Community Blog: Community Update – Week 8 2026


This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.

Week: 16 Feb - 20 Feb 2026

Fedora Infrastructure

This team is taking care of day-to-day business regarding Fedora Infrastructure.
It's responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day-to-day business regarding Fedora releases.
It's responsible for releases, the retirement process of packages, and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

UX

This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 8 2026 appeared first on Fedora Community Blog.

20 Feb 2026 10:00am GMT

Fedora Magazine: Podman Test Days: Try the New Backend & Parallel Pulls


The Podman team and the Fedora Quality Assurance team are organizing a Test Week from Friday, February 27 through Friday, March 6, 2026. This is your chance to get an early look at the latest improvements coming to Podman and see how they perform on your machine.

What is Podman?

For those new to the tool, Podman is a daemonless, Linux-native engine for running, building, and sharing OCI containers. It offers a familiar command-line experience but runs containers safely without requiring a root daemon.

What's Coming in Podman 5.8?

The upcoming release includes updates designed to make Podman faster and more robust. Here is what you can look forward to, and what you can try out during this Fedora Test Day.

A Modern Database Backend (SQLite)

Podman is upgrading its internal storage logic by transitioning to SQLite. This change modernizes how Podman handles data under the hood, aiming for better stability and long-term robustness.

Faster Parallel Pulls

This release brings optimizations to how Podman downloads image layers, specifically when pulling multiple images at the same time. For a deep dive into the engineering behind this, check out the developer blog post on Accelerating Parallel Layer Creation.

Experiment and Explore: Feel free to push the system a bit and try pulling several large images simultaneously to see if you notice the performance boost. Beyond that, please bring your own workflows. Don't just follow the wiki instructions. Run the containers and commands you use daily. Your specific use cases are the best way to uncover edge cases that standard tests might miss.
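One simple way to exercise the parallel-pull path is to launch several pulls as background jobs. This is only a sketch: the image names below are examples, so substitute large images from your own workflow.

```shell
# Pull several images at once; each pull runs as a background job.
# Image names are examples only -- use large images you actually work with.
images="registry.fedoraproject.org/fedora:41
quay.io/podman/hello:latest"

for img in $images; do
  podman pull "$img" &
done
wait   # returns once all background pulls have finished
echo "all pulls attempted"
```

Watching the layer progress bars while several pulls run concurrently is a quick way to see whether the new parallel layer handling helps on your machine.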

What do I need to do?

Details on how to test and report results are available at the Wiki Test Day site for Podman 5.8 test day:
https://fedoraproject.org/wiki/Test_Day:2026-02-27_Podman_5.8

Test Week runs from Friday, February 27 through Friday, March 6, 2026

Thank you for taking part in the testing of Fedora Linux 44!

20 Feb 2026 8:00am GMT

Felipe Borges: GNOME is participating in Google Summer of Code 2026!


Potential GSoC contributors may reach out with questions about our project ideas or GNOME internships in general. Please direct them to gsoc.gnome.org to learn more.

You can find our proposed project ideas at gsoc.gnome.org/2026.

Project proposal submissions are open from March 16th to 31st.

20 Feb 2026 6:58am GMT

19 Feb 2026


Peter Czanik: New toy in the house for AI, gaming, Linux, Windows and FreeBSD

19 Feb 2026 11:56am GMT

Peter Czanik: UDP reliability improved in syslog-ng Debian packaging

19 Feb 2026 10:19am GMT

Fedora Community Blog: Master Podman 5.8: Join Fedora Test Week


Want to learn the latest container tech? From February 27 to March 6, 2026, you can join the Podman 5.8 Test Day. It is the perfect time to explore new features and see how the future of Fedora is built.

What is new?

Why join?

Your setup is unique. By running Podman 5.8 on your machine, you make sure the final version works perfectly for everyone. It is a great way to learn by doing and to see how top-tier open-source software is made.

Start here

We have prepared easy-to-follow steps for you here: https://fedoraproject.org/wiki/Test_Day:2026-02-27_Podman_5.8

The post Master Podman 5.8: Join Fedora Test Week appeared first on Fedora Community Blog.

19 Feb 2026 10:00am GMT

18 Feb 2026


Brian (bex) Exelbierd: Replacing my compact calendar spreadsheet with an ICS-powered web app


I've used some form of DSri Seah's Compact Calendar for over seven years. The calendar is a lovingly designed single-page view of the entire year, organized into Monday-through-Sunday weeks with no breaks between months.

The point of the format is simple: my normal calendar is great at telling me what I'm doing on Tuesday. What it's terrible at is answering planning questions that are above the day level, such as:

For a long time, my compact calendar was a spreadsheet. That worked until it didn't.

The problem I actually needed to solve

The spreadsheet version served me well for years, but life got more complicated.

My kid is getting older, which means more activities to track: summer camps, school breaks, etc. My partner and I no longer work for the same company, so we don't share the same corporate holidays, and as our roles have changed, so has the amount of travel we do. And, honestly, my spreadsheet has bespoke formulas that only I understand … on Thursdays when there is a full moon.

My partner knows how to use a calendar app. She really doesn't want to learn a special spreadsheet for planning, and I don't blame her.

The real friction screaming out that there had to be a better way was the double-entry work. If my kid has summer camp in July, I'd put it on the family calendar - and then manually mark those weeks on my compact calendar spreadsheet. Two sources of truth means one of them is eventually wrong.

So the job wasn't "build a better calendar." It was: keep the year-at-a-glance view, but make the calendar app the source of truth.

The shape of the solution

I decided to build a web version of the compact calendar that could read directly from standard ICS calendar feeds.

Put the summer camp on the shared calendar once. The compact calendar picks it up automatically.

And if this was going to be something my partner and I actually used together, it needed two things:

What the tool does

The calendar renders a full year on a single page. Each row is one week, Monday through Sunday.

Parallel to the block of weeks running down the page is a column for displaying committed events and a second for displaying possible events.

The tool uses color to signal status at a glance:

Here's what the full-year view looks like with demo data loaded:

A full-year compact calendar view with one row per week (Monday through Sunday), with committed events shown in green, possible events in yellow, public holidays in red, and overlaps highlighted with a yellow border.

Inputs: URL, file, or demo

While there is demo data available in the system, the key comes from loading your own data. You can choose two different kinds of sources:

We're an Apple household so our calendars live in iCloud, but the tool doesn't care about your calendar provider. Anything that produces a standard ICS feed works.

My practical workflow is two shared calendars in Apple Calendar:

Both are published as webcal URLs, and the compact calendar fetches them and renders the year view. Using my shared calendar works because the app ignores events that aren't multi-day, all-day blocks - so dentist appointments don't drown out the year view. You can optionally include single-day all-day events if that helps you.

The setup controls are intentionally simple:

Configuration controls showing a country dropdown (for public holidays) and two inputs for selecting the committed and possible calendar sources.

The tech (and the annoying part)

This is a vanilla JavaScript app built with Vite, hosted on Azure Static Web Apps. No framework - just DOM manipulation, a CSS file, and under 500 lines of main application code.

The interesting technical problem was CORS.

Calendar providers like iCloud don't set CORS headers on their published feeds, which means a browser can't fetch them directly. The solution is a small Azure Function that acts as a proxy:

The proxy doesn't store or log anything. It's a pass-through.

I built the app with an AI coding agent. I provided direction and made decisions, but I didn't hand-write every line. For this kind of tool, I'm comfortable with that. It's a static site that renders calendar data client-side, and the risk profile is low. Additionally, nothing in this code represents a new problem or a novelty. This is bog-standard code, and the agent handled the boilerplate well for this project.

Importantly, even though I could have written this code myself, I wouldn't have. I probably would have gotten myself caught in a bit of analysis paralysis over frameworks. But more importantly, writing a lot of this code is just boring code to write. The AI agent has allowed me to solve my own problem, and that's the part that matters to me. I didn't have to suddenly become more disciplined about spreadsheets or get my family dragged onto a tool that really only speaks to me. Instead, I was able to change the shape of the problem and make it more solvable within the context of the humans involved.

Privacy and the honest trade-off

All your data stays in your browser. The app stores the URLs you're loading, your selected country, and cached holiday data in local storage. This is purely functional and not for tracking.

Calendar URLs necessarily have to go through the server-side proxy because browsers won't fetch them directly. The proxy is a stateless pass-through - I don't persist calendar data in the function or in your browser. Calendar URLs are sent via POST request body rather than query parameters, which means they aren't captured in Azure's platform-level request logs. Error logging includes only the target hostname (e.g., "iCloud fetch failed"), never the full URL or authentication tokens. If your calendar URL contains authentication tokens (iCloud URLs do), understand that the proxy briefly sees them in transit.

Try it out

The calendar is live at cc.bexelbie.com. You can load the built-in demo data to explore without connecting your own calendars - select "Demo" from either input dropdown.

The source is on GitHub at bexelbie/online-compact-calendar. If you have ideas or find bugs, open an issue.

On first visit, there's a banner that points you at settings:

A first-run welcome banner that tells the user to use the gear icon to configure the app.

What's next

I'm going to live with it for a while before adding features. The spreadsheet served me for seven years with almost no changes.

18 Feb 2026 1:30pm GMT

Vojtěch Trefný: Filtering Devices with LVM Devices File


To control which devices LVM can work with, it was always possible to configure filtering in the devices section of the /etc/lvm/lvm.conf configuration file. But filtering devices this way was not very simple and could lead to problems when using paths like /dev/sda, which are not stable. Many users also didn't know this possibility existed, and while this type of filtering can be applied to a single command with the --config option, it is not very user-friendly. This all changed recently with the introduction of the new configuration file /etc/lvm/devices/system.devices and the corresponding lvmdevices command in LVM 2.03.12. A new option, --devices, was also added to the existing LVM commands as a quick way to limit which devices one specific command can use.

LVM Devices File

As mentioned above, there is a new /etc/lvm/devices/system.devices configuration file. When this file exists, it controls which devices LVM is allowed to scan. Instead of relying on matching the device path, the devices file uses stable identifiers such as the WWID, serial number, or UUID.

A devices file on a simple system with a single physical volume on a partition would look like this:

# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 187757 at Fri Feb 13 16:44:45 2026
# HASH=1524312511
PRODUCT_UUID=4d58d0c1-8b67-4fa6-a937-035d2bfbb220
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda2 DEVNAME=/dev/sda2 PVID=rYeMgwy0mO0THDagB6k8mZkoOSqAWfte PART=2

When the devices file is enabled, LVM will only scan and operate on devices listed in it. Any device not present in the file is invisible to LVM, even if it has a valid PV header.

This is the biggest change brought in with this feature. The old lvm.conf based filters were always optional and LVM always scanned all devices in the system, unless told otherwise. This could cause problems on systems with many disks, where LVM (especially during boot) could take a long time scanning devices that did not even "belong" to it.

By default, the LVM devices file is enabled with the latest versions of LVM and on systems without preexisting volume groups, creating new LVM setups with commands like pvcreate or vgcreate will automatically add the new physical volumes to the devices file. If desired, this feature can be disabled by setting use_devicesfile=0 in lvm.conf or by simply removing the existing devices file. On systems without the devices file, LVM will simply scan all devices in the system the same way it did before introduction of this configuration file.
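If you want to verify the state on your own system, something like the following should work (a sketch; lvmconfig ships with lvm2, and the fallback echo just makes it safe to run where lvm2 isn't installed):

```shell
# Print the current setting; falls back to a note if lvm2 isn't installed.
lvmconfig devices/use_devicesfile 2>/dev/null || echo "lvmconfig unavailable"

# To disable the devices file feature, set this in /etc/lvm/lvm.conf:
#   devices {
#       use_devicesfile = 0
#   }
```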

Managing Devices with lvmdevices and vgimportdevices

On most newly installed systems with LVM, the devices file should already be present and populated, but you might want to create it later on systems installed with an older version of LVM, or manage some devices manually. It is possible to edit system.devices by hand, but a new command, lvmdevices, was added for simple management of the file.

To import all devices of an existing volume group, use vgimportdevices <vgname>; to import the devices of all volume groups in the system, use vgimportdevices -a.

A single physical volume can be added to the file with lvmdevices --adddev and removed with lvmdevices --deldev.

To check all entries in the devices file, lvmdevices --check can be used and any issues found by the check command can be fixed with lvmdevices --update.
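Put together, a typical management session might look like this. This is only a sketch: the device path is an example, and the commands need root plus lvm2 >= 2.03.12, so each line falls back to a note when run elsewhere.

```shell
vgimportdevices -a            || echo "needs root and lvm2"    # import PVs of all existing VGs
lvmdevices --adddev /dev/sdb1 || echo "example device only"    # add a single PV
lvmdevices --deldev /dev/sdb1 || echo "example device only"    # ...and remove it again
lvmdevices --check            || echo "check reported issues"  # look for stale entries
lvmdevices --update           || echo "update skipped"         # fix what --check reported
```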

Backups

In the sample devices file above, you might have noticed the VERSION field. This is the current version of the file. LVM automatically makes a backup of the file with every change, and old versions can be found in the /etc/lvm/devices/backup directory. So if you make a mistake when changing the file with lvmdevices, you can simply restore a previous version of the file.

Overriding the Devices File and Filtering with Commands

Together with the devices file feature, a new option --devices was added to all LVM commands. This option allows specifying devices which are visible to the command. This overrides the existing devices file so it can be used either to restrict the command to work only on a subset of devices specified in the devices file or even to allow it to run on devices not specified in the file at all.

This option is also very useful when dealing with multiple volume groups with the same name. This is a known limitation of LVM - two volume groups with the same name cannot coexist in one system and LVM will refuse to work without renaming one of them. This can be a problem when dealing with cloned disks or backups. With --devices, commands like vgs can be restricted to "see" only one of the volume groups.
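As a sketch (the device paths below are examples, and the commands need root plus lvm2, hence the fallback echos), restricting a command to specific PVs looks like this:

```shell
# Show only the VG that lives on this PV, ignoring the devices file.
vgs --devices /dev/sdb1 || echo "needs root and lvm2"

# The option is repeatable for volume groups spanning several PVs.
pvs --devices /dev/sdc1 --devices /dev/sdc2 || echo "needs root and lvm2"
```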

Issue: Missing Volume Group

As mentioned above, when installing a new system with LVM, the devices used for the newly created volume groups will be added to the devices file. The Fedora (and RHEL) installer, Anaconda, will also add all other volume groups present during installation to the devices file, so these will also be visible in the installed system. The problems start when a device with a volume group is added to the system after installation. The volume group (and any logical volumes in it) is suddenly invisible. Even commands like vgs will simply ignore it, because its physical volumes are not listed in the devices file.

This can be a problem on dual boot systems with encryption. Because the second system's volume group is "hidden" by the encryption layer, it is not visible during installation and not added to the devices file. When the user unlocks the LUKS device in their newly installed system, they can't access their second system. Unfortunately in this situation, the only solution is to manually add the second system's volume group with vgimportdevices as described above.
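A recovery session might look like this. This is only a sketch: the device path, mapping name, and volume group name are placeholders for your own setup, and each line falls back to a note when run without root or the relevant tools.

```shell
# Unlock the second system's LUKS container...
cryptsetup open /dev/sdb3 other-root || echo "illustrative only"

# ...then add its volume group's physical volumes to the devices file.
vgimportdevices fedora_other || echo "illustrative only"
```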

Conclusion

The LVM devices file provides a cleaner and more reliable way to control which devices LVM uses, replacing the old lvm.conf based filtering with stable device identifiers and simple management through the lvmdevices command. Overall, for most users the devices file should work transparently without any manual configuration needed.

18 Feb 2026 7:13am GMT

17 Feb 2026


Fedora Magazine: Mrhbaan Syria! Fedora now available in Syria

A dark grey banner featuring the Syrian Independence flag alongside the text "Now available in Syria", "Fedora", and the Syrian Arabic phrase "في داركم" below it. The background has a subtle triangular pattern.

Mrhbaan, Fedora community! 👋 I am happy to share that as of 10 February 2026, Fedora is now available in Syria. Last week, the Fedora Infrastructure Team lifted the IP range block on IP addresses in Syria. This action restores download access to Fedora Linux deliverables, such as ISOs. It also restores access from Syria to Fedora Linux RPM repositories, the Fedora Account System, and Fedora build systems. Users can now access the various applications and services that make up the Fedora Project. This change follows a recent update to the Fedora Export Control Policy. Today, anyone connecting to the public Internet from Syria should once again be able to access Fedora.

This article explains why this is happening now. It also covers the work behind the scenes to make this change happen.

Why Syria, why now?

You might wonder: what happened? Why is this happening now? I cannot answer everything in this post. However, the story begins in December 2024 with the fall of the Assad regime in Syria. A new government took control of the country. This began a new era of foreign policy in Syrian international relations.

Fast-forward to 18 December 2025. The United States signed the National Defense Authorization Act for Fiscal Year 2026 into law. This law repealed the 2019 Caesar Act sanctions. This action removed Syria from the list of OFAC embargoed countries. The U.S. Department of the Treasury maintains this list.

This may seem like a small change. Yet, it is significant for Syrians. Some U.S. Commerce Department regulations remain in place. However, the U.S. Department of the Treasury's policy change now allows open source software availability in Syria. The Fedora Project updated its stance to welcome Syrians back into the Fedora community. This matches actions taken by other major platforms for open source software, such as Microsoft's GitHub.

Syria & Fedora, behind the scenes

Opening the firewall to Syria took seconds. However, months of conversations and hidden work occurred behind the scenes to make this happen. The story begins with a ticket: Zaid Ballour (@devzaid) opened Ticket #541 with the Fedora Council on 1 September 2025. This escalated the issue and prompted a closer look at the changing political situation in Syria.

Jef Spaleta and I dug deeper into the issue. We wanted to understand the overall context. The United States repealed the 2019 Caesar Act sanctions in December 2025. This indicated that the Fedora Export Control Policy might be outdated.

During this time, Jef and I spoke with legal experts at Red Hat and IBM. We reviewed the situation in Syria. This review process took time. We had to ensure compliance with all United States federal laws and sanctions. The situation for Fedora differs from other open source communities. Much of our development happens within infrastructure that we control. Additionally, Linux serves as digital infrastructure. This context differs from a random open source library on GitHub.

However, the path forward became clear after the repeal of the 2019 Caesar Act. After several months, we received approval. Fedora is accessible to Syrians once again.

Opening the door to Syria

Some folks may have noticed the Fedora Infrastructure ticket last week. It requested the removal of the firewall block. We also submitted a Fedora Legal Docs Merge Request to change the Fedora Export Control Policy.

We wanted to share this exciting announcement now. It aligns with our commitment to the Fedora Project vision:

"The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities."

We look forward to welcoming Syrians back into the Fedora community and the wider open source community at large. Mrhbaan!

17 Feb 2026 8:00am GMT

14 Feb 2026


Kevin Fenzi: misc fedora bits 2nd week of feb 2026

Scrye into the crystal ball

Another weekly recap of happenings around fedora for me.

Strange long httpd reload times on proxy11

I spent a fair bit of time looking at one of our proxies. We have them all do a reload (aka 'graceful restart') every hour when we update a ticketkey on them. For the vast majority of them, that's fine and works as expected. However, proxy11 decided to start taking a while (like 12-15 seconds) to reload, causing our monitoring to alert that it was down... then back up.

In the end, it seemed the problem was somehow related to some old TLS certificates that were present but not used anywhere. All I can think of is that it's doing some kind of parsing of all certs and somehow those old ones caused it undue processing time. I removed those old certs and reload times went way back down again.

I'm tempted to try and figure out what it's doing exactly here, but I already spent a fair bit of time on it and it's working again now, so I guess I will just shrug and move on.

Anubis and download servers

A while back I had to hurriedly deploy anubis in front of our download servers. This was due to the scrapers deciding to just download every rpm / iso from every Fedora release since the dawn of time at massive concurrency. This was saturating one of our 10G links completely, and making another somewhat full. So, I deployed anubis and it dropped things back to 'normal' again.

Fast forward to this last week, and my rush in deploying anubis came back to bite me. We have a CloudFront distribution that uses our download servers as its 'origin'. Then we point all AWS network blocks to use that for any Fedora instances in AWS. This is a win for us, as everything for them is cached on the AWS side, saving bandwidth, and a win for AWS users, as that traffic is 'local' to them, so it's faster and they don't get billed for ingress either.

Last week, anubis started blocking CloudFront, so users in AWS would get an anubis challenge page instead of the actual content they were expecting. But why did this just happen now? Well, as near as I could determine, someone/scrapers were hitting the CloudFront endpoints and crawling our download server (fine, no problem there), but then they hit a directory that they handled poorly.

The directory was used/last updated about 11 years ago, with a readme file explaining that the content had moved and was no longer there. Great. However, it also had the previous subdirectories as links to '.' (i.e., the current directory). Since scrapers don't use any of the 20 years of crawling code, and instead just brute force things, this resulted in a bunch of requests like:

GET /foo/
GET /foo/foo/
GET /foo/foo/foo/

and so on. These are all really small (just a directory listing), so that meant it could make requests really, really fast. So, at some point anubis started challenging those CloudFront connections and boom.

So, the problem with the hurried deployment I had made there was that the policy file I had deployed was not actually being used. I had allowed CloudFront, but it didn't seem to help any, and it took me far too long to figure out that anubis was starting up, printing one error about not being able to read the policy file, and just running with the default configuration. ;( It turned out to be a podman/SELinux interaction and is now fixed.

I also removed those . links and set that directory tree to just 403 all requests to it.
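The trap is easy to reproduce locally: a subdirectory symlinked back to '.' makes every deeper path resolve to the same directory, so a naive crawler never runs out of URLs. A minimal sketch:

```shell
# Build a throwaway directory with a self-referencing symlink.
demo=$(mktemp -d)
ln -s . "$demo/foo"

# Every level of nesting resolves to the same place:
ls "$demo/foo/foo/foo" > /dev/null && echo "infinite paths resolve"
```

Each extra /foo/ just loops back to the same directory listing, which is why the scraper could generate tiny requests at an enormous rate.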

Anubis and forge

Also this week, folks were reporting problems with our new forgejo forge. Anubis was doing challenges when people were trying to submit comments and it was messing them up.

In the end here, I just needed to adjust the config to allow POSTs through. At least right now scrapers aren't doing any POSTs, and just allowing those seems to fix the issues people were having.

Some more scrapers

Friday we had them hitting release-monitoring.org. This time it was what I am calling a 'type 0' scraper. It was all coming from one cloud IP and I could just block them.

This morning a bit ago, we had a group hit/find the 'search' button on koji.fedoraproject.org, taking it offline. I was able to block the endpoint for a few hours and they went away, but there is no telling if they will be back. These were the 'type 2' kind (a botnet using users' IPs/browsers from hundreds of thousands of different IPs).

I am sad that the end game here sounds like there's not going to be much of an open internet anymore, i.e., in self-defense, sites will all have to start requiring registration of some kind before working. I can only hope business models change before it comes to that.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116070476999694239

14 Feb 2026 6:20pm GMT

Robert Wright: FOSDEM 2026

14 Feb 2026 8:00am GMT

13 Feb 2026


Peter Czanik: The syslog-ng Insider 2026-02: stats-exporter; blank filter; Kafka source

13 Feb 2026 10:34am GMT

Fedora Community Blog: Community Update – Week 07 2026


This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.

Week: 09 - 13 February 2026

Fedora Infrastructure

This team is taking care of day-to-day business regarding Fedora Infrastructure.
It's responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day-to-day business regarding Fedora releases.
It's responsible for releases, the retirement process of packages, and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

UX

This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update - Week 07 2026 appeared first on Fedora Community Blog.

13 Feb 2026 10:00am GMT

Remi Collet: ⚙️ PHP version 8.4.18 and 8.5.3


RPMs of PHP version 8.5.3 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.18 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noticed:

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

13 Feb 2026 5:42am GMT

12 Feb 2026


Christof Damian: Friday Links 26-06

12 Feb 2026 11:00pm GMT

Brian (bex) Exelbierd: Building a tiny ephemeral draft sharing system on Hedgedoc


This yak is now shaved!

me

I've been working on two submissions I want to put into the CFP for installfest.cz and had them at a "man it'd be nice to have someone else read and comment on this" level of done. Normally when this happens I have to psych myself up for it, both because receiving feedback can be hard and because I have to do a format conversion. I tend to write in markdown in "all the places" and sharing a document for edits has typically meant pasting it into something like Google Docs or Office 365, where even if it still looks like markdown … it isn't.

And that's when the yak walked into the room. Instead of just pasting my drafts into Google Docs and getting on with the reviews, I decided I needed to delay getting feedback and build the markdown collaborative editing system of my dreams. Classic yak shaving - solving a problem you don't actually need to solve in order to eventually do the thing you originally set out to do. See What is Yak Shaving, a video by Matthew Miller, if you're unfamiliar.

When I am done, I then have to take this text back to where it was originally going, often in good clean markdown (this blog post is in markdown!). This rigmarole is tiring. I also dislike that the go-to tools for this had turned into an exercise in ensuring guests could access a document or collecting someone's login IDs for yet another system.

I knew there had to be a better way. Then it hit me. When markdown started to take off, a slew of collaborative markdown editing sites sprang up, often modeled on the older Etherpad. Several are still around. I looked at online options, as I tend to prefer using a service when I can so I don't get more sysadmin work to do.

I hit three snags in picking one:

  1. I don't like being on a free tier when I don't understand how it is supported. While I don't know that anyone in this space is nefarious, the world is trending in a specific direction. I don't mind paying, but this was also not going to generate enough value to warrant serious payments.
  2. The project that first came to mind for markdown collaboration went open core back in 2019. Open source business models are hard, and doing open core well is even harder. As you'll see below I had specific needs and I had a feeling I might run into the open core wall.
  3. One of the CFPs would actually benefit from implementing this as my example … bonus!

After examining a bunch of options, I settled on building something out of Hedgedoc. This was not an easy choice, and the likelihood of entering analysis paralysis was super high. So I decided to try to force this to fit on the free-tier Google Cloud instance I have been running for years. It is the tiny e2-micro burstable instance, a literal thimble of compute.

This ruled out a lot of options. Privacy-first options need more compute just to do encryption work. A bunch of options want a server database (Postgres and friends), and a single-person instance should be fine on SQLite, in my opinion. All roads now ran to Hedgedoc. It was the only option that could run on SQLite, tolerate my tiny VM, still give me collaborative markdown, and seemed to have every feature required, if I could make it work.

It wasn't all sunshine and happiness though. Hedgedoc is in the middle of writing version 2.0, which means 1.0 is frozen for anything except critical fixes and all efforts are focused on the future. Therefore, the documentation being a bit rough in places was something I was going to have to live with.

My core requirements were:

  1. Only I am allowed to create new notes
  2. Anyone with the "unguessable" URL can edit, and should not require an account to do so
  3. This should require next to zero system administration work and be easy to start and stop
  4. When I need more features, I should be able to extend this with a plugin for tools like Obsidian or Visual Studio Code.

And while it took longer than I'd hoped, it works. Here's how:

  1. Write yourself a configuration file for Hedgedoc

config.json:

{
  "production": {
    "sourceURL": "https://github.com/bexelbie/hedgedoc",
    "domain": "<url>",
    "host": "localhost",
    "protocolUseSSL": true,
    "loglevel": "info",
    "db": {
      "dialect": "sqlite",
      "storage": "/data/db/hedgedoc.sqlite"
    },
    "email": true,
    "allowEmailRegister": false,
    "allowAnonymous": false,
    "allowAnonymousEdits": true,
    "requireFreeURLAuthentication": true,
    "disableNoteCreation": false,
    "allowFreeURL": false,
    "enableStatsApi": false,
    "defaultPermission": "limited",
    "imageUploadType": "filesystem",
    "hsts": {
      "enable": true,
      "maxAgeSeconds": 31536000,
      "includeSubdomains": true,
      "preload": true
    }
  }
}

This sets a custom source URL for the fork I have made (more below), enables SSL, disables new account registration, and allows edits via unguessable URLs without requiring logins.
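Since a stray comma in this file means Hedgedoc won't read it the way you expect, it's worth sanity-checking the JSON before launch. A minimal sketch (my own suggestion, not part of the original setup), using Python's standard `json.tool` module:

```shell
# Catch JSON syntax errors in config.json before handing it to Hedgedoc;
# python3 -m json.tool exits non-zero on malformed input.
if python3 -m json.tool config.json > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config invalid or missing"
fi
```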

  2. Decide how you want to launch the container (I am using a quadlet) and provide some environment variables:
CMD_SESSION_SECRET="<secret>"
CMD_CONFIG_FILE=/hedgedoc/config.json
NODE_ENV=production

These just put it in production mode, point it at the config file, and provide the only secret required.
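For reference, a rootless Podman quadlet for this might look roughly like the following sketch. This is not the actual unit file from this setup; the image name, volume paths, and published port (3000 is Hedgedoc's default) are all assumptions:

```ini
# ~/.config/containers/systemd/hedgedoc.container (hypothetical rootless quadlet)
[Unit]
Description=Hedgedoc collaborative markdown editor

[Container]
Image=quay.io/hedgedoc/hedgedoc:latest
# Env vars (CMD_SESSION_SECRET etc.) kept out of the unit file
EnvironmentFile=%h/hedgedoc/env
Volume=%h/hedgedoc/config.json:/hedgedoc/config.json:Z
Volume=%h/hedgedoc/data:/data:Z
PublishPort=127.0.0.1:3000:3000

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```

Dropped into `~/.config/containers/systemd/`, a unit like this is picked up by `systemctl --user daemon-reload` and started with `systemctl --user start hedgedoc`.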

  3. You're basically done. I happen to have put mine behind a Cloudflare tunnel and updated the main page of the site, but those are pretty straightforward.
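If you're wondering where the value for `CMD_SESSION_SECRET` comes from: Hedgedoc just needs an unguessable string, so any decent source of randomness works. One common approach (an assumption on my part, not necessarily what was used here):

```shell
# 32 random bytes, hex-encoded: a 64-character session secret.
openssl rand -hex 32
```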

More Yak Shaving

Naturally I planned to launch it, create my user ID via the CLI, and share my CFP submissions with the folks I wanted reviews from. Narrator: Naturally, that's not what happened.

I decided to push YAGNI¹ out of the way and NEED IT! Specifically, I forked the v1 code into a repository to add some features. The upstream is unlikely to want any of these, so I will have to carry the patches. What I did:

  1. Hedgedoc will do color highlighting and gutter indicators so you can see which author added what text. Unfortunately, it didn't seem to be working: I was getting weak indicators (underlines instead of highlighting) and often nothing at all. So I fixed that.
  2. The colors for authorship are chosen randomly. I am a bit past my prime in the seeing department, and it was hard to see the colors against the dark editor background, so I restricted the color choices to ones that contrast with it. It isn't perfect, but it is better.
  3. My particular setup involves a lot of guest editors. Normally I share with just a few folks, but sometimes with many. They'll all be anonymous. Hedgedoc doesn't track authorship colors for guests, so I patched in a system to generate color markings for anonymous editors.
  4. A feature I always loved in Etherpad was that you could temporarily hide the authorship colors when you just wanted to "read the document." So I added a button for that. While I was doing that, I discovered that there is a separate toggle to switch the editor into light mode, but I couldn't see it because the status bar was black and set to 0.2 opacity!! I fixed that too. Also, the status bar now switches when the editor switches.
  5. Comments, it turns out, are needed. So I coded in rudimentary support for CriticMarkup comments.
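For reference, CriticMarkup wraps comments in `{>> <<}` markers (per the CriticMarkup spec); the sentences below are made-up examples, not text from the actual drafts:

```markdown
This draft sentence could use some work.{>>Maybe tighten this up?<<}
This is {--really very --}concise. {>>I deleted the filler words.<<}
```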

I have other ideas, but instead I am going to stop and let YAGNI win for a while. Besides, hopefully 2.0 will ship soon and render all of this unneeded.

So there you go, now if you want to offer your assistance to help me write something, I'll send you a link and you can go to town on our shared work. If you want to see more about this, well, let's see if Installfest.cz thinks you should or not :D - and whether this yak decides to grow its hair back.

  1. YAGNI: You Ain't Gonna Need It - a philosophy that reminds us that features we dream up aren't needed until an actual use comes along (or a paying customer). This also applies to engineering for future ideas when those ideas aren't committed to yet.

12 Feb 2026 12:00pm GMT