20 Nov 2025

Fedora People

Fedora Infrastructure Status: Updates and Reboots

20 Nov 2025 10:00pm GMT

15 Nov 2025

Fedora People

Kevin Fenzi: infra weekly recap: Early November 2025

Scrye into the crystal ball

Well, it's been a few weeks since I made one of these recap blog posts. Last weekend I was recovering from some oral surgery, and the weekend before that I had taken PTO on Friday and was trying to be 'away'.

Lots of things happened in the last few weeks though!

tcp timeout issue finally solved!

As contributors no doubt know, we have been fighting a super annoying tcp timeout issue. Basically, sometimes requests from our proxies to backend services would just time out. I don't know how many hours I spent on this issue, trying everything I could think of, coming up with theories and then disproving them. Debugging was difficult because _most_ of the time everything worked as expected. Finally, after a good deal of pain, I was able to get a tcpdump showing that when it happens the sending side sends a SYN and the receiving side sees nothing at all.

This all pointed to the firewall cluster in our datacenter. We don't manage that; our networking folks do. It took some prep work, but last week they were finally able to update the firmware/OS in the cluster to the latest recommended version.

After that: The problem was gone!

I don't suppose we will ever know the exact bug that was happening here, but one final thing to note: when they did the upgrade, the cluster had over 1 million active connections. After the upgrade it had about 150k. So it seems likely that it was somehow not freeing resources correctly and dropping packets, or something along those lines.

I know this problem has been annoying to contributors. It's personally been very annoying to me; my mind kept focusing on it and not anything else. It kept me awake at night. ;(

In any case, finally solved!

There is one new outstanding issue that appeared after the upgrade: https://pagure.io/fedora-infrastructure/issue/12913. Basically, long-running koji cli watch tasks (watch-task / watch-logs) are getting a 502 error after a while. This does not affect the task in any way, just the watching of it. Hopefully we can get to the bottom of this and fix it soon.

outages outages outages

We have had a number of outages of late. They have been for different reasons, but it does make it frustrating trying to contribute.

A recap of a few of them:

  • AI scrapers continue to mess with us. Even though most of our services are behind anubis now, they find ways around that, like fetching css or js files in loops, hitting things that are not behind anubis, and generally making life sad. We continue to block things as we can. The impact here is mostly that src.fedoraproject.org is sensitive to high load, and we need to make sure to block things before they impact commits.

  • We had two outages (Friday 2025-11-07 and later Monday 2025-11-10) that were caused by a switch loop when I brought up a power10 lpar. This was due to the somewhat weird setup on the power10 lpars, where they shouldn't be using the untagged/native vlan at all, but a build vlan instead. The Friday outage took a while for us to figure out; the Monday outage was very short. All those lpars are correctly configured now and up and operating ok.

  • We had an outage on Monday (2025-11-10) where a set of crashlooping pods filled up our log server with tracebacks and generally messed with everything. The pod was fixed and the storage was cleared up.

  • We had some kojipkgs outages on Thursday (2025-11-13) and Friday (2025-11-14). These were caused by many requests for directory listings of some ostree objects directories. Those directories have ~65k files in them each, so apache has to stat ~65k files every time it gets one of those requests. But then cloudfront (which is making the request) times out after 30s and resends, so you get a load average of 1000 and very slow processing. For now we have put that behind varnish, so apache only has to generate the listing the first time for a given dir and the cached result can be served to everyone else. If that doesn't fix it, we can look at just disabling indexes there, but I am not sure of the implications.

We had a nice discussion in the last fedora infrastructure meeting about tracking outages better and doing an RCA on them after the fact, to make sure we solved the underlying issue or at least made it less likely to happen again.

I am really hoping for some non outage days and smooth sailing for a bit.

power10s

I think we are finally done with the power10 setup. Many thanks again to Fabian for figuring out all the bizarre and odd things we needed to do to configure the servers as close to the way we want them as possible.

The fedora builder lpars have all been up and operating since last week. The buildvm-ppc64les on them should have more memory and cpus than before and hopefully are faster for everyone. We have staging lpars now as well.

The only thing left to do is to get the coreos builders installed. The lpars themselves are all set up and ready to go.

rdu2-cc to rdu3 datacenter move

I haven't really been able to think about this due to the outages and the timeout issue, but things will start heating up again next week.

It seems unlikely that we will get our new machine in time to matter now, so I am moving to a new plan: repurposing another server there to migrate things to. I plan to try and get it set up next week and sync pagure.io data to a new pagure instance there. Depending on how that looks, we might move to it the first week of December.

There's so much more going on, but those are some highlights I recall...

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115554940851319300

15 Nov 2025 5:08pm GMT

14 Nov 2025

Fedora People

Fedora Community Blog: Infra and RelEng Update – Week 46 2025


This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check out the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 10th - 14th November 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us on the #admin:fedoraproject.org channel on Matrix.

The post Infra and RelEng Update - Week 46 2025 appeared first on Fedora Community Blog.

14 Nov 2025 10:00am GMT

Fedora Magazine: Fedora at Kirinyaga University – Docs workshop

Kirinyaga University students group photo

We did it again: Fedora at Kirinyaga University in Kenya. This time, we didn't just introduce what open source is; we showed students how to participate and actually contribute in real time.

Many students had heard of open source before, but were not sure how to get started or where they could fit. We kept it hands-on and began with a simple explanation of what open source is: people around the world working together to create tools, share knowledge, and support each other. Fedora is one of these communities. It is open, friendly, and built by different people with different skills.

We talked about the many ways someone can contribute, even without deep technical experience. Documentation, writing guides, design work, translation, testing software, and helping new contributors are all important roles in Fedora. Students learned that open source is not only for "experts." It is also for learners. It is a place to grow.

Hands-on Documentation Workshop

A room full of Kirinyaga students at the workshop

After the introduction, we moved into a hands-on workshop. We opened Fedora Docs and explored how documentation is structured. Students learned how to find issues, read contribution instructions, and make changes step-by-step. We walked together through:

By the end of the workshop, students had created actual contributions that went to the Fedora project. This moment was important. It showed them that contributing is not something you wait to do "someday." You can do it today.

"This weekend's Open Source Event with Fedora, hosted by the Computer Society Of Kirinyaga, was truly inspiring! 💻

Through the guidance of Cornelius Emase, I was able to make my first pull request to the Fedora Project Docs - my first ever contribution to the open-source world. 🌍"
- Student at Kirinyaga University

Thank you note

Huge appreciation to:

And to everyone who played a part - even if your name isn't listed here, I see you. You made this possible.

Growing the next generation

The students showed interest, curiosity, and energy. Many asked how they can continue contributing and how to connect with the wider Fedora community. I guided them to Fedora Docs, Matrix community chat rooms, and how they can be part of the Fedora local meetups here in Kenya.

We are introducing open source step-by-step in Kenya. There is a new generation of students who want to be part of global technology work. They want to learn, collaborate, and build. Our role is to open the door and walk together (I have a Discourse post on this; you're welcome to add your views).

A group photo of students after the workshop

What Comes Next

This event is part of a growing movement to strengthen Fedora's presence in Kenya. More events will follow so that learning and contributing can continue.

We believe that open source becomes strong when more people are included. Fedora is a place where students in Kenya can learn, grow, share, and contribute to something global.

We already had a Discourse thread running for this event - from the first announcement, planning, and budget proposal, all the way to the final workshop. Everything happened in the open. Students who attended have already shared reflections there, and anyone who wants to keep contributing or stay connected can join the conversation.

You can check the event photos submitted here on Google Photos (sorry, that's not FOSS :)).

Cornelius Emase,
Your Friend in Open Source (Open Source Freedom Fighter)

14 Nov 2025 8:00am GMT

13 Nov 2025

Fedora People

Jiri Eischmann: How We Streamed OpenAlt on Vhsky.cz


The blog post was originally published on my Czech blog.

When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it's a skill we should maintain ourselves.

To be honest, it's bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it's common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt, a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn't quite the stress test needed to prove we could handle broadcasting an entire conference.

For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don't have insight into this part of the process, so I won't focus on it. Michal's job was to get the streams to our server; our job was to get them to the viewers.

OpenAlt's AV backstage with the streams running. Author: Michal Stanke.

Stress Test

We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it's not entirely dedicated to PeerTube. It has to share the server with other OSCloud services (OSCloud is a community hosting project for open source web services).

We hadn't been limited by performance until then, but seven 1440p streams were truly at the edge of the server's capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don't change the resolution, you still need to transcode the video to leverage useful distribution features, which I'll cover later. The 480p resolution was intended for mobile devices and slow connections.

Remote Runner

We knew the Vhsky.cz server alone couldn't handle it. Fortunately, PeerTube allows for the use of "remote runners". The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it's not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn't tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we'd better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

Load on the runner server during the stress test. Author: Adam Štrauch.

Smart Video Distribution

Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn't much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn't grow at the same rate.

A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server's bandwidth; for a live stream, it was 75%:

If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
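
As a rough sanity check of that estimate, here is my own back-of-the-envelope arithmetic (the assumptions are mine: the 100-viewer ceiling above implies roughly 10 Mbps per 1440p viewer, and the developers' test suggests the server supplies only about a quarter of each live stream):

$$\frac{(1000 - 200)\ \text{Mbps}}{10\ \text{Mbps/viewer} \times (1 - 0.75)} = \frac{800}{2.5} = 320\ \text{viewers}$$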

Live Operation

On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.

In practice, the server data download savings were large even with just 5 peers on a single stream and resolution.

Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don't think is a problem. After all, we're not broadcasting a sports event.

Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

Minor Problems

We did, however, run into minor problems and gained experience that one can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a higher cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

Subjectively, even 480p wasn't a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn't even have noticed as a problem if I wasn't focusing on it. I could imagine streaming only in 480p if necessary. But it's clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn't support bulk uploads. However, tools exist for this, and we'd like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn't the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For those, a bar is displayed with a message that the video failed to save to external storage, even though it is clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

A small interlude - my talk about PeerTube at this year's OpenAlt. Streamed, of course, via PeerTube:

Thanks and Support

I think that for our very first time doing this, it turned out very well, and I'm glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud's accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google's infrastructure, but it doesn't run for free either.

13 Nov 2025 11:37am GMT

12 Nov 2025

Fedora People

Fedora Community Blog: F43 election nominations now open


Today, the Fedora Project begins the nomination period during which we accept nominations to the "steering bodies" of the following teams:

This period is open until Wednesday, 2025-11-26 at 23:59:59 UTC.

Candidates may self-nominate. If you nominate someone else, check with them first to ensure that they are willing to be nominated before submitting their name.

Nominees do not need to complete an interview yet; however, interviews are mandatory for all nominees. Nominees who do not have their interview ready by the end of the interview period (2025-12-03) will be disqualified and removed from the election. Nominees will submit questionnaire answers via a private Pagure issue after the nomination period closes on Wednesday, 2025-11-26. The F43 Election Wrangler (Justin Wheeler) will publish the interviews to the Community Blog before the start of the voting period on Friday, 2025-12-05.

The elected seats on FESCo are for a two-release term (approximately twelve months). For more information about FESCo, please visit the FESCo docs.

The full schedule of the elections is available on the Elections schedule. For more information about the elections process, see the Elections docs.

The post F43 election nominations now open appeared first on Fedora Community Blog.

12 Nov 2025 1:18pm GMT

Brian (bex) Exelbierd: Managing a manual Alexa Home Assistant Skill via the Web UI


My house has a handful of Amazon Echo Dot devices that we mostly use for timers, turning lights on and off, and playing music. They work well and have been an easy solution. I also use Home Assistant for some basic home automation and serve most everything I want to verbally control to the Echo Dots from Home Assistant.

I don't use the Nabu Casa Home Assistant Cloud Service. If you're reading this and you want the easy route, consider it - the cloud service is convenient. One benefit of the service is that there is a UI toggle to mark which entities/devices to expose to voice assistants.

If you take the manual route, like I do, you must set up a developer account and an AWS Lambda function, and maintain a hand-coded list of entity IDs in a YAML file.

- switch.living_room
- switch.table
- light.kitchen
- sensor.temp_humid_reindeer_marshall_temperature
- sensor.living_room_temperature
- sensor.temp_humid_rubble_chase_temperature
- sensor.temp_humid_olaf_temperature
- sensor.ikea_of_sweden_vindstyrka_temperature
- light.white_lamp_bulb_1_light
- light.white_lamp_bulb_2_light
- light.white_lamp_bulb_3_light
- switch.ikea_smart_plug_2_switch
- switch.ikea_smart_plug_1_switch
- sensor.temp_humid_chase_c_temperature
- light.side_light
- switch.h619a_64c3_power_switch

A list of entity IDs to expose to Alexa.

Fun, right? Maintaining that list is tedious. I generally don't mess with my Home Assistant installation very often. Therefore, when I need to change what is exposed to Alexa or add a new device, finding the actual entity_id is annoying. This is not helped by how good Home Assistant has gotten at showing only friendly names in most places. I decided there had to be a better way to do this than manually maintaining YAML.

After some digging through docs and the source, I found there isn't a built-in way to build this list by labels, categories, or friendly names. The Alexa integration supports only explicit entity IDs or glob includes/excludes.
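
For reference, the glob route looks roughly like this. This is only a sketch of the filter block (I believe include_entity_globs is part of Home Assistant's standard entity filter schema, but check the current Alexa integration docs), and the patterns are purely illustrative, reusing names from my list above:

alexa:
  smart_home:
    filter:
      include_entity_globs:
        # expose whole families of similarly named entities
        - "light.white_lamp_bulb_*"
        - "switch.ikea_smart_plug_*"

Globs help when devices share a naming prefix, but you are still hand-editing YAML whenever a new device doesn't match an existing pattern.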

So I worked out a way to build the list with a Home Assistant automation. It isn't fully automatic - there's no trigger that runs right before Home Assistant reboots - and you still need to restart Home Assistant when the list changes. But it lets me maintain the list by labeling entities rather than hand-editing YAML.

After a few experiments and some (occasionally overly imaginative) AI help, I arrived at this process. There are two parts.

Prep and staging

In your configuration.yaml enable the Alexa Smart Home Skill to use an external list of entity IDs. I store mine in /config/alexa_entities.yaml.

alexa:
  smart_home:
    locale: en-US
    endpoint: https://api.amazonalexa.com/v3/events
    client_id: !secret alexa_client_id
    client_secret: !secret alexa_client_secret
    filter:
      include_entities:
         !include alexa_entities.yaml

Add two helper shell commands:

shell_command:
  clear_alexa_entities_file: "truncate -s 0 /config/alexa_entities.yaml"
  append_alexa_entity: '/bin/sh -c "echo \"- {{ entity }}\" >> /config/alexa_entities.yaml"'

A script to find the entities

Place this script in scripts.yaml. It does three things:

  1. Clears the existing file.
  2. Finds all entities labeled with the tag you choose (I use "Alexa").
  3. Appends each entity ID to the file.

export_alexa_entities:
  alias: Export Entities with Alexa Label
  sequence:
    # 1. Clear the file
    - service: shell_command.clear_alexa_entities_file

    # 2. Loop through each entity and append
    - repeat:
        for_each: "{{ label_entities('Alexa') }}"
        sequence:
          - service: shell_command.append_alexa_entity
            data:
              entity: "{{ repeat.item }}"
  mode: single

Why clear the file and write it line by line? I couldn't get any file or notify integration to write to /config, and passing a YAML list to a shell command collapses whitespace into a single line. Reformatting that back into proper YAML without invoking Python was painful, so I chose to truncate and append line-by-line. It's ugly, but it's simple and it works.
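
If you would rather not remember to run the script by hand, a time-triggered automation can keep the file current, so that whenever you do restart, the latest labels are picked up. This is a minimal sketch assuming the script above; the alias and schedule are arbitrary placeholders, and a restart is still required for changes to take effect:

automation:
  - alias: "Refresh Alexa entity list nightly"
    # Rebuild /config/alexa_entities.yaml so a later restart picks up
    # any label changes made during the day.
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: script.export_alexa_entities
    mode: single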

The result is that I can label entities in the UI and avoid tedious bookkeeping.

Home Assistant entity details screen showing an IKEA smart plug named 'tree' with the Alexa label applied in the Labels section

12 Nov 2025 12:40pm GMT

11 Nov 2025

Fedora People

Vedran Miletić: The follow-up

11 Nov 2025 6:43pm GMT

Vedran Miletić: The academic and the free software community ideals

11 Nov 2025 6:43pm GMT

Vedran Miletić: Should I do a Ph.D.?

11 Nov 2025 6:43pm GMT

Vedran Miletić: Open-source magic all around the world

11 Nov 2025 6:43pm GMT

Vedran Miletić: Markdown vs reStructuredText for teaching materials

11 Nov 2025 6:43pm GMT

Vedran Miletić: Joys and pains of interdisciplinary research

11 Nov 2025 6:43pm GMT

Vedran Miletić: Free to know: Open access and open source

11 Nov 2025 6:43pm GMT

Vedran Miletić: Fly away, little bird

11 Nov 2025 6:43pm GMT

Vedran Miletić: Celebrating Graphics and Compute Freedom Day

11 Nov 2025 6:43pm GMT