22 May 2025

feedFedora People

Fedora Infrastructure Status: Updates (esp. Wiki) and Reboots

22 May 2025 9:00pm GMT

20 May 2025

feedFedora People

Fedora Community Blog: Mindshare Interview: Akashdeep Dhar (t0xic0der)

Fedora Community Blog's avatar

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Tuesday, 20 May 2025, and closes promptly at 23:59:59 UTC on Monday, 2 June 2025.

Interview with Akashdeep Dhar

Questions

What is your background in Fedora? What have you worked on and what are you doing now?

I started contributing to the Fedora Project about five years back, and then I slowly moved on to maintaining Fedora Websites and Apps, both as a volunteer engineer and an objective representative to the Fedora Council. I have dabbled with the mentorship endeavours of the Fedora Project community every now and then, taking on a bunch of mentees both during formalized mentoring programmes and structured infrastructure initiatives.

I have been organizing and participating in various events (virtual and in-person alike) where I represented the Fedora Project either with the presentations I deliver or with the conversations I have, e.g., Fedora Hatch, Fedora Mentor Summit, FOSDEM, CentOS Connect, DevConf.IN, etc. I author posts on Fedora Magazine, Fedora Community Blog, Fedora Discussions, and my personal website every now and then on various topics.

I have also previously served in the Fedora Council as an elected representative and provided the research behind the Git Forge selection for the Fedora Project. Currently, I work on researching, developing, and maintaining applications and services for Fedora Infrastructure and CentOS Infrastructure, with a major focus on contributing in the open to ensure that other community members can participate meaningfully in the efforts.

Please elaborate on the personal "Why" which motivates you to be a candidate for Mindshare.

For several years now, my focus in Fedora Project contributions has been on the onboarding and retention of contributors, empowering them with access and support in the subprojects and SIGs they are interested in. As a software engineer by profession, I want to use my technical inclination to bridge the community outreach teams with engineering solutions like custom platforms, triaging workflows, dedicated tooling, etc.

I have watched the flame of multiple endeavours within the community die out because they could not enlist support from fellow members. The lack of awareness of how members can contribute to such efforts hurts, as it leaves the venture very limited (both in scope and in impact), and the Mindshare Committee is in the right position to correct such situations through outreach.

Apart from that, I relate strongly to the Mindshare Committee's "boots on the ground" approach toward supporting various events, maintaining local communities, monitoring community health, and promoting strategic goals. That motivates me to give back to my Fedora Project friends by being instrumental in aligning our communication, getting onboarding right, ensuring sustainable relations, and enabling enthusiastic contributors.

How would you improve Mindshare Committee visibility and awareness in the Fedora community?

As a forward-thinking individual, I want to use my strengths in delegation toward conveying details about the endeavours within the Fedora Project community. Visibility for the Mindshare Committee would not only help fellow contributors stay in the know about what's going on in the community but also help those endeavours enlist deserving support on their way toward successfully fulfilling their objectives.

I plan to continue being a friendly face across various community channels for most (if not all) subprojects and SIGs that we have in the Fedora Project. Surveys and statistics are nice and all, but lived-in experience from interacting regularly with the teams is an incomparable approach toward understanding the community health in the said subprojects and SIGs and the parts that require attention from the Mindshare Committee.

Apart from the awareness within the community, I plan on setting aside time from my week to help handle the social media platforms and cater toward the larger free and open source software populace. I want to be able to play to my strengths of representing the Fedora Project in various events (virtual and in-person alike) to spread the word about the Fedora Project's positive impact while onboarding folks at the same time.

What part of Fedora do you think needs the most attention from the Mindshare Committee during your term?

One of my major gripes about the Mindshare Committee was the fact that it was a reactive team and not a proactive one. Members of the committee were expected to express their thoughts on a certain initiative (not to be confused with a community initiative) taking place - and that, more or less, decided the extent of support that the Mindshare Committee could provide to contributing members of the said initiative.

Moving from a reflexive mindset to an assertive mindset in the Mindshare Committee's interactions is a paradigm shift and would demand a greater deal of engagement from the elected members. Empowering Fedora Project friends to share responsibilities would not only help the situation by preventing potential burnout but also help with the succession and continuity of leadership (remember the flywheel theory?).

With the revamp carried through from the previous year, I not only expect this change for the better, but am willing to roll up my sleeves and establish active participation of members in initiatives: actively reporting what support is needed, catering to it, and possibly bringing those initiatives to the larger community. This aspect strengthens not only the folks involved but also other contributors on the fence.



The post Mindshare Interview: Akashdeep Dhar (t0xic0der) appeared first on Fedora Community Blog.

20 May 2025 6:42pm GMT

Fedora Community Blog: Mindshare Elections: Luis Bazan (lbazan)

Fedora Community Blog's avatar

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Tuesday, 20 May 2025, and closes promptly at 23:59:59 UTC on Monday, 2 June 2025.

Interview with Luis Bazan

Questions

What is your background in Fedora? What have you worked on and what are you doing now?

I am currently working on keeping the community active in the LATAM region by encouraging contributors to stay engaged and continue collaborating with any team. I also help some come out of anonymity and contribute by giving talks, hosting workshops, among other activities at universities, remotely, etc. I am always looking for ways to support the community in any way I can. I will always be available for Fedora, even when I technically can't be.

Please elaborate on the personal "Why" which motivates you to be a candidate for Mindshare.

I currently want to continue being a member of the Mindshare Committee because we are now in the process of reactivating the team, and the work we are doing is starting to take shape. We have new roles, goals to achieve, and new ways to support all contributors across the regions.

How would you improve Mindshare Committee visibility and awareness in the Fedora community?

Supporting my region in particular is already where I need to focus most of my attention, in order to generate new regional and community activities, support countries in various ways, and motivate them.

What part of Fedora do you think needs the most attention from the Mindshare Committee during your term?

I believe that right now the LATAM region needs attention from the committee, from the most basic aspects to the more complex ones. I will always believe that LATAM needs a regional event where people can also participate on a larger scale.



The post Mindshare Elections: Luis Bazan (lbazan) appeared first on Fedora Community Blog.

20 May 2025 6:36pm GMT

feedLXer Linux News

United States Federal Government's Digital Analytics Program (DAP): GNU/Linux Users Represent Close to 6% of Visitors This Year

The first occurrence of GNU/Linux was owing to my older colleague and it happened here in Manchester, not in Finland or in Portland (Oregon). That's just the real history!

20 May 2025 6:34pm GMT

Red Hat Enterprise Linux 10 Released, This Is What’s New

RHEL 10 debuts with built-in AI guidance, post-quantum cryptography, and streamlined OS-container management for modern hybrid infrastructure.

20 May 2025 5:03pm GMT

Big News for Linux on Windows: Windows Subsystem for Linux (WSL) is Now Officially Open Source!

After years of work, Microsoft has made the WSL code publicly available. Yes, Windows Subsystem for Linux is now officially Open Source!

20 May 2025 3:31pm GMT

feedPlanet Debian

Arturo Borrero González: Wikimedia Cloud VPS: IPv6 support

Cape Town (ZA), Sea Point, Nachtansicht

Dietmar Rabich, Cape Town (ZA), Sea Point, Nachtansicht - 2024 - 1867-70 - 2, CC BY-SA 4.0

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Wikimedia Cloud VPS is a service offered by the Wikimedia Foundation, built using OpenStack and managed by the Wikimedia Cloud Services team. It provides cloud computing resources for projects related to the Wikimedia movement, including virtual machines, databases, storage, Kubernetes, and DNS.

A few weeks ago, in April 2025, we were finally able to introduce IPv6 to the cloud virtual network, enhancing the platform's scalability, security, and future-readiness. This is a major milestone, many years in the making, and serves as an excellent point to take a moment to reflect on the road that got us here. There were definitely a number of challenges that needed to be addressed before we could get into IPv6. This post covers the journey to this implementation.

The Wikimedia Foundation was an early adopter of the OpenStack technology, and the original OpenStack deployment in the organization dates back to 2011. At that time, IPv6 support was still nascent and had limited implementation across various OpenStack components. In 2012, the Wikimedia cloud users formally requested IPv6 support.

When Cloud VPS was originally deployed, we had set up the network following some of the upstream-recommended patterns:

In order for us to be able to implement IPv6 in a way that aligned with our architectural goals and operational requirements, pretty much all the elements in this list would need to change. First of all, we needed to migrate from nova-networks into Neutron, a migration effort that started in 2017. Neutron was the more modern component to implement software-defined networks in OpenStack. To facilitate this transition, we made the strategic decision to backport certain functionalities from nova-networks into Neutron, specifically the "dmz_cidr" mechanism and some egress NAT capabilities.

Once in Neutron, we started to think about IPv6. In 2018 there was an initial attempt to decide on the network CIDR allocations that Wikimedia Cloud Services would have. This initiative encountered unforeseen challenges and was subsequently put on hold. We focused on removing the previously backported nova-networks patches from Neutron.

Between 2020 and 2021, we initiated another significant network refresh. We were able to introduce the cloudgw project, as part of a larger effort to rework the Cloud VPS edge network. The new edge routers allowed us to drop all the custom backported patches we had in Neutron from the nova-networks era, unblocking further progress. Worth mentioning that the cloudgw router would use nftables as firewalling and NAT engine.

A pivotal decision in 2022 was to expose the OpenStack APIs to the internet, which crucially enabled infrastructure management via OpenTofu. This was key in the IPv6 rollout as will be explained later. Before this, management was limited to Horizon - the OpenStack graphical interface - or the command-line interface accessible only from internal control servers.

Later, in 2023, following the OpenStack project's announcement of the deprecation of the neutron-linuxbridge-agent, we began to seriously consider migrating to the neutron-openvswitch-agent. This transition would, in turn, simplify the enablement of "tenant networks" - a feature allowing each OpenStack project to define its own isolated network, rather than all virtual machines sharing a single flat network.

Once we replaced neutron-linuxbridge-agent with neutron-openvswitch-agent, we were ready to migrate virtual machines to VXLAN. Demonstrating perseverance, we decided to execute the VXLAN migration in conjunction with the IPv6 rollout.

We prepared and tested several things, including the rework of the edge routing to be based on BGP/OSPF instead of static routing. In 2024 we were ready for the initial attempt to deploy IPv6, which failed for unknown reasons. There was a full network outage and we immediately reverted the changes. This quick rollback was feasible due to our adoption of OpenTofu: deploying IPv6 had been reduced to a single code change within our repository.

We started an investigation, corrected a few issues, and increased our network functional testing coverage before trying again. One of the problems we discovered was that Neutron would enable the "enable_snat" configuration flag for our main router when adding the new external IPv6 address.

Finally, in April 2025, after many years in the making, IPv6 was successfully deployed.

Compared to the network from 2011, we would have:

Over time, the WMCS team has skillfully navigated numerous challenges to ensure our service offerings consistently meet high standards of quality and operational efficiency. Often engaging in multi-year planning strategies, we have enabled ourselves to set and achieve significant milestones.

The successful IPv6 deployment stands as further testament to the team's dedication and hard work over the years. I believe we can confidently say that the 2025 Cloud VPS represents its most advanced and capable iteration to date.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

20 May 2025 1:00pm GMT

feedPlanet Ubuntu

Simon Quigley: Donuts and 5-Star Restaurants

In my home state of Wisconsin, there is an incredibly popular gas station called Kwik Trip. (Not to be confused with Quik Trip.) It is legitimately one of the best gas stations I've ever been to, and I'm a frequent customer.

What makes it that great?

Well, everything about it. The store is clean, the lights work, the staff are always friendly (and encourage you to come back next time), there's usually bakery on sale (just depends on location etc), and the list goes on.

There's even a light-switch in the bathroom at a large number of locations that you can flip if a janitor needs to attend to things. It actually does set off an alarm in the back room.

A dear friend of mine from Wisconsin once told me something along the lines of, "it's inaccurate to call Kwik Trip a gas station, because in all reality, it's a five star restaurant." (M - , I hope you're well.)

In my own opinion, they have an espresso machine. That's what really matters. ;)

I mentioned the discount bakery. In reality, it's a pretty great system. To my limited understanding, bakery that is older than "standard" but younger than "expiry" is set to half price and put towards the front of the store. In my personal experience, the vast majority of the time, the quality is still amazing. In fact, even if it isn't, the people working at Kwik Trip seem to genuinely enjoy their job.

When you're looking at that discount rack of bakery, what do you choose? A personal favorite of mine is the banana nut bread with frosting on top. (To the non-Americans, yes, it does taste like it's homemade, it doesn't taste like something made in a factory.)

Everyone chooses different bakery items. And honestly, there could be different discount items out depending on the time. You take what you can get, but you still have your own preferences. You like a specific type of donut (custard-filled, or maybe jelly-filled). Frosting, sprinkles… there are so many ways to make different bakery items.

It's not only art, it's kind of a science too.

Is there a Kwik Trip that you've called a gas station instead of a five star restaurant? Do you also want to tell people about your gas station? Do you only pick certain bakery items off the discount rack, or maybe ignore it completely? (And yes, there would be good reason to ignore the bakery in favor of the Hot Spot, I'd consider that acceptable in my personal opinion.)

Remember, sometimes you just have to like donuts.

Have a sweet day. :)

20 May 2025 12:57pm GMT

Ubuntu Blog: What is geopatriation?

The world is changing every day. From geopolitical shifts to legislation like GDPR, which requires localized processing, these forces create a complex and uncertain landscape where data storage, processing, and cloud services could potentially come to a sudden halt or suffer heavy disruption overnight. As a result, organizations are increasingly interested in potential routes for shifting cloud services to safer alternatives closer to their country of operation.

Recently, a term has appeared in the cloud services and cloud repatriation circles: geopatriation. But what is geopatriation, and how does it fit into adjusting your cloud services and infrastructure to meet new legal and compliance requirements?

In this article, we'll define geopatriation, learn how it fits into cloud repatriation and recovery, and explore the best approaches for geopatriation. But first, let's consider a vital related concept: cloud repatriation.

What is cloud repatriation?

Before we begin, we'll quickly define an associated term that is easily confused with geopatriation: cloud repatriation.

Cloud repatriation is the process of migrating applications from public clouds back to your own infrastructure. Such infrastructure can either be located on-premises or hosted by a data centre provider. It can be a private cloud, a simple virtualisation environment or even legacy IT infrastructure. The main purpose and marker of cloud repatriation is breaking the dependence on the public cloud provider.

There are many reasons to repatriate your cloud services. One of the most common is that public cloud infrastructure usage can be very expensive, and its cost is only increasing. Another reason is the sensitive or highly regulated nature of certain kinds of data - especially where mounting regulatory compliance restricts how and where that data can be gathered, stored, or processed. After all, not all confidential data should be stored in public clouds. And finally, migrating applications to public clouds might lead to performance degradation in some parts of the world, due to low bandwidth and high latency. In such regions, local cloud infrastructure (either public or private) just performs better. In some cases, private cloud infrastructure is also more resilient than public cloud services, as outages still occur in these services, and you have no direct control over their resolution.

If you want to read more about cloud repatriation, why organizations do it, and what options are available, you can read more in our detailed article on our blog.

Cloud repatriation can be a tricky term to pin down, as its meaning shifts depending on the context of its use. For example, in infrastructure as a service (IaaS) and platform as a service (PaaS), cloud repatriation refers to different processes. For this reason, many people presume that "cloud repatriation" means "cloud migration reversal", meaning the reversal of a migration of workloads from data centres to cloud IaaS.

There are three general situations where you would perform cloud repatriation.

  1. Undoing a full-scale cloud migration
  2. Replacing existing cloud solutions with an in-house IT solution
  3. Recovering from errors or other minor issues in your databases, personnel, or hosting

What is geopatriation?

Geopatriation is a related concept, but a little different from cloud repatriation. The term was coined by Gartner® earlier this year in its 2025 research How to Protect Geopolitically Risky Cloud Workloads, in which "Gartner defines geopatriation as the relocation of workloads and applications from global cloud hyperscalers to regional or national alternatives due to geopolitical uncertainty."1 Geopatriation refers broadly to repatriation efforts that result from specific geographic or territorial requirements, limitations, or risks affecting cloud infrastructure and data storage, processing, or other services. Similarly to sovereign clouds, geopatriation seeks to control and own cloud infrastructure located in a specific territory under a clear legal jurisdiction.

Learn more about sovereign cloud infrastructure

Geopatriation is one of the many strategies that organizations can pursue to protect their cloud workloads, and is a form of cloud repatriation.

Generally speaking, there are five options for protecting cloud workloads that face geopolitical risks or related disruption:

  1. Reinforcement: you continue services with the hyperscaler but reinforce your cloud environment with further failsafes (for example, localized storage and processing, or additional security features like firewalls).
  2. Redeployment: you continue services with the hyperscaler but redeploy your most at-risk workloads to a different cloud setup (i.e. one that falls within new requirements due to regulation or sanctions).
  3. Removal: you remove your at-risk workloads from the hyperscaler and redeploy everything to a local cloud provider.
  4. Repatriation: you move all of your workloads to an on-premises solution.
  5. Accept the risks of disruption and make no changes.

The "removal" and "repatriation" options are both forms of geopatriation - moving your cloud workloads to your local vicinity or country.

It's important to note that geopatriation is related to cloud repatriation, but they have different meanings. Cloud repatriation refers more broadly to the movement of cloud services in general from public to private infrastructure, while geopatriation is a distinct, geographically driven form of cloud repatriation.

Why is geopatriation a growing topic of interest?

Whether because of conflict, changes in international trade rules, or increasing political tensions, the world is becoming more uncertain. This geopolitical uncertainty raises a critical question for organizations delivering - or using - cloud services: how do you guarantee services will remain uninterrupted, when they depend on infrastructure or companies that are spread across the geopolitical landscape?

Here are a few example cases of why geopatriation is a topic of growing interest:

In short, geopatriation is a topic of growing interest because providers and users of cloud services are concerned that geopolitical events put their hyperscale public-cloud-integrated IaaS and PaaS services at risk.

How do you perform geopatriation?

As mentioned above, geopatriation can be performed by removing or repatriating cloud resources.

In both cases, you would require private cloud, on-premises cloud, or bare-metal infrastructure to take over your cloud workloads. Generally speaking, you would need to explore various bare-metal infrastructure options, compare the costs of private cloud setups against localized cloud hosting services, and assess the functionality and scalability of your options.

If you're exploring your options, we recommend you visit our dedicated cloud infrastructure webpage, which demonstrates how our wide range of open source infrastructure solutions and enterprise services can be used to build powerful, reliable, and entirely independent cloud services.

Learn more about Canonical's Infrastructure solutions.

Works cited

  1. Gartner, Quick Answer: Protecting Geopolitically Risky Cloud Workloads, Lydia Leong, Alessandro Galimberti, 21 March 2025

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Further reading

How to build a sovereign cloud with Canonical

[Case study] Learn how Phoenix Systems created a hyper-secure OpenStack cloud with a focus on data sovereignty and data protection

What is a sovereign cloud?

[Case study] OneUptime takes back its servers and saves $352,500 a year with Canonical infrastructure solutions

20 May 2025 8:11am GMT

19 May 2025

feedPlanet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 892

Welcome to the Ubuntu Weekly Newsletter, Issue 892 for the week of May 11 - 17, 2025. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

19 May 2025 10:13pm GMT

feedPlanet KDE | English

StarFive VisionFive v2 and FreeBSD

This week I powered up the StarFive VisionFive v2 board that I have. I figured I would give FreeBSD another whirl on it, in the vague hope that RISC-V boards are a more cohesive family than ARM boards were five years ago. tl;dr: I didn't get it to work as well as I want it to. Here are some notes.

I mentioned this board when it arrived and documented the serial pinout as well, but it has been languishing while I had other things to do.

F(reeBSD) Around And Find Out

This is what I did. The board is listed as partially supported on the FreeBSD RISC-V Wiki so I'm not entirely surprised it craps out. I'll update the wiki if I get any further than this.

The board starts and spits out things over serial, like all SBCs seem to do.

Platform Name : StarFive VisionFive V2
Platform Features : medeleg
Platform HART Count : 5

Note the HART (core) count of 5. That's relevant later, because this is nominally a quad-core CPU. After a little bit of SBI, we get to a U-Boot layer of the boot process, which tells me this:

U-Boot 2021.10 (Feb 12 2023 - 18:15:33 +0800), Build: jenkins-VF2_515_Branch_SDK_Release-24
CPU: rv64imacu
Model: StarFive VisionFive V2
DRAM: 8 GiB

That is still consistent with what I think is on my desk. The FreeBSD kernel loads! And then the usual message Hit [Enter] to boot immediately appears. If I go on to boot normally, it invariably fails like this:

sbi_trap_error: hart0: trap handler failed (error -2)
sbi_trap_error: hart0: mcause=0x0000000000000005 mtval=0x0000000040048060

It is remarkably unhelpful to search for this, since the error message is both all over the place, and rarely fully explained or diagnosed. I don't have a good explanation either, but here is the gist: that fifth core, hart 0, is a different kind of CPU, and is mislabeled in the FDT that is still being shipped. FreeBSD then tries to set up the CPU in the wrong way, and it dies. The issue is quite descriptive, after you read it like six times to figure out what it actually means. Anyway, instead of hitting [Enter], I press some other key, and then use the loader prompt:

OK fdt prop /cpus/cpu@0/status disabled
Using DTB provided by EFI at 0x47ef2000.
OK boot

This subsequently craps out with:

starfive_dwmmc0: <Synopsys DesignWare Mobile Storage Host Controller (StarFive)> mem 0x16020000-0x1602ffff on simplebus0
starfive_dwmmc0: No bus speed provided
starfive_dwmmc0: Can't get FDT property.
device_attach: starfive_dwmmc0 attach returned 6

Followed by:

Mounting from ufs:/dev/ufs/rootfs failed with error 19.

Not even close to workable. The board itself is fine: there is a Debian image for it which boots on through and does useful things, but that just isn't what I want to run on this board.

It Helps To Read The Documentation

There's a long post - someone who wanted to run FreeBSD, hit snags, then tried OpenBSD instead - over here in a GitHub gist that describes most of the process that I went though. And there is a post on the FreeBSD Forum about progress.

So I looked at both, and then went over the instructions more carefully.

The scare quotes around "the DTB file" are because there are many DTB files floating around for this, and lots of links to an email message attachment. I downloaded it and this one works for me, so now I have archived it locally under a slightly different name.

Why the different name? Well, investigation at the SBI prompt with env print -a showed a variable fdtfile=starfive/starfive_visionfive2.dtb. I moved the s5v5.dtb file to that location in the EFI partition, and now I don't need to interrupt SBI because it loads the right DTB file directly.

The lack of eMMC (the controller seems to be found, but the 16MB eMMC module isn't) and NVMe (there's an M2 slot, and I have a WD stick in there) means that storage is rather constrained, still, and there's nothing I would trust a write-heavy load to.

What Would OpenBSD Do?

Going through the same steps with OpenBSD (which suffers from the same kind of "there's a gazillion ways to put together an SD card for this board", and not one is canonical or step-by-step) is also successful. More successful, even, because all the storage options are found:

nvme0: WDC WDS240G2G0C-00AJM0, firmware 231050WD, serial 22465R472602
scsibus0 at nvme0: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <NVMe, WDC WDS240G2G0C-, 2310>
sd0: 228936MB, 512 bytes/sector, 468862128 sectors
gpiorestart0 at mainbus0
"clk_ext_camera" at mainbus0 not configured
scsibus1 at sdmmc0: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <Samsung, AJTD4R, 0000> removable
sd1: 14910MB, 512 bytes/sector, 30535680 sectors
scsibus2 at sdmmc1: 2 targets, initiator 0
sd2 at scsibus2 targ 1 lun 0: <Sandisk, SC32G, 0080> removable
sd2: 30436MB, 512 bytes/sector, 62333952 sectors

That is some serious storage for a tiny board like this.

Takeaways

It helps to read the documentation carefully. I need to update the FreeBSD wiki. The board is usable, but needs additional storage options to be a nice kind of machine for router-and-storage or NAS work.

19 May 2025 10:00pm GMT

Kirigami Addons 1.8.0

Kirigami Addons is a collection of supplementary components for Kirigami applications. Version 1.8.0 is a relatively minor release, introducing two new form delegates along with various quality-of-life enhancements.

New Features

I added two new form delegates: FormLinkDelegate (!343) and FormIconDelegate (!355).

The first one is similar to FormButtonDelegate, but it's used to display an external link. It's already used on the About page.

The second one was upstreamed from Marknote and allows the user to pick an icon and display the selected icon.

I also added a password quality checker to FormPasswordFieldDelegate (!345). This is particularly useful when asking users to create an account.

Visual Changes

Kai Uwe Broulik improved avatar rendering. Initials are now always displayed consistently, even on small screens (!363).

Kai also fixed an issue on mobile where library information on the About page was being ellipsized (!356).

Balló György fixed several issues when using Kirigami with the QtQuick software rendering backend (!350, !351).

The delegates provided by Kirigami Addons now have a slightly larger touch area on mobile (!349). Unfortunately, I also had to remove the small hover animations, as they occasionally caused visual glitches (1d6e84cd).

Convenient New APIs

Joshua Goins added an opened property to ConvergentContextMenu (!352), and I added a close method to allow closing the menu programmatically (!364).

I also added support for trailing items in FormTextFieldDelegate (f996fc6e).

Documentation

Thiago Sueto ported the entire library to QDoc (!354). QDoc provides much better support for QML.

Other Changes

"trapped-in-dreams" significantly improved the performance of the date picker (!360).

Volker Krause updated the project templates to reflect current best practices for Android support (!359).

Packager Section

1.8.0 had an issue on systems without QDoc, but a bug-fix release, 1.8.1, is available with the fix.

You can find the package on download.kde.org and it has been signed with my GPG key.

19 May 2025 9:30pm GMT

feedPlanet Debian

Melissa Wen: A Look at the Latest Linux KMS Color API Developments on AMD and Intel

This week, I reviewed the latest available version of the Linux KMS Color API. Specifically, I explored the API proposed by Harry Wentland and Alex Hung (AMD) and their implementation for the AMD display driver, and I tracked the parallel efforts of Uma Shankar and Chaitanya Kumar Borah (Intel) in bringing this plane color management to life. With this API in place, compositors will be able to provide better HDR support and advanced color management for Linux users.

To get a hands-on feel for the API's potential, I developed a fork of drm_info compatible with the new color properties. This allowed me to visualize the display hardware color management capabilities being exposed. If you're curious and want to peek behind the curtain, you can find my exploratory work on the drm_info/kms_color branch. The README there will guide you through the simple compilation and installation process.

Note: You will need to update libdrm to match the proposed API. You can find an updated version in my personal repository here. To avoid potential conflicts with your official libdrm installation, you can compile and install it in a local directory. Then, use the following command: export LD_LIBRARY_PATH="/usr/local/lib/"

In this post, I invite you to familiarize yourself with the new API that is about to be released. You can start by doing as I did below: just deploy a custom kernel with the necessary patches and visualize the interface with the help of drm_info. Or, better yet, if you are a userspace developer, you can start developing use cases by experimenting with it.

The more eyes the better.

KMS Color API on AMD

The great news is that AMD's driver implementation for plane color operations is being developed right alongside their Linux KMS Color API proposal, so it's easy to apply to your kernel branch and check it out. You can find details of their progress in AMD's series.

I just needed to compile a custom kernel with this series applied, intentionally leaving out the AMD_PRIVATE_COLOR flag. The AMD_PRIVATE_COLOR flag guards driver-specific color plane properties, which experimentally expose hardware capabilities while the generic KMS plane color management interface is not yet available.

If you don't know or don't remember the details of AMD driver specific color properties, you can learn more about this work in my blog posts [1] [2] [3]. As driver-specific color properties and KMS colorops are redundant, the driver only advertises one of them, as you can see in AMD workaround patch 24.

So, with the custom kernel image ready, I installed it on a system powered by AMD DCN3 hardware (i.e. my Steam Deck). Using my custom drm_info, I could clearly see the Plane Color Pipeline with eight color operations as below:

└───"COLOR_PIPELINE" (atomic): enum {Bypass, Color Pipeline 258} = Bypass
    ├───Bypass
    └───Color Pipeline 258
        ├───Color Operation 258
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"CURVE_1D_TYPE" (atomic): enum {sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF} = sRGB EOTF
        ├───Color Operation 263
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = Multiplier
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"MULTIPLIER" (atomic): range [0, UINT64_MAX] = 0
        ├───Color Operation 268
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 3x4 Matrix
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"DATA" (atomic): blob = 0
        ├───Color Operation 273
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"CURVE_1D_TYPE" (atomic): enum {sRGB Inverse EOTF, PQ 125 Inverse EOTF, BT.2020 OETF} = sRGB Inverse EOTF
        ├───Color Operation 278
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D LUT
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   ├───"SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
        │   ├───"LUT1D_INTERPOLATION" (immutable): enum {Linear} = Linear
        │   └───"DATA" (atomic): blob = 0
        ├───Color Operation 285
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 3D LUT
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   ├───"SIZE" (atomic, immutable): range [0, UINT32_MAX] = 17
        │   ├───"LUT3D_INTERPOLATION" (immutable): enum {Tetrahedral} = Tetrahedral
        │   └───"DATA" (atomic): blob = 0
        ├───Color Operation 292
        │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
        │   ├───"BYPASS" (atomic): range [0, 1] = 1
        │   └───"CURVE_1D_TYPE" (atomic): enum {sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF} = sRGB EOTF
        └───Color Operation 297
            ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D LUT
            ├───"BYPASS" (atomic): range [0, 1] = 1
            ├───"SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
            ├───"LUT1D_INTERPOLATION" (immutable): enum {Linear} = Linear
            └───"DATA" (atomic): blob = 0

Note that Gamescope is currently using AMD driver-specific color properties implemented by me, Autumn Ashton and Harry Wentland. It doesn't use this KMS Color API, and therefore COLOR_PIPELINE is set to Bypass. Once the API is accepted upstream, all users of the driver-specific API (including Gamescope) should switch to the KMS generic API, as this will be the official plane color management interface of the Linux kernel.
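To get a feel for how userspace would drive this pipeline once the API lands, here is a minimal, hypothetical sketch of selecting one of the advertised pipelines through the libdrm atomic API. The plane ID and the pipeline enum value (258 in the dump above) are placeholders you would discover at runtime, and the "COLOR_PIPELINE" property name follows the proposed v9 interface, which is still under review, so treat this as an illustration of the proposal rather than a finished client:

/*
 * Hypothetical sketch: selecting a plane's COLOR_PIPELINE with the
 * libdrm atomic API. "COLOR_PIPELINE" follows the proposed v9 KMS
 * Color API (still under review); PLANE_ID and PIPELINE_VALUE are
 * placeholders you would discover at runtime, as drm_info does.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define PLANE_ID       31   /* placeholder: a real plane object id */
#define PIPELINE_VALUE 258  /* placeholder: "Color Pipeline 258"   */

/* Look up a property id by name on a plane object. */
static uint32_t find_prop(int fd, uint32_t plane_id, const char *name)
{
    uint32_t id = 0;
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
    for (uint32_t i = 0; props && i < props->count_props; i++) {
        drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
        if (p && strcmp(p->name, name) == 0)
            id = p->prop_id;
        drmModeFreeProperty(p);
    }
    drmModeFreeProperties(props);
    return id;
}

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);

    uint32_t prop = find_prop(fd, PLANE_ID, "COLOR_PIPELINE");

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, PLANE_ID, prop, PIPELINE_VALUE);

    /* TEST_ONLY validates the pipeline without touching the screen. */
    int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
    printf("test commit: %s\n", ret == 0 ? "ok" : "rejected");

    drmModeAtomicFree(req);
    close(fd);
    return ret ? 1 : 0;
}

Using DRM_MODE_ATOMIC_TEST_ONLY first is the usual atomic-modesetting pattern: the kernel checks whether the hardware can satisfy the requested pipeline without any visible change, and the client only commits for real once the test passes.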

KMS Color API on Intel

On the Intel side, the driver implementation available upstream was built upon an earlier iteration of the API. This meant I had to apply a few tweaks to bring it in line with the latest specifications. You can explore their latest work here. For simpler handling, check out my dedicated branch, which combines the V9 of the Linux Color API, Intel's contributions, and my necessary adjustments.

I then compiled a kernel from this integrated branch and deployed it on a system featuring Intel TigerLake GT2 graphics. Running my custom drm_info revealed a Plane Color Pipeline with three color operations as follows:

├───"COLOR_PIPELINE" (atomic): enum {Bypass, Color Pipeline 480} = Bypass
│   ├───Bypass
│   └───Color Pipeline 480
│       ├───Color Operation 480
│       │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 1D LUT Mult Seg
│       │   ├───"BYPASS" (atomic): range [0, 1] = 1
│       │   ├───"HW_CAPS" (atomic, immutable): blob = 484
│       │   └───"DATA" (atomic): blob = 0
│       ├───Color Operation 487
│       │   ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 3x3 Matrix
│       │   ├───"BYPASS" (atomic): range [0, 1] = 1
│       │   └───"DATA" (atomic): blob = 0
│       └───Color Operation 492
│           ├───"TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 1D LUT Mult Seg
│           ├───"BYPASS" (atomic): range [0, 1] = 1
│           ├───"HW_CAPS" (atomic, immutable): blob = 496
│           └───"DATA" (atomic): blob = 0

Observe that Intel's approach introduces additional properties like "HW_CAPS" at the color operation level, along with two new color operation types: 1D LUT with Multiple Segments and 3x3 Matrix. It's important to remember that this implementation is based on an earlier stage of the KMS Color API and is awaiting review.

A Shout-Out to Those Who Made This Happen

I'm impressed by the solid implementation and clear direction of the V9 of the KMS Color API. It aligns with the many insightful discussions we've had over the past years. A huge thank you to Harry Wentland and Alex Hung for their dedication in bringing this to fruition!

Beyond their efforts, I deeply appreciate Uma and Chaitanya's commitment to updating Intel's driver implementation to align with the freshest version of the KMS Color API. The collaborative spirit of the AMD and Intel developers in sharing their color pipeline work upstream is invaluable. We're now gaining a much clearer picture of the color capabilities embedded in modern display hardware, all thanks to their hard work, comprehensive documentation, and engaging discussions.

Finally, thanks to all the userspace developers, color science experts, and kernel developers from various vendors who actively participate in the upstream discussions, meetings, workshops, each iteration of this API, and the crucial code review process. I'm happy to be part of the final stages of this long kernel journey, but I know that when it comes to colors, one step is completed only for new challenges to be unlocked.

Looking forward to meeting you at this year's Linux Display Next hackfest, organized by AMD in Toronto, to further discuss HDR, advanced color management, and other display trends.

19 May 2025 9:05pm GMT

feedPlanet GNOME

Jussi Pakkanen: Optimizing page splits in books

Earlier in this blog we looked at how to generate a justified block of text, which is nowadays usually done with the Knuth-Plass algorithm. Sadly this is not, by itself, enough to create a finished product. Processing all input text with the algorithm produces one very long column of text. This works fine if you end result is a scroll (of the papyrus variety), but people tend to prefer their text it book format. Thus you need a way to split that text column into pages.

The simplest solution is to decide that a page has some N number of lines and page breaks happen at exact these places. This works (and is commonly used) but it has certain drawbacks. From a typographical point of view, there are at least three things that should be avoided:

  1. Orphan lines, a paragraph at the bottom of a page that has only one line of text.
  2. Widow lines, a paragraph that starts a new page and has only one line of text.
  3. Spread imbalance, where the two pages on a spread have a different number of lines on them (when both are "full pages")

Determining "globally optimal" page splits for a given text is not a simple problem, because every pagination choice affects every pagination choice that comes after it. If you stare at the problem long and hard enough, eventually you realize something.

Splitting a paragraph into lines and splitting a column of lines into pages are exactly the same problem, and they can be solved in the same way. I have implemented this in the Chapterizer book generator program. The dynamic programming algorithm for both is so similar that they could, in theory, use shared code. I chose not to do it because sometimes it is just so much simpler to not be DRY. They are so similar, in fact, that a search space optimization that I had to do in the paragraph justification case was also needed for pagination, even though the data set size is typically much smaller in pagination. The optimization implementation turned out to be exactly the same for both cases.
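To make the analogy concrete, here is a minimal dynamic-programming sketch of the pagination side. It is not Chapterizer's actual code: the quadratic stretch badness, the flat widow/orphan penalties, and the omitted spread-imbalance term are all simplified assumptions. The key idea is that best[i], the cheapest pagination of the first i lines, depends only on cheaper prefixes, so the global optimum falls out of a simple recurrence:

/*
 * Minimal sketch (not Chapterizer's actual code) of globally optimal
 * page breaking with dynamic programming. Badness weights and the
 * widow/orphan tests are simplified assumptions; the real program
 * also penalizes spread imbalance.
 */
#include <limits.h>
#include <stdio.h>

#define NLINES 200 /* lines in the justified text column (toy input) */
#define TARGET 38  /* preferred lines per page */
#define SLACK  2   /* a page may hold TARGET +/- SLACK lines */

static int ends_par[NLINES]; /* ends_par[i] != 0: line i ends a paragraph */

/* Badness of one page holding the half-open line range [first, last). */
static long page_cost(int first, int last)
{
    int n = last - first;
    int is_final = (last == NLINES);
    if (n > TARGET + SLACK || (!is_final && n < TARGET - SLACK))
        return -1; /* infeasible; only the final page may run short */
    long cost = (long)(TARGET - n) * (TARGET - n) * 100;
    /* Orphan: a paragraph opens on the page's last line and continues. */
    if (!is_final && last >= 2 && ends_par[last - 2] && !ends_par[last - 1])
        cost += 3000;
    /* Widow: the next page would open with a paragraph's final line. */
    if (!is_final && ends_par[last] && !ends_par[last - 1])
        cost += 3000;
    return cost;
}

int main(void)
{
    static long best[NLINES + 1]; /* best[i]: cheapest split of lines 0..i */
    static int prev[NLINES + 1];  /* prev[i]: start of the page ending at i */

    for (int i = 8; i < NLINES; i += 9) /* toy input: 9-line paragraphs */
        ends_par[i] = 1;

    best[0] = 0;
    for (int i = 1; i <= NLINES; i++) {
        best[i] = LONG_MAX;
        for (int j = 0; j < i; j++) { /* try every possible page start j */
            long c = page_cost(j, i);
            if (c >= 0 && best[j] != LONG_MAX && best[j] + c < best[i]) {
                best[i] = best[j] + c;
                prev[i] = j;
            }
        }
    }
    for (int i = NLINES; i > 0; i = prev[i]) /* recover breaks, last first */
        printf("page spans lines %d..%d\n", prev[i], i - 1);
    return 0;
}

Replace "page of lines" with "line of words" and the line count with accumulated word widths, and the same recurrence becomes the Knuth-Plass style paragraph justification mentioned above, which is why the two implementations ended up so similar.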

The program now creates optimal page splits and in addition prints a text log of all pages with widows, orphans and imbalances it could not optimize away.

Has this been done before?

Probably. Everything has already been invented multiple times. I don't know for sure, so instead I'm going to post my potentially incorrect answer here on the Internet for other people to fact check.

I am not aware of any book generation program doing global page optimization. LibreOffice and Word definitely do not do it. As far as I know LaTeX processes pages one by one and once a page is generated it never returns to it, probably because computers in the early 80s did not have enough memory to hold the entire document in RAM at the same time. I have never used Indesign so I don't know what it does.

Books created earlier than about the mid eighties were typeset by hand and they seem to try to avoid orphans and widows at the cost of some page spreads having more lines than other spreads. Modern books seem to optimize for filling the page fully rather than avoiding widows and orphans. Since most books nowadays are created with Indesign (I think), it would seem to imply that Indesign does not do optimal page splitting or at least it is not enabled by default.

When implementing the page splitter the reasons for not doing global page splitting became clear. Computing both paragraph and page optimization is too slow for a "real time" GUI app. This sort of an optimization really only makes sense for a classical "LaTeX-style" book where there is one main text flow. Layout programs like Scribus and Indesign support richer magazine style layouts. This sort of a page splitter does not work in the general case, so in order to use it they would need to have a separate simpler document format.

But perhaps the biggest issue is that if different pages may have different number of lines on each page, it leads to problems in the page's visual appearance. Anything written at the bottom of the page (like page numbers) need to be re-placed based on how many lines of text actually ended up on the page. Doing this in code in a batch system is easy, doing the same with GUI widgets in "real time" is hard.

19 May 2025 8:10pm GMT

feedOMG! Ubuntu

Microsoft Open-Sources Windows Subsystem for Linux

Well here's a turn up: Microsoft just released the source code for Windows Subsystem for Linux (WSL), making the nifty tech open-source nearly a decade after development began. The tech giant announced the news at this year's BUILD event, where it made some other open-source related announcements, including its own CLI text editor called Edit. Source code for WSL was quickly made available on the Microsoft GitHub. For those not familiar with it, WSL is a specialised virtualisation setup that lets Windows users run Linux distributions (like Ubuntu) inside of Windows, with tight system, software and hardware integration. Microsoft says […]

You're reading Microsoft Open-Sources Windows Subsystem for Linux, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

19 May 2025 7:49pm GMT

feedLinuxiac

A New Era: Microsoft Open Sources WSL

After years of anticipation, the Windows Subsystem for Linux is now fully open source - developers can build, enhance, and contribute to WSL starting today.

19 May 2025 6:46pm GMT

feedPlanet KDE | English

Send your talks to Akademy 2025! (Now really for real)

We have moved the deadline for talk submission for Akademy 2025 to the end of the month. Submit your talks now!

https://mail.kde.org/pipermail/kde-community/2025q2/008217.html

https://akademy.kde.org/2025/cfp/

19 May 2025 4:19pm GMT

feedLinuxiac

Niri 25.05 Wayland Compositor Introduces Workspace Overview

Niri 25.05 scrollable-tiling Wayland compositor introduces a powerful Overview mode, enabling intuitive workspace and window navigation.

19 May 2025 3:08pm GMT

feedLinux Today

Final Bookworm-Based Raspberry Pi OS Released Ahead of Debian Trixie

Discover the final release of the Bookworm-based Raspberry Pi OS, launched just before Debian Trixie. Explore new features and enhancements today!

The post Final Bookworm-Based Raspberry Pi OS Released Ahead of Debian Trixie appeared first on Linux Today.

19 May 2025 2:40pm GMT

Ubuntu 25.10 Will Default to Rust-Powered sudo-rs

Learn about Ubuntu 25.10's shift to Rust-powered sudo-rs, improving system security and efficiency. Stay updated on this exciting development!

The post Ubuntu 25.10 Will Default to Rust-Powered sudo-rs appeared first on Linux Today.

19 May 2025 2:37pm GMT

Slimbook Launches Kymera Black Linux Desktop Computer for Gamers and Creators

Discover the Kymera Black Linux Desktop by Slimbook, designed for gamers and creators. Experience powerful performance and sleek design for your projects.

The post Slimbook Launches Kymera Black Linux Desktop Computer for Gamers and Creators appeared first on Linux Today.

19 May 2025 2:34pm GMT

feedOMG! Ubuntu

Vivaldi 7.4 Update Adds New Keyboard Shortcut Controls

A new version of the Vivaldi web browser is available to download, carrying changes said to make our collective "everyday browsing smoother, faster, and just a little more delightful." How does Vivaldi 7.4 make browsing the increasingly gamified, algorithmically manipulative and AI-slopified modern web more 'delightful'? Shortcuts. More specifically, Vivaldi 7.4 gives you the ability to "fine-tune" how shortcuts behave on a per-site basis. If you want a website's shortcuts to take priority over Vivaldi's, you can. "It's about putting you in control, making sure your shortcuts work where and when you need them most", says Jon von Tetzchner, […]

You're reading Vivaldi 7.4 Update Adds New Keyboard Shortcut Controls, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

19 May 2025 12:49pm GMT

feedLinuxiac

Authelia Authentication Server Achieves OpenID Certified Status

Authelia open-source authentication and authorization server passes OpenID Connect certification, confirming full conformance with implemented profiles.

19 May 2025 11:07am GMT

feedPlanet GNOME

Cassidy James Blaede: Elect the next GNOME Foundation Board of Directors for 2025!

It's everyone's favorite time of year, election season! …Okay, maybe not the most exciting thing - but an extremely important one nonetheless.

For anyone who doesn't know, GNOME is comprised of many parts: individual contributors and maintainers, ad-hoc teams of volunteers, a bunch of open source software in the form of apps and libraries, a whole bunch of infrastructure, and - importantly - a nonprofit foundation. The GNOME Foundation exists to help manage and support the organizational side of GNOME, act as the official face of the project to third parties, and delegate authority when/where it makes the most sense. The GNOME Foundation itself is governed by its elected Board of Directors.

If you contribute to GNOME, you're eligible to become a member of the GNOME Foundation, which gets you some perks (like an @gnome.org email address and Matrix account, blog hosting and syndication, and access to Nextcloud and video conferencing tools) - but most importantly, GNOME Foundation members vote to elect the Board of Directors. If you contribute to GNOME, I highly recommend you become a member: it looks good for you, but it also helps ensure the GNOME Foundation is directly influenced and governed by contributors themselves.

I'm Running for the Board!

I realized today I never actually announced this on my blog (just via social media), but this past March I was appointed to the GNOME Foundation Board of Directors to fill a vacancy.

However, the seat I filled was up for re-election in this very next cycle, so I'm happy to officially announce: I'm running for the GNOME Foundation Board of Directors! As part of announcing my candidacy, I was asked to share why I would like to serve on the board. I posted this on the GNOME Discourse, but for convenience, I've copied it below:

Hey everyone,

I'm Cassidy (cassidyjames pretty much everywhere)! I have been involved in GNOME design since 2015, and was a contributor to the wider FreeDesktop ecosystem before that via elementary OS since around 2010. I am employed by Endless, where I am the community architect/experience lead.

I am particularly proud of my work in early design, communication, and advocacy around both the FreeDesktop color scheme (i.e. dark style) and accent color preferences, both of which are now widely supported across FreeDesktop OSes and the app ecosystem. At elementary I coordinated volunteer projects, led the user experience design, launched and managed OEM partnerships, and largely maintained our communication by writing and editing regular update announcements and other blog posts. Over the past year I helped organize GUADEC 2024 in Denver, and continue to contribute to the GNOME design team and Flathub documentation and curation.

I was appointed to the GNOME Foundation board in March to fill a vacancy, and I am excited to earn your vote to continue my work on the board. If elected, I will continue my focus on:

  • Clearer and more frequent communication from the GNOME Foundation, including by helping write and edit blog posts and announcements

  • Exploring and supporting fundraising opportunities including with users, OEMs, and downstream projects

  • Ensuring Flathub continues to be recognized as the premier Linux app store, especially as it moves to enable financially supporting the developers of FOSS apps

  • More widely communicating the impact, influence, and importance of GNOME and Flathub to raise awareness beyond the existing contributor community

  • Helping ensure that the Foundation reflects the interests of the contributor community

I feel like the GNOME Foundation is at an important transformation point, and I look forward to helping steer things in the right direction for an effective, sustainable organization in support of the GNOME community. Regardless of whether I am elected, I will continue to contribute to design and communication as much as I'm able.

Thank you for your consideration!

Become a Member, and Vote!

Voting will be open for two weeks beginning June 5, 2025. If you contribute to GNOME, now is a great time to ensure you're a member so you can vote in time; check the GNOME Discourse announcement for all of the specific dates and details. And don't forget to actually vote once it begins. :)

19 May 2025 12:00am GMT

18 May 2025

feedPlanet GNOME

Sam Thursfield: Book club, 2025 edition

It's strange being alive while so much bad shit is going on in the world, right? With our big brains that invented smartphones, quantum computers and Haskell, surely we could figure out how to stop Benjamin Netanyahu from starving hundreds of thousands of children? (Or, causing "high levels of acute food insecurity" as the UN refer to it).

Nothing in the world is simple, though, is it.

Back in 1914, when European leaders kicked off the First World War, the collective imagination of war dated back to an era where soldiers wore colourful jackets and the most sophisticated weapon was a gun with a knife attached. The reality of WWI was machine guns, tanks and poison gas. All that new technology took people by surprise, and made for one of the deadliest wars in history.

If you're reading this, then however old or young you are, your life has been marked by rapid technological changes. Things are still changing rapidly. And therein lies the problem.

In amongst the bad news I am seeing some reasons to be optimistic. The best defense against exploitation is education. As a society it feels like we're starting to get a grip on why living standards for everyone except the rich are nosediving.

Let's go back to an older technology which changed the world centuries ago: books. I am going to recommend a few books.

Technofeudalism (by Yanis Varoufakis)

Cover of Technofeudalism

The book's preface outlines the theory: capital has mutated into a much more powerful and dangerous form of itself. Two things caused it: the "privatization of the internet", and the manner in which Western governments and central banks responded to the financial crisis of 2008. The strongest part of the book is the detailed telling of this story, from the beginnings of capitalism and its metamorphoses during the 20th century, to the panicked central banks keeping interest rates near zero for over a decade, effectively printing money and giving it to the wealthy, who in turn decided it was best to hang onto all of it. Out of this he declares capitalism itself is dead, replaced by a more powerful force: technofeudalism.

Yanis' concept of technofeudalism is this:

Markets, the medium of capitalism, have been replaced by digital trading platforms which look like, but are not, markets and are better understood as fiefdoms. And profit, the engine of capitalism, has been replaced with its feudal predecessor: rent. Specifically, it is a form of rent that must be paid for access to those platforms and to the cloud more broadly.

Many people depend on cloud platforms for basic needs. For example, access to work. Think about how many people earn a living through apps: Uber drivers, food delivery couriers, freelancers advertising via Google or Meta, and so on. But it's not just individuals. Many capitalist businesses now rely on sites like Amazon.com for most of their sales. Everyone has to pay cloud rent to the overlord.

Yanis likens this to a colonial town where all the shops are owned by the same guy, who happens to be named Jeff. This isn't your traditional monopoly, though - because cloud platforms also "personalize" your experience of the site. You get recommendations perfectly tailored to your needs. For consumers, the platform owners control what you see. For participants, they control who is paying attention. This he calls cloud capital.

The concept of cloud capital needs better definition in the book, but I think the attention economy is the most interesting aspect, and it is what leads to the other counterintuitive side effect: many people creating value for cloud platforms do it for little or no money. Apple doesn't pay you to make app store apps. TikTok doesn't pay you to upload videos. The book claims that capitalist companies like General Motors pay about 80% of their income to workers as salary payments, while Big Tech companies tend to spend less than 1% of their income paying workers.

In my last status update I mentioned some disillusionment with open source projects in the age of AI. Here's another perspective: contributing to some open source projects now feels like giving free labour to cloud platform owners.

The Free Software movement dates from the 1980s, when operating systems were a source of power. Microsoft created an illegal monopoly on operating systems in the 90s and became the richest and most powerful company in the world; but today, operating systems are a commodity, and Microsoft makes more money from its cloud platform Azure.

It's great that we maintain ethical, open computing platforms like GNOME, but the power struggle has moved on. I don't expect to see huge innovations in desktop or mobile operating systems in the next decade.

Meanwhile, maintaining ethical cloud platforms is still a minority pursuit. Writing software doesn't feel like the difficult part, here. The work needed if we have the will to climb out of this technofeudal hole is community organization and communication. The most valuable thing the major cloud platforms have is our attention. (And perhaps the most valuable thing we have in the open source world is our existing communities and events, such as the Linux App Summit).

Why does this book give me hope? It gives a structure to the last 18 years of fucked up goings on in the world of power and politics. And it helps us analyze exactly what makes the big tech companies of the USA and China so powerful.

If the cloudalists got to you already and you don't have the attention span to buy and read a book any more, don't worry! There's also a video.

The Trading Game (by Gary Stevenson)

Cover of The Trading Game

I'm late to the party with this one. Gary started a blog ten years ago (wealtheconomics.org), and now runs an online video channel (GarysEconomics).

He knows a lot about money and the super-rich. He knows that people are addicted to accumulating wealth and power. He knows that living standards for working people are getting worse. He knows that politicians won't admit that the two things are linked. And he has over a million subscribers to his channel who know that too.

Why does it give me hope? First, he's focused on helping us understand the problem. He does have a clickbait solution - "Tax wealth, not work" - but he also acknowledges that it's slow, difficult and messy to affect national politics. He's realistic about how difficult it is to tax the super-rich in a world of tax havens. And he's been going at it for 10 years already.

Careless People (by Sarah Wynn-Williams)

I listened to a long discussion of this book on a podcast called Chisme Corporativo, run by two chicas from Mexico working to demystify the world of big business and USA power that controls much of the world.

The fact that Chisme Corporativo exists makes me very happy. If we're going to live in a world where US corporations have more power than your own government - particularly the case in Latin America - then it makes a lot of sense to learn about US corporations, the people who run them, and why they make the decisions they do.

The book review quotes a part where Mark Zuckerberg finally realized that Facebook was instrumental in the first Trump election campaign, and just how much power that endows the company with.

And he digested this bombshell for three hours, and his thought process led him to this: "Maybe I should run for president!"

That's the type of person we are dealing with.

What's next

Inequality keeps rising and living standards are getting worse for everyone except the super rich. But we are learning more and more about the people and the processes responsible. Information is a seed for bringing society into better balance again.

I'm going to leave you with this quote I stole blatantly from Gary Stevenson's website:

"If we can really understand the problem, the answer will come out of it, because the answer is not separate from the problem."
― J. Krishnamurti

Have you read any of these books? What else would you add to this list?

18 May 2025 10:17pm GMT

16 May 2025

feedOMG! Ubuntu

elementary OS Preview Cool Upcoming Features

The elementary OS 8.0.1 release back in March brought an appreciable set of improvements with it, including a much-improved Files app, but as ever in development: the work never stops! Project founder Danielle Foré recently recapped a few smaller features that have been issued to users of the Ubuntu-based Linux distribution as software updates. If you run elementary OS 8.x, install your updates and eat your greens, and you should be benefitting from those changes (if you don't have them, go update to get 'em). But Danielle also gave us an early look at an exciting new app and […]


16 May 2025 3:15pm GMT

14 May 2025

feedKernel Planet

Linux Plumbers Conference: Submission time for Linux Plumbers 2025

Submissions for the Refereed Track and Microconferences are now open. Linux Plumbers will be held this year in Tokyo from December 11th - 13th (Note, the 13th is on a Saturday).

The Refereed presentations are 45 minutes in length and should focus on a specific aspect of the "plumbing" in a Linux ecosystem. Examples of Linux plumbing include core kernel subsystems, init systems, core libraries, toolchains, windowing systems, management tools, device support, media creation/playback, testing, and so on. The best presentations are not about finished work, but rather problem statements, proposals, or proof-of-concept solutions that require face-to-face discussions and debate.

The Microconferences are 3 and a half hours of technical discussion, broken up into 15 to 30 minute subtopics. The only presentations allowed are those that are needed to bring the audience up to speed and should not last more than half the allotted time for the subtopic. To submit a Microconference, provide a topic, some examples of subtopics to be discussed and a list of key people that should be present to have meaningful discussions. For Microconferences that have been to Linux Plumbers in the past, they should provide a list of accomplishments that were a direct result of the discussions from their previous sessions (with links to patches and such).

Presentations and Microconference subtopic leads should ideally be physically present at the conference. Remote presentations may be available but are strongly discouraged.

The Refereed submissions end at 11:59PM UTC on Wednesday, September 10, 2025.
The Microconference submissions end at 11:59PM UTC on Sunday, June 29, 2025.

Go ahead and submit your Refereed track presentation or Microconference topic. We look forward to the great submissions that make Linux Plumbers the best technical conference there is.

14 May 2025 8:44pm GMT

12 May 2025

feedPlanet Arch Linux

Am I a musician yet? - Superbooth 2025 Experience

I went to Berlin for a music event and here is what happened.

12 May 2025 12:00am GMT

30 Apr 2025

feedKernel Planet

Brendan Gregg: Doom GPU Flame Graphs

AI Flame Graphs are now open source and include Intel Battlemage GPU support, which means it can also generate full-stack GPU flame graphs for providing new insights into gaming performance, especially when coupled with FlameScope (an older open source project of mine). Here's an example of GZDoom, and I'll start with flame scopes for both CPU and GPU utilization, with details annotated:

(Here are the raw CPU and GPU versions.) FlameScope shows a subsecond-offset heatmap of profile samples, where each column is one second (in this example, made up of 50 x 20ms blocks) and the color depth represents the number of samples, revealing variance and perturbation that you can select to generate a flame graph just for that time range.

Putting these CPU and GPU flame scopes side by side has enabled your eyes to do pattern matching to solve what would otherwise be a time-consuming task of performance correlation. The gaps in the GPU flame scope on the right - where the GPU was not doing much work - match the heavier periods of CPU work on the left.

CPU Analysis

FlameScope lets us click on the interesting periods. By selecting one of the CPU shader compilation stripes we get the flame graph just for that range:

This is brilliant, and we can see exactly why the CPUs were busy for about 180 ms (the vertical length of the red stripe): it's doing compilation of GPU shaders and some NIR preprocessing (optimizations to the NIR intermediate representation that Mesa uses internally). If you are new to flame graphs, you look for the widest towers and optimize them first. Here is the interactive SVG.

CPU flame graphs and CPU flame scope aren't new (from 2011 and 2018, both open source). What is new is full-stack GPU flame graphs and GPU flame scope.

GPU Analysis

Interesting details can also be selected in the GPU FlameScope for generating GPU flame graphs. This example selects the "room 3" range, which is a room in the Doom map that contains hundreds of enemies. The green frames are the actual instructions running on the GPU, aqua shows the source for these functions, and red (C) and yellow (C++) show the CPU code paths that initiated the GPU programs. The gray "-" frames just help highlight the boundary between CPU and GPU code. (This is similar to what I described in the AI flame graphs post, which included extra frames for kernel code.) The x-axis is proportional to cost, so you look for the widest things and find ways to reduce them.

I've included the interactive SVG version of this flame graph so you can mouse-over elements and click to zoom. (PNG version.)

The GPU flame graph is split between stalls coming from rendering walls (41.4%), postprocessing effects (35.7%), stenciling (17.2%), and sprites (4.95%). The CPU stacks are further differentiated by the individual shaders that are causing stalls, along with the reasons for those stalls.

GZDoom

We picked GZDoom to try since it's an open source version of a well known game that runs on Linux (our profiler does not support Windows yet). Intel Battlemage makes light work of GZDoom, however, and since the GPU profile is stall-based we weren't getting many samples. We could have switched to a more modern and GPU-demanding game, but didn't have any great open source ideas, so I figured we'd just make GZDoom more demanding. We built GPU demanding maps for GZDoom (I can't believe I have found a work-related reason to be using Slade), and also set some Battlemage tunables to limit resources, magnifying the utilization of remaining resources.

Our GZDoom test map has three rooms: room 1 is empty, room 2 is filled with torches, and room 3 is open with a large skybox and filled with enemies, including spawnpoints for Sergeants. This gave us a few different workloads to examine by walking between the rooms.

Using iaprof: Intel's open source accelerator profiler

The AI Flame Graph project is pioneering work, and has needed various changes to graphics compilers, libraries, and kernel drivers, not just the code but also how they are built. Since Intel has its own public cloud (the Intel® Tiber™ AI Cloud) we can fix the software stack in advance so that for customers it "just works." Check the available releases. It currently supports the Intel Max Series GPU.

If you aren't on the Intel cloud, or you wish to try this with Intel Battlemage, then it can require a lot of work to get the system ready to be profiled, including custom kernel and library builds.

If you are new to custom kernel builds and library tinkering, then getting this all working may feel like Nightmare! difficulty. Over time things will improve and gradually get easier: check the github docs. Intel can also develop a much easier version of this tool as part of a broader product offering and get it working on more than just Linux and Battlemage (either watch this space or, if you have an Intel rep, ask them to make it a priority).

Once you have it all working, you can run the iaprof command to profile the GPU. E.g.:

git clone https://github.com/intel/iaprof   # fetch the profiler source
cd iaprof
make deps                                   # fetch/build dependencies
make
iaprof record > profile.txt                 # capture a GPU profile
cat profile.txt | iaprof flame > flame.svg  # render it as a flame graph

iaprof is modeled on the Linux perf command. (Maybe one day it'll become included in perf directly.) Thanks to Gabriel Muñoz for getting the work done to get this open sourced.

FAQ and Future Work

From the launch of AI flame graphs last year, I can guess what FAQ #1 will be: "What about NVIDIA?". They do have flame graphs in Nsight Graphics for GPU workloads, although their flame graphs are currently shallow as it is GPU code only, and onerous to use as I believe it requires an interposer; on the plus side they have click-to-source. The new GPU profiling method we've been developing allows for easy, everything, anytime profiling, like you expect from CPU profilers.

Future work will include github releases, more hardware support, and overhead reduction. We're the first to use eustalls in this way, and we need to add more optimization to reach our target of <5% overhead, especially with the i915 driver.

Conclusion

We've open sourced AI flame graphs and tested it on new hardware, Intel Battlemage, and a non-AI workload: GZDoom (gaming). It's great to see a view of both CPU and GPU resources down to millisecond resolution, where we can see visual patterns in the flame scope heat maps that can be selected to produce flame graphs to show the code. We applied these new tools to GZDoom and explained GPU pauses by selecting the corresponding CPU burst and reading the flame graph, as well as GPU code use for arbitrary time windows.

While we have open sourced this, getting it all running requires Intel hardware and Linux kernel and library tinkering - which can be a lot of work. (Actually playing Doom on Nightmare! difficulty may be easier.) This will get better over time. We look forward to seeing if anyone can fight their way through this work in the meantime and what new performance issues they can solve.

Authors: Brendan Gregg, Ben Olson, Brandon Kammerdiener, Gabriel Muñoz.

30 Apr 2025 2:00pm GMT

feedPlanet Gentoo

Urgent - OSU Open Source Lab needs your help

OSL logo

Oregon State University's Open Source Lab (OSL) has been a major supporter of Gentoo Linux and many other software projects for years. It is currently hosting several of our infrastructure servers as well as development machines for exotic architectures, and is critical for Gentoo operation.

Due to drops in sponsor contributions, OSL has been operating at a loss for a while, with the OSU College of Engineering picking up the rest of the bill. Now that university funding has been cut, this is no longer possible, and unless US$250,000 can be provided within the next two weeks, OSL will have to shut down. The details can be found in a blog post by Lance Albertson, the director of OSL.

Please, if you value and use Gentoo Linux or any of the other projects that OSL has been supporting, and you (or the company you work for) are in a position to make funds available, contact the address in the blog post. Obviously, long-term corporate sponsorships would serve best here - for what it's worth, OSL developers have ended up at almost every big US tech corporation by now. Right now, though, probably everything helps.

30 Apr 2025 5:00am GMT

18 Apr 2025

feedPlanet Arch Linux

Easter hack: terraform-provider-openwrt

April is usually tax season for most people in Norway, and as I got some "money back on the skætt" I wound up purchasing an OpenWrt One to replace my 13-14 year old Asus router. I've been meaning to learn a bit more about networking in general and getting an OpenWrt router seemed like a fun project. Last year I bought a Beryl AX from GL-Inet as I was travelling for a few weeks.

18 Apr 2025 12:00am GMT

17 Apr 2025

feedPlanet Arch Linux

Valkey to replace Redis in the [extra] Repository

Valkey, a high-performance key/value datastore, will be replacing redis in the [extra] repository. This change is due to Redis modifying its license from BSD-3-Clause to RSALv2 and SSPLv1 on March 20th, 2024[0]. Arch Linux Package Maintainers intend to support the availability of the redis package for roughly 14 days from the day of this post, to enable a smooth transition to valkey. After the 14 day transition period has ended, the redis package will be moved to the AUR. Also, from this point forward, the redis package will not receive any additional updates and should be considered deprecated until it is removed. Users are recommended to begin transitioning their use of Redis to Valkey as soon as possible to avoid possible complications after the 14 day transition window closes. [0] https://github.com/redis/redis/commit/0b34396924eca4edc524469886dc5be6c77ec4ed

17 Apr 2025 12:00am GMT

18 Mar 2025

feedKernel Planet

Matthew Garrett: Failing upwards: the Twitter encrypted DM failure

Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.
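To make the fan-out concrete, here is a minimal C++ sketch of the scheme as described above. The helper names are hypothetical placeholders for a real AES/P-256 library rather than Twitter's actual code; the point is that one message key is wrapped once per device key the server hands out, which is exactly where a key-substituting server gets its foothold.

#include <map>
#include <string>
#include <vector>

using Bytes = std::vector<unsigned char>;

// Hypothetical crypto hooks; any real AES + P-256 ECDH library would do.
struct CryptoProvider {
    Bytes (*randomAesKey)();
    Bytes (*aesEncrypt)(const Bytes& plaintext, const Bytes& key);
    Bytes (*wrapForDevice)(const Bytes& key, const Bytes& p256PublicKey);
};

struct EncryptedDM {
    Bytes ciphertext;                          // AES-encrypted message body
    std::map<std::string, Bytes> wrappedKeys;  // device id -> wrapped AES key
};

EncryptedDM encryptDM(const Bytes& plaintext,
                      const std::map<std::string, Bytes>& deviceKeys,
                      const CryptoProvider& crypto) {
    EncryptedDM dm;
    const Bytes messageKey = crypto.randomAesKey();  // fresh key per message
    dm.ciphertext = crypto.aesEncrypt(plaintext, messageKey);
    // Wrap the message key for every device key the server claims the
    // recipient owns; a malicious server can simply add a key of its own.
    for (const auto& [deviceId, publicKey] : deviceKeys)
        dm.wrappedKeys[deviceId] = crypto.wrapForDevice(messageKey, publicKey);
    return dm;
}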

But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.

This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access to not only all new messages created with that key, but also all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.

To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, promising further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.

Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.

Anyway. Use Signal.


18 Mar 2025 11:58pm GMT

20 Feb 2025

feedPlanet Gentoo

Bootable Gentoo QCOW2 disk images - ready for the cloud!

Larry the Qcow2

We are very happy to announce new official downloads on our website and our mirrors: Gentoo for amd64 (x86-64) and arm64 (aarch64), as immediately bootable disk images in qemu's QCOW2 format! The images, updated weekly, include an EFI boot partition and a fully functional Gentoo installation; either with no network activated but a password-less root login on the console ("no root pw"), or with network activated, all accounts initially locked, but cloud-init running on boot ("cloud-init"). Enjoy, and read on for more!

Questions and answers

How can I quickly test the images?

We recommend using the "no root password" images and qemu system emulation. Both amd64 and arm64 images have all the necessary drivers ready for that. Boot them up, use as login name "root", and you will immediately get a fully functional Gentoo shell. The set of installed packages is similar to that of an administration or rescue system, with a focus more on network environment and less on exotic hardware. Of course you can emerge whatever you need though, and binary package sources are already configured too.

What settings do I need for qemu?

You need qemu with the target architecture (aarch64 or x86_64) enabled in QEMU_SOFTMMU_TARGETS, and the UEFI firmware.

app-emulation/qemu
sys-firmware/edk2-bin

You should disable the USE flag "pin-upstream-blobs" on qemu and update edk2-bin to at least the 2024 version. Also, since you probably want to use KVM hardware acceleration for the virtualization, make sure that your kernel supports that and that your current user is in the kvm group.

For testing the amd64 (x86-64) images, a command line could look like this, configuring 8G RAM and 4 CPU threads with KVM acceleration:

qemu-system-x86_64 \
        -m 8G -smp 4 -cpu host -accel kvm -vga virtio -smbios type=0,uefi=on \
        -drive if=pflash,unit=0,readonly=on,file=/usr/share/edk2/OvmfX64/OVMF_CODE_4M.qcow2,format=qcow2 \
        -drive file=di-amd64-console.qcow2 &

For testing the arm64 (aarch64) images, a command line could look like this:

qemu-system-aarch64 \
        -machine virt -cpu neoverse-v1 -m 8G -smp 4 -device virtio-gpu-pci -device usb-ehci -device usb-kbd \
        -drive if=pflash,unit=0,readonly=on,file=/usr/share/edk2/ArmVirtQemu-AARCH64/QEMU_EFI.qcow2 \
        -drive file=di-arm64-console.qcow2 &

Please consult the qemu documentation for more details.

Can I install the images onto a real harddisk / SSD?

Sure. Gentoo can do anything. The main limitations are that your machine must boot via EFI (see the question on legacy boot below) and that the target disk must use 512-byte sectors, which you can check with blockdev:

pinacolada ~ # blockdev --report /dev/sdb
RO    RA   SSZ   BSZ        StartSec            Size   Device
rw   256   512  4096               0   4000787030016   /dev/sdb

So, this is an expert workflow.

Assuming your disk is /dev/sdb and has a size of at least 20GByte, you can then use the utility qemu-img to decompress the image onto the raw device. Warning, this obviously overwrites the first 20Gbyte of /dev/sdb (and with that the existing boot sector and partition table):

qemu-img convert -O raw di-amd64-console.qcow2 /dev/sdb

Afterwards, you can and should extend the new root partition with xfs_growfs, create an additional swap partition behind it, possibly adapt /etc/fstab and the grub configuration, …

If you are familiar with partitioning and handling disk images you can for sure imagine more workflow variants; you might find also the qemu-nbd tool interesting.

So what are the cloud-init images good for?

Well, for the cloud. Or more precisely, for any environment where a configuration data source for cloud-init is available. If this is already provided for you, the image should work out of the box. If not, well, you can provide the configuration data manually, but be warned that this is a non-trivial task.

Are you planning to support further architectures?

Eventually yes, in particular (EFI) riscv64 and loongarch64.

Are you planning to support legacy boot?

No, since the placement of the bootloader outside the file system complicates things.

How about disks with 4096 byte sectors?

Well… let's see how much demand this feature finds. If enough people are interested, we should be able to generate an alternative image with a corresponding partition table.

Why XFS as file system?

It has some features that ext4 is sorely missing (reflinks and copy-on-write), but at the same time is rock-solid and reliable.
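As a quick illustration of what reflinks buy you (my sketch, not from the announcement): the FICLONE ioctl creates a copy that shares data blocks with the source until either file is modified, and it succeeds on an XFS filesystem with reflink support while failing with EOPNOTSUPP on ext4.

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

// Reflink copy: instant and space-free until one side is written to (CoW).
int main(int argc, char* argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }
    int src = open(argv[1], O_RDONLY);
    int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (src < 0 || dst < 0 || ioctl(dst, FICLONE, src) < 0) {
        perror("reflink copy failed");  // e.g. EOPNOTSUPP on ext4
        return 1;
    }
    close(dst);
    close(src);
    return 0;
}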

20 Feb 2025 6:00am GMT

01 Feb 2025

feedPlanet Gentoo

Tinderbox shutdown

Due to the lack of hardware, the Tinderbox (and CI) service is no longer operational.

I would like to take this opportunity to thank all the people who have always seen the Tinderbox as a valuable resource and who have promptly addressed bugs, significantly improving the quality of the packages we have in Portage as well as the user experience.

01 Feb 2025 7:08am GMT

16 Oct 2024

feedPlanet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have three buffering modes; the one relevant to this post is in-memory buffering.

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level unstability was too high, so I added a moving average implementation to try to smooth it.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
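Put together, the logic looks roughly like this (an illustrative sketch with made-up names and watermark values, not the actual MediaPlayerPrivateGStreamer code):

class BufferingHysteresis {
public:
    // Called from exactly one place per monitoring cycle with the
    // (smoothed) buffering level.
    bool update(double levelPercent)
    {
        if (!m_full && levelPercent >= s_highWatermark)
            m_full = true;   // logical level change to "full"
        else if (m_full && levelPercent <= s_lowWatermark)
            m_full = false;  // logical level change to "low"
        // Levels between the watermarks keep the previous logical state,
        // so small oscillations no longer flip the readyState.
        return m_full;
    }

private:
    static constexpr double s_lowWatermark { 20.0 };  // assumed values
    static constexpr double s_highWatermark { 80.0 };
    bool m_full { false };
};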

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has a buffering code much more robust than before. The unstabilities observed in Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

10 Sep 2024

feedPlanet Maemo

Don’t shoot yourself in the foot with the C++ move constructor

Move semantics can be very useful to transfer ownership of resources, but as with many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.

For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:

#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
 A() { PF; }
 virtual ~A() { PF; }
 A(A&& other)
 {
  PF;
  std::swap(i, other.i);
 }

 int i = 0;
};

class B : public A {
 public:
 B() { PF; }
 virtual ~B() { PF; }
 B(B&& other)
 {
  PF;
  std::swap(i, other.i);
  std::swap(j, other.j);
 }

 int j = 0;
};

If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!

Consider this usage of the classes defined before:

int main(int, char* argv[]) {
 printf("Creating B b1\n");
 B b1;
 b1.i = 1;
 b1.j = 2;
 printf("b1.i = %d\n", b1.i);
 printf("b1.j = %d\n", b1.j);
 printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
 A a(std::move(b1));
 printf("a.i = %d\n", a.i);
 // This may be reading memory beyond the object boundaries, which may not be
 // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
 printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);
 printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
 B b2(reinterpret_cast<B&&>(std::move(a)));
 printf("b2.i = %d\n", b2.i);
 printf("b2.j = %d\n", b2.j);
 printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");
 printf("Destroying b2, a, b1\n");
 return 0;
}

If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:

Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690

Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to references as a way to get rid of null pointer issues when possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object slicing nuance explained in this post, and when the size of the project and all the different things you have to keep in mind steal your focus, it's easy to forget about this.

So, please remember: never use move semantics that convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may come to regret it, having inadvertently created difficult-to-debug problems.
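If you want the compiler to catch this for you, a common guideline (C++ Core Guidelines C.67) is to suppress public copy/move in polymorphic base classes. A minimal sketch applied to the classes above:

// Slicing becomes a compile error: A's copy/move constructors are only
// reachable from derived classes, never from code holding a whole A.
class A {
 public:
  A() = default;
  virtual ~A() = default;

 protected:
  A(const A&) = default;
  A(A&&) = default;
};

class B : public A {
 public:
  B() = default;
  B(B&&) = default;  // still allowed to move its A subobject
};

// B b;
// A a(std::move(b));  // error: A::A(A&&) is protected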

Happy coding!


10 Sep 2024 7:58am GMT

17 Jun 2024

feedPlanet Maemo

Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].
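For reference, the projection in [1] follows the EWA splatting formulation of [3]: given the 3D covariance \Sigma of the ellipsoid, the projected 2D covariance is approximated as

\Sigma' = J W \Sigma W^T J^T

where W is the viewing transformation and J the Jacobian of the affine approximation of the projective transformation.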

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as their checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.

For example, it stores the covariance as scaling and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach would be to leverage the symmetry of the covariance matrix, storing only the diagonal and the upper triangle, thereby eliminating reconstruction and reducing storage requirements.

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.

Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:

Results using a non Gaussian-splat aware renderer

By directly storing the covariance as previously mentioned we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
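One packed layout consistent with those byte counts looks as follows (a sketch; the exact field order is my assumption, not something specified above):

#include <cstdint>

struct PackedSplat {
    uint16_t position[3];    // float16 x, y, z                    ->  6 bytes
    uint16_t covariance[6];  // float16 diagonal + upper triangle  -> 12 bytes
    uint8_t  rgba[4];        // diffuse color merged with opacity  ->  4 bytes
};
static_assert(sizeof(PackedSplat) == 22, "22 bytes per splat");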

Blending

The image formation model presented in the original paper [1] is similar to NeRF rendering, to which it is compared. This involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:

\begin{aligned}
C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} + C_{dst}\\
\alpha_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} + \alpha_{dst}
\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}

This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.
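In OpenGL terms, the two setups map to the following fixed-function state (a sketch; the premultiplication for the front-to-back variant happens in the shader, as described above):

#include <GL/gl.h>

void setupSplatBlending(bool backToFront) {
    glEnable(GL_BLEND);
    if (backToFront) {
        // Standard alpha blending; no assumption about framebuffer alpha.
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    } else {
        // Front-to-back: requires destination alpha cleared to zero first,
        // and the shader must output premultiplied color (color.rgb * color.a).
        glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
    }
}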

References

  1. Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.


17 Jun 2024 1:28pm GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed solely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the roof is ~ 100m of CAT5 cable (no fiber!) away from the rack. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The list of equipment deployed in this installation, along with more details, can be found on this wiki page and this ticket.

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)

  • noris.net for sponsoring the co-location

  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about the individual modules/cards.

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
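To put that figure into perspective, a back-of-the-envelope calculation (assuming the standard E1 bit rate of 2.048 Mbit/s): 12.5e-9 x 2.048e6 bit/s ≈ 0.0256 bit/s, i.e. roughly one bit of slip every 39 seconds, or a full 256-bit E1 frame of slip in under three hours.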

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT