05 Feb 2025
LXer Linux News
Firefox 136 Promises Hardware Video Decoding for AMD GPUs on Linux, Vertical Tabs
Now that Firefox 135 is rolling out to users all over the globe, Mozilla has promoted the next major release of its open-source web browser, Firefox 136, to the beta channel for public testing.
05 Feb 2025 3:28pm GMT
How I turned traditional Ubuntu Mate into a modern, minimal desktop - and you can too
Ubuntu Mate 24.10 is a desktop operating system that helps ease new users into the world of Linux with a fairly traditional UI that can be easily updated with built-in features.
05 Feb 2025 2:12pm GMT
Planet GNOME
Felix Häcker: Shortwave 5.0
You want background playback? You get background playback! Shortwave 5.0 is now available and finally continues playback when you close the window, resolving the "most popular" issue on GitLab!
Shortwave uses the new Flatpak background portal for this, which means that the current playback status is now also displayed in the "Background Apps" menu.
The recording feature has also been overhauled. I have addressed a lot of user feedback here, e.g. you can now choose between 3 different modes:
- Save All Tracks: Automatically save all recorded tracks
- Decide for Each Track: Temporarily record tracks and save only the ones you want
- Record Nothing: Stations are played without recording
In addition to that the directory for saving recorded tracks can be customized, and users can now configure the minimum and maximum duration of recordings.
There is a new dialog window with additional details and options for current or past played tracks. For example, you no longer need to worry about forgetting to save your favorite track when the recording is finished - you can now mark tracks directly during playback so that they are automatically saved when the recording is completed.
You don't even need to open Shortwave for this, thanks to the improved notifications you can decide directly when a new track gets played whether you want to save it or not.
Of course the release also includes the usual number of bug fixes and improvements. For example, the volume can now be changed using the keyboard shortcut.
Enjoy!
05 Feb 2025 1:52pm GMT
Linuxiac
Debian 13 to Feature GNOME 48 Desktop Environment
The next major Debian release, 13 "Trixie," is expected to ship with GNOME 48 desktop environment.
05 Feb 2025 1:48pm GMT
LXer Linux News
AlmaLinux Now Has a DOD Guide for Security Hardening the Distro as Much as You Want
Users who take advantage of the new DISA STIG can give their AlmaLinux servers military-grade hardening.
05 Feb 2025 12:57pm GMT
Planet KDE | English
Qt for MCUs 2.8.2 LTS released
Qt for MCUs 2.8.2 LTS (Long-Term Support) has been released and is available for download. This patch release provides bug fixes and other improvements while maintaining source compatibility with Qt for MCUs 2.8. It does not add any new functionality.
05 Feb 2025 12:02pm GMT
Fedora People
Ben Cotton: Open source projects don’t exist separately from the outside world
For some people, contributing to an open source project is a diversion from the world around them. It's a fun way to work on well-defined problems with a community of like-minded people. But it's important to remember that open source contributors - and their projects - still exist in the real world.
Global projects are going to have interactions with laws that relate to global relations. For example, the Linux Foundation recently issued guidance on complying with the U.S. Office of Foreign Assets Control sanctions. Projects that host services have had to pay attention to Europe's General Data Protection Regulation (GDPR). And everyone is trying to figure out what Europe's Cyber Resilience Act (CRA) will mean for open source projects.
Laws aren't the only effect of the outside world on projects. When the COVID-19 pandemic was in the most acute phase in the spring of 2020, I was filled with worry and uncertainty, as were many others. I worried about my family, but I also worried about the Fedora community. As a leader in the community, I felt a sense of responsibility to make sure everyone was doing well. When governments enact laws hostile to the identity of members of the community, I worry for them.
It's tempting to think of open source as a noble pursuit that's separate from the noise of daily life. But you do your community a disservice when you take that approach. If people in your community want to use contributing as an escape, let them. But be aware of how people are doing and create a space where they can feel comfortable stepping away to take care of themselves.
This post's featured photo by Christian Lue on Unsplash.
The post Open source projects don't exist separately from the outside world appeared first on Duck Alignment Academy.
05 Feb 2025 12:00pm GMT
Planet Debian
Reproducible Builds: Reproducible Builds in January 2025
Welcome to the first report in 2025 from the Reproducible Builds project!
Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Table of contents:
- reproduce.debian.net
- Two new academic papers
- Distribution work
- On our mailing list…
- Upstream patches
- diffoscope
- Website updates
- Reproducibility testing framework
reproduce.debian.net
The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering that is rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.
This month, however, we are pleased to announce that in addition to the existing amd64.reproduce.debian.net and i386.reproduce.debian.net architecture-specific pages, we now build for three more architectures (for a total of five): arm64, armhf and riscv64.
Two new academic papers
Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystem, the abstract outlines its optimistic findings:
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.
The entire PDF is available online to view.
In addition, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI) published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale?.
Answering strongly in the affirmative, the article's abstract reads as follows:
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository. [We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.
Distribution work
There has been the usual work in various distributions this month, such as:
- Arch Linux developer kpcyrd has provided a status report for January 2025 related to Arch's progress towards full reproducibility. kpcyrd notes in particular progress towards making a "minimal reproducible bootable system" - that is, an Arch installation containing only reproducible packages.
- 10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month, adding to our knowledge about identified issues. A number of issue types were updated as well.
- The FreeBSD Foundation announced that "a planned project to deliver zero-trust builds has begun in January 2025". Supported by the Sovereign Tech Agency, this project is centered on the various build processes, and the "primary goal of this work is to enable the entire release process to run without requiring root access, and that build artifacts build reproducibly - that is, that a third party can build bit-for-bit identical artifacts." The full announcement can be found online, which includes an estimated schedule and other details.
- Finally, for openSUSE, Bernhard M. Wiedemann published another report for that distribution.
On our mailing list…
On our mailing list this month:
- Following up on a substantial amount of previous work pertaining to the Sphinx documentation generator, James Addison asked a question about the relationship between the SOURCE_DATE_EPOCH environment variable and testing, which generated a number of replies. (A small sketch of setting this variable follows this list.)
- Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya's issue is that even the smallest amount of post-processing of the filesystem results in the modification of the "Last mount" and "Last write" timestamps.
- James Addison also investigated an interesting issue surrounding our disorderfs filesystem. In particular:
FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests - for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them - to fail.
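As background for the SOURCE_DATE_EPOCH thread mentioned above, the variable is the convention documented at reproducible-builds.org for pinning embedded timestamps to a fixed value. A minimal sketch of exporting it from version-control metadata before a build follows; the build command itself is a placeholder:

# Pin "current time" timestamps embedded by build tools to the date of the
# last commit, per the SOURCE_DATE_EPOCH specification.
export SOURCE_DATE_EPOCH="$(git log -1 --pretty=%ct)"

# Placeholder build step; tools such as Sphinx honour SOURCE_DATE_EPOCH when
# stamping dates into generated documentation.
make html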
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
- Bernhard M. Wiedemann:
  - Komikku (nocheck)
  - abseil-cpp (race)
  - dunst (date)
  - eclipse-egit (jar-mtime minor)
  - exaile (race)
  - gawk (bug)
  - gimp3 (png date)
  - intel (ASLR)
  - ioquake3 (debugsource contains date and time)
  - joker (sort)
  - libchardet
  - llama.cpp (random)
  - llama.cpp (-march=native-related issue)
  - nethack (race)
  - netrek-client-cow (date)
  - nvidia-modprobe (date)
  - nvidia-persistenced (date)
  - obs-build (toolchain bug, mis-parses changelog)
  - perl-libconfigfile (race)
  - pgvector (CPU)
  - python-Django4 (FTBFS-2038)
  - python-python-datamatrix (FTBFS)
  - qore-ssh2-module (GIGO-bug)
  - rpm (UID in cpio header from rpmbuild)
  - zig (CPU-related issue)
- Chris Lamb:
- Egbert Eich:
- Valentin Lefebvre:
  - uki-tool (toolchain)
- Marvin Friedrich:
  - cargo-packaging/rusty_v8 (upstream toolchain bugfix)
- James Addison:
- Pol Dellaiera:
  - PHP Ecosystem: composer/composer#12090 which was then gracefully fixed by Jordi Boggiano at composer/composer#12263.
diffoscope
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:
- Security fixes:
  - Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report. […]
  - Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report. […][…]
  - Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat. […]
- Bug fixes:
  - Correctly identify changes to only the line-endings of files; don't mark them as Ordering differences only. […]
  - When passing files on the command line, don't call specialize(…) before we've checked that the files are identical or not. […]
  - Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory. […]
  - Don't cause a traceback if cbfstool extraction failed. […]
  - Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding any zipinfo output that is not UTF-8 compliant. […]
- Testsuite improvements:
- Misc improvements:
  - Drop unused subprocess imports. […][…]
  - Drop an unused function in iso9660.py. […]
  - Inline a call and check of Config().force_details; no need for an additional variable in this particular method. […]
  - Remove an unnecessary return value from the Difference.check_for_ordering_differences method. […]
  - Remove unused logging facility from a few comparators. […]
  - Update copyright years. […][…]
In addition, fridtjof added support for the ASAR .tar-like archive format. […][…][…][…] and lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 285 […][…] and 286 […][…].
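For readers who have not used the tool directly, a typical diffoscope invocation simply compares two build artifacts and writes a report; a minimal sketch, with placeholder package file names:

# Compare two builds of the same package and write an HTML report
diffoscope --html report.html first-build/example_1.0-1_amd64.deb second-build/example_1.0-1_amd64.deb

# Or print a plain-text diff to standard output
diffoscope first-build/example_1.0-1_amd64.deb second-build/example_1.0-1_amd64.deb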
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following changes:
- Clarify the --verbose and non-verbose output of bin/strip-nondeterminism so we don't imply we are normalizing files that we are not. […]
- Bump Standards-Version to 4.7.0. […]
Website updates
There were a large number of improvements made to our website this month, including:
- Arnout Engelen:
  - Update the link to NixOS' reproducibility-related issue template on the NixOS-specific contribute page […] and remove an outdated link. […]
- Holger Levsen:
  - Check, deduplicate, update and generally clean up a number of presentations linked on our Talks & Resources page. […][…][…][…]
- James Addison:
  - Add some file permissions hints and guidance on the Archive metadata page. […]
- Michael R. Crusoe:
  - Add an R example to the SOURCE_DATE_EPOCH documentation. […]
  - Update the website's README to make the setup command copy & paste friendly. […]
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
- reproduce.debian.net-related:
  - Add support for rebuilding the armhf architecture. […][…]
  - Add support for rebuilding the arm64 architecture. […][…][…][…]
  - Add support for rebuilding the riscv64 architecture. […][…]
  - Move the i386 builder to the osuosl5 node. […][…][…][…]
  - Don't run our rebuilders on a public port. […][…]
  - Add database backups on all builders and add links. […][…]
  - Rework and dramatically improve the statistics collection and generation. […][…][…][…][…][…]
  - Add contact info to the main page […], thumbnails […] as well as the new, missing architectures. […]
  - Move the amd64 worker to the osuosl4 node. […]
  - Run the underlying debrebuild script under nice. […]
  - Try to use TMPDIR when calling debrebuild. […][…]
- buildinfos.debian.net-related:
- FreeBSD-related:
- Misc:
In addition:
- Ed Maste modified the FreeBSD build system to clean the object directory before commencing a build. […]
- Gioele Barabucci updated the rebuilder stats to first add a category for network errors […] as well as to categorise failures without a diffoscope log […].
- Jessica Clarke also made some FreeBSD-related changes, including:
- Jochen Sprickerhof:
  - Fix logic for old files saved on buildinfos.debian.net. […]
  - Rework and simplify the generation of statistics linked from reproduce.debian.net. […][…][…][…]
- Roland Clobus:
Lastly, both Holger Levsen […] and Vagrant Cascadian […] performed some node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
- IRC: #reproducible-builds on irc.oftc.net
- Mastodon: @reproducible_builds@fosstodon.org
- Mailing list: rb-general@lists.reproducible-builds.org
- Twitter: @ReproBuilds
05 Feb 2025 11:49am GMT
Planet KDE | English
Which is the best LLM for prompting QML code (featuring DeepSeek v3)
Claude 3.5 Sonnet is the best LLM to write QML code when prompted in English. If you want to know why we reached this conclusion, keep reading.
05 Feb 2025 6:01am GMT
Linux Today
Best Free and Open Source Alternatives to Apple Passwords
Apple Passwords is a password manager application which lets users store and access encrypted account information. Passwords is proprietary software. We recommend the best free and open source alternatives for Linux.
The post Best Free and Open Source Alternatives to Apple Passwords appeared first on Linux Today.
05 Feb 2025 3:00am GMT
Top Cross-Platform Apps for Linux, Windows, and Mac in 2025
One of the most significant advancements in software development is the ability to create apps that work seamlessly across different operating systems like Linux, Windows, and Mac. This cross-platform compatibility ensures that users don't have to worry about switching devices or operating systems - they can enjoy the same apps, features, and functionality everywhere. In […]
The post Top Cross-Platform Apps for Linux, Windows, and Mac in 2025 appeared first on Linux Today.
05 Feb 2025 1:00am GMT
Linuxiac
Thunderbird 135 Brings Fixes for IMAP, POP3, and Calendar Users
Mozilla Thunderbird 135 rolls out with new add-on support, improved OAuth2 for CardDAV, and key bug fixes (only for testing purposes).
05 Feb 2025 12:31am GMT
Planet KDE | English
New Video: Memilio Impasto Brushes!
Ramon Miranda has published a new video on the Krita channel: Memilio Impasto Brushes! Who doesn't love a nice impasto effect!
05 Feb 2025 12:00am GMT
Planet GNOME
Flathub Blog: On the Go: Making it Easier to Find Linux Apps for Phones & Tablets
With apps made for different form factors, it can be hard to find what works for your specific device. For example, we know it can be a bit difficult to find great apps that are actually designed to be used on a mobile phone or tablet. To help solve this, we're introducing a new collection: On the Go.
As the premier source of apps for Linux, Flathub serves a wide range of people across a huge variety of hardware: from ultra powerful developer workstations to thin and light tablets; from handheld gaming consoles to a growing number of mobile phones. Generally any app on Flathub will work on a desktop or laptop with a large display, keyboard, and mouse or trackpad. However, devices with only touch input and smaller screen sizes have more constraints.
Revealing the App Ecosystem
Using existing data and open standards, we're now highlighting apps on Flathub that report as being designed to work on these mobile form factors. This new On the Go collection uses existing device support data submitted by app developers in their MetaInfo, the same spec that is used to build those app's listings for Flathub and other app store clients. The collection is featured on the Flathub.org home page for all devices.
Many of these apps are adaptive across screen sizes and input methods; you might be surprised to know that your favorite app on your desktop will also work great on a Linux phone, tablet, or Steam Deck's touch screen. We aim to help reveal just how rich and well-rounded the app ecosystem already is for these devices - and to give app developers another place for their apps to shine and be discovered.
Developers: It's Up to You
As of this writing there are over 150 apps in the collection, but we expect there are cases where app developers have not provided the requisite device support data.
If you're the creator of an app that should work well on mobile form factors but isn't featured in the collection, take a minute to double-check the documentation and your own app's MetaInfo to ensure it's accurate. Device support data can also be used by native app store clients across form factors to determine what apps are displayed or how they are ranked, so it's a good idea to ensure it's up to date regardless of what your app supports.
05 Feb 2025 12:00am GMT
04 Feb 2025
Linux Today
Linux Kernel Source Code Surpasses 40 Million Lines [January 2025 Update]
As of January 2025, the Linux kernel source has approximately 40 million lines of code! This is one of the greatest achievements in the history of open-source, community-driven projects.
The post Linux Kernel Source Code Surpasses 40 Million Lines [January 2025 Update] appeared first on Linux Today.
04 Feb 2025 11:00pm GMT
Linuxiac
Serpent OS Needs Your Support
Financial troubles force Ikey Doherty to delay Serpent OS development, putting the project's future at risk.
04 Feb 2025 9:12pm GMT
Planet Ubuntu
Lubuntu Blog: Lubuntu Plucky Puffin Alpha Notes
Lubuntu Plucky Puffin is the current development branch of Lubuntu, which will become 25.04. Since the release of 24.10, we have been hard at work polishing the experience and fixing bugs in the upcoming release. Below, we detail some of the changes you can look forward to in 25.04. Two Minute Minimal Install When installing […]
04 Feb 2025 8:32pm GMT
Ubuntu Studio: LTS Upgrades (22.04 to 24.04) ARE BACK!
Following a bug in ubuntu-release-upgrader which was causing Ubuntu Studio 22.04 LTS to fail to upgrade to 24.04 LTS, we are pleased to announce that this bug has been fixed, and upgrades now work.
As of this writing, this update is being propagated to the various Ubuntu mirrors throughout the world. The version of ubuntu-release-upgrader needed is 24.04.26 or higher, and is automatically pulled from the 24.04 repositories upon upgrade.
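A quick way to confirm that your system already sees the fixed upgrader, and then to start the LTS upgrade, is roughly the following sketch; ubuntu-release-upgrader-core is assumed to be the relevant binary package on your install:

# Check the available ubuntu-release-upgrader version (should be 24.04.26 or higher)
apt policy ubuntu-release-upgrader-core

# Start the 22.04 LTS to 24.04 LTS upgrade
sudo do-release-upgrade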
Unfortunately, while testing this fix, we noticed that, due to the time_t64 transition which prevents the 2038 problem, some packages get removed. We have noticed that, if upgrading from 22.04 LTS to 24.04 LTS, the following applications get removed (this list is not exhaustive):
- Blender
- Kdenlive
- digiKam
- GIMP
- Krita (doesn't get upgraded)
To fix this, immediately after upgrade, open a Konsole terminal (ctrl-alt-t) and enter the following:
sudo apt -y remove ubuntustudio-graphics ubuntustudio-video ubuntustudio-photography && sudo apt -y install ubuntustudio-graphics ubuntustudio-video ubuntustudio-photography && sudo apt upgrade
If you do intend to upgrade, remember to purge any PPAs you may have enabled via ppa-purge so that your upgrade will go as smoothly as possible.
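As a reminder, ppa-purge disables a PPA and downgrades its packages back to the Ubuntu archive versions; a minimal sketch, with a placeholder PPA name:

# Install ppa-purge if needed, then purge an example PPA before upgrading
sudo apt install ppa-purge
sudo ppa-purge ppa:example-user/example-ppa   # placeholder PPA name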
We apologize for the inconvenience that may have been caused by this bug, and we hope your upgrade process goes as smoothly as possible. There may be edge cases where this goes badly, as we cannot account for every installation and whatever third-party repositories may be enabled, in which case the best method is to back up your /home directory and do a clean installation.
Remember to upgrade soon, as Ubuntu Studio 22.04 goes End Of Life (EOL) in April!
04 Feb 2025 8:01pm GMT
Ubuntu Blog: The role of FIPS 140-3 in the latest FedRAMP guidance
There's good news in the US federal compliance space. The latest FedRAMP policy on the use of cryptographic modules relaxes some of the past restrictions that prevented organizations from applying critical security updates.
There has long been a tension between the requirements for strictly certified FIPS crypto modules and the need to keep software patched and up to date with the latest security vulnerability fixes. The new guidance goes some way toward resolving this tension - and best of all, it aligns perfectly with how we've already been approaching FIPS module updates for Ubuntu.
In this post we cover the basics of FIPS certification along with the new FedRAMP guidance that prioritizes security updates.
What is FIPS 140-3?
FIPS 140-3 is a NIST standard for ensuring that cryptography has been implemented correctly, protecting users against common pitfalls such as misconfigurations or weak algorithms. All US Government departments, federal agencies, contractors, and service providers to the Government are required to use FIPS validated crypto modules, and a number of industries have also adopted the FIPS 140-3 standard as a security best practice. A FIPS-compliant technology stack is therefore essential in these sectors, and Ubuntu provides the building blocks for a modern and innovative open source solution.
The latest FedRAMP guidance
A major use-case for FIPS-validated cryptography is for FedRAMP-accredited service providers. Up till now, the FedRAMP guidelines have mandated a strict adherence to using certified modules. This approach is changing, however, and new FedRAMP policy is now in place, aimed at giving cloud service providers and their independent assessors a more modern risk-based approach to security and compliance:
These requirements highlight the fact that whilst using FIPS-certified crypto is important, there is greater risk facing organizations who deploy code that contains known vulnerabilities. This new approach balances the use of FIPS-certified modules with vendors supplying critical security fixes, which matches exactly what Canonical provides with our module updates.
FIPS 140-3 certification
For Ubuntu, we have chosen a set of cryptographic libraries and utilities that have the widest usage and converted them to operate in FIPS mode. This means that we have disabled various disallowed algorithms and ciphers from the libraries, and made sure that they work by default in a FIPS compatible mode of operation. The exact modules may vary between LTS releases, though we currently include the userspace libraries OpenSSL, Libgcrypt & GnuTLS, along with the Linux kernel, which also provides the validated source of random entropy data.
In order to ensure that we have implemented the FIPS specifications correctly, we work with an accredited Cryptographic and Security Testing Laboratory (CSTL) which validates the implementation through a rigorous series of functional and operational tests. Our current lab partner is atsec information security, who we have engaged since 2016 for this work.
The final certification step is for NIST's Cryptographic Module Validation Program to perform their own checks; when they are satisfied they will publish the official certificates and policy documents.
How we apply patches
Ubuntu packages are built up of layers: we start with the upstream source code, and then apply subsequent patch sets. The first patches comprise any modifications we choose to make for compatibility - the changes that we apply in order to make the software package operate seamlessly with the rest of the software in the distribution. Security patches for fixing vulnerabilities are then applied on top of this.
When we convert the crypto packages to operate in FIPS mode, we make a set of patches that are applied after all the rest of the previous patches. In particular, the FIPS changes sit on top of the security fixes (where the security fixes don't affect the FIPS-specific code, which is the majority case). This means that we can have a high degree of certainty that the FIPS functionality remains unchanged and the security fixes won't affect the FIPS status of the modules, and hence why we can offer the updated modules to our customers for deployment in their FIPS environments.
Strict versions and updates
NIST assesses a particular version of the cryptographic module and issues the FIPS 140-3 certificates and security policy documents for this specific set of binary files. For Ubuntu, that means that we submit versions of the packages for certification and these versions then remain effectively frozen in time from the moment that they are handed over for certification. In particular, these packages don't receive any further security updates to fix any vulnerabilities that are discovered in the time after submission. The certification process can take months, or even years, and then the certificates are valid for 5 years henceforth. Even at the point when the certificates are issued, the modules may well contain unpatched vulnerabilities, and the longer the period since they were frozen in time the more vulnerable they become.
In order to address this obvious shortcoming, Canonical also provides updated versions of the FIPS 140-3 modules that include all the relevant security fixes. These modules are therefore no longer the exact same set of binary files that were submitted for validation, but we assert that the FIPS functionality that was assessed by the testing lab and by NIST remains unchanged, and we strongly recommend that everybody uses these updated versions so that your systems remain fully secure.
Which FIPS to use? -updates, -preview, strict
Canonical has provided FIPS modules in several forms up till now:
- "fips" - strict certified modules
- "fips-updates" - certified modules with security updates
- "fips-preview" - the strict modules prior to certification
We recommend that everyone uses the "fips-updates" modules to remain fully secure against security exploits. In the past, we have provided "fips" modules, which are not updated and likely contain known vulnerabilities, for customers who had a strong need to satisfy strict regimes - such as FedRAMP - that required exact binary versions of the certified modules. Now that the new FedRAMP guidance is being updated to take a more integrated view of security, we will be deprecating the strict "fips" modules and encouraging everyone to use the "fips-updates" modules.
With Ubuntu 22.04 we introduced another version of the modules called "fips-preview", which contained the strict binary packages that were submitted to NIST for validation, again without the security fixes applied. This will also be deprecated in favour of "fips-updates".
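On a machine attached to Ubuntu Pro, the updated modules are typically enabled through the Pro client; a minimal sketch, assuming the system is already attached with pro attach:

# Enable the FIPS modules that continue to receive security updates, then check status
sudo pro enable fips-updates
pro status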
Conclusion
Canonical welcomes the holistic approach to security espoused by the latest FedRAMP guidelines, which aligns perfectly with the FIPS 140-3 modules update strategy that we have already been supporting to provide our customers with the best possible security posture. This combines the NIST-validated crypto implementation needed to meet the strictest government standards with Canonical's 10+ years of security patching, all within the popular Ubuntu ecosystem.
For questions, contact us here.
Register for the webinar: Taking advantage of FIPS 140-3 Certification for Ubuntu 22.04 LTS
04 Feb 2025 6:30pm GMT
Fedora People
Fedora Infrastructure Status: Koji builds might be affected by mass branching
04 Feb 2025 2:00pm GMT
Planet Debian
Dominique Dumont: Azure API throttling strikes back
Hi
In my last blog, I explained how we resolved a throttling issue involving Azure storage API. In the end, I mentioned that I was not sure of the root cause of the throttling issue.
Even though we no longer had any problem in dev and preprod cluster, we still faced throttling issue with prod. The main difference between these 2 environments is that we have about 80 PVs in prod versus 15 in the other environments. Given that we manage 1500 pods in prod, 80 PVs does not look like a lot.
To continue the investigation, I've modified k8s-scheduled-volume-snapshotter to limit the number of snapshots done in a single cron run (see the add maxSnapshotCount parameter pull request).
In prod, we used the modified snapshotter to trigger snapshots one by one.
Even with all previous snapshots cleaned up, we could not trigger a single new snapshot without being throttled. I guess that, in the cron job, just checking the list of PVs to snapshot was enough to exhaust our API quota.
Azure docs mention that a leaky bucket algorithm is used for throttling. A full bucket holds tokens for 250 API calls, and the bucket gets 25 new tokens per second. Looks like that's not enough.
I was puzzled and out of ideas.
I looked for similar problems in AKS issues on GitHub, where I found this comment that recommends using the useDataPlaneAPI parameter in the CSI file driver. That was it!
I was flabbergasted by this parameter: why is the CSI file driver able to use 2 APIs? Why is one of them so limited? And more importantly, why is the limited API the default one?
Anyway, setting useDataPlaneAPI: "true" in our VolumeSnapshotClass manifest was the right solution. This indeed solved the throttling issue in our prod cluster.
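For readers wondering where that setting lives, here is a hypothetical, minimal VolumeSnapshotClass sketch for the Azure Files CSI driver (file.csi.azure.com); the class name is a placeholder and the exact field set should be checked against the driver documentation:

# Hypothetical example: a VolumeSnapshotClass passing useDataPlaneAPI to the Azure Files CSI driver
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: azurefile-snapclass-dataplane
driver: file.csi.azure.com
deletionPolicy: Delete
parameters:
  useDataPlaneAPI: "true"
EOF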
But not the snapshot issue. Amongst the 80 PVs, I still had 2 snapshots failing.
Fortunately, the error was mentioned in the description of the failed snapshots: we had too many (200) snapshots for these shared volumes.
What?! All these snapshots were cleaned up last week.
I then tried to delete these snapshots through the Azure console. But the console failed to delete them due to API throttling. Looks like the Azure console is not using the right API.
Anyway, I went back to the solution explained in my previous blog and listed all snapshots with the az command. I indeed had a lot of snapshots, many of them dated Jan 19 and 20. There was often a new bogus snapshot created every minute.
These were created during the first attempt at fixing the throttling issue. I guess that even though the CSI file driver was throttled, a snapshot was still created in the storage account, but the CSI driver did not see it and retried a minute later. What a mess.
Anyway, I've cleaned up these bogus snapshots again, and now snapshot creation is working fine.
For now.
All the best.
04 Feb 2025 1:23pm GMT
Planet GNOME
Jussi Pakkanen: The trials and tribulations of supporting CJK text in PDF
In the past I may have spoken critically of Truetype fonts and their usage in PDF files. Recently I have come to the conclusion that it may have been too harsh and that Truetype fonts are actually somewhat nice. Why? Because I have had to add support for CFF fonts to CapyPDF. This is a font format that comes from Adobe. It encodes textual PostScript drawing operations into binary bytecode. Wikipedia does not give dates, but it seems to have been developed in the late 80s - early 90s. The CFF part of the name is an abbreviation for "complicated font format".
Double-checks notes.
Compact font format. Yes, that is what I meant to write. Most people reading this have probably not ever even seen a CFF file, so you might be asking why supporting CFF fonts is even a thing nowadays. It's all quite simple. Many of the Truetype (and especially OpenType) fonts you see are not actually Truetype fonts. Instead they are Transfontners, glyphs in disguise. It is entirely valid to have a Truetype font that is merely an envelope holding a CFF font. As an example the Noto CJK fonts are like this. Aggregation of different formats is common in font files, and is the main reason OpenType fonts have like four different and mutually incompatible ways of specifying color emoji. None of the participating entities were willing to accept anyone else's format so the end result was to add all of them. If you want Asian language support, you have to dive into the bowels of the CFF rabid hole.
As most people probably do not have sufficient historical perspective, let's start by listing out some major computer science achievements that definitely existed when CFF was being designed.
- File format magic numbers
- Archive formats that specify both the offset and size of the elements within
- Archive formats that afford access to their data in O(number of items in the archive) rather than O(number of bytes in the file)
- Data compression
CFF chooses to not do any of this nonsense. It also does not believe in consistent offset types. Sometimes the offsets within data objects refer to other objects by their order in the index they are in. Sometimes they refer to number of bytes from the beginning of the file. Sometimes they refer to number of bytes from the beginning of the object the offset data is written in. Sometimes it refers to something else. One of the downsides of this is that while some of the data is neatly organized into index structures with specified offsets, a lot of it is just free floating in the file and needs the equivalent of three pointer dereferences to access.
Said offsets are stored with a variable width encoding like so:
This makes writing subset CFF font files a pain. In order to write an offset value at some location X, you first must serialize everything up to that point to know where the value would be written. To know the value to write you have to serialize the entire font up to the point where that data is stored. Typically the data comes later in the file than its offset location. You know what that means? Yes, storing all these index locations and hotpatching them afterwards once you find out where the actual data pointed to ended up. Be sure to compute your patching locations correctly lest you end up in lengthy debugging sessions where your subset font files do not render correctly. In fairness all of the incorrect writes were within the data array and thus 100% memory safe, and, really, isn't that the only thing that actually matters?
One of the main data structures in a CFF file is a font dictionary stored in, as the docs say, "key-value pairs". This is not true. The "key-value dictionary" is neither key-value nor is it a dictionary. The entries must come in a specific order (sometimes) so it is not a dictionary. The entries are not stored as key-value pairs but as value-key pairs. The more accurate description of "value-key somewhat ordered array" does lack some punch so it is understandable that they went with common terminology. The backwards ordering of elements to some people confusion bring might, but it perfect sense makes, as the designers of the format a long history with PostScript had. Unknown is whether some of them German were.
Anyhow, after staring directly into the morass of madness for a sufficient amount of time the following picture emerges.
Final words
The CFF specification document contains data needed to decipher CFF data streams in nice tabular format, which is easy to convert to an enum. Trying it fails with an error message saying that the file has prohibited copypasting. This is a bit rich coming from Adobe, whose current stance seems to be that they can take any document opened with their apps and use it for AI training. I'd like to conclude this blog post by sending the following message to the (assumed) middle manager who made the decision that publicly available specification documents should prohibit copypasting:
YOU GO IN THE CORNER AND THINK ABOUT WHAT YOU HAVE DONE! AND DON'T EVEN THINK ABOUT COMING BACK UNTIL YOU ARE READY TO APOLOGIZE TO EVERYONE FOR YOUR ACTIONS!
04 Feb 2025 1:16pm GMT
OMG! Ubuntu
Firefox 135 Brings New Tab Page Tweaks, AI Chatbot Access + More
Right on schedule, a new update to the Mozilla Firefox web browser is available for download. Last month's Firefox 134 release saw the New Tab page layout refreshed for users in the United States, let Linux go hands-on with touch-hold gestures, seeded Ecosia search engine, and fine-tuned the performance of the built-in pop-up blocker. Firefox 135, as you can probably intuit, brings an equally sizeable set of changes to the fore including a wider rollout of its new New Tab page layout to all locales where Stories are available: It's not a massive makeover, granted. But the new layout adjusts the […]
04 Feb 2025 10:10am GMT
Fedora People
Fedora Community Blog: Balkan Computer Congress, Novi Sad, Serbia
BalCCon 2k24 - Invisible Path
Fedora had a booth at BalCCon for the 8th time in a row. I'm personally very happy that we kept this streak, as the first BalCCon where Fedora had a presence was when I first became an Ambassador in Serbia. I'm not aware of any other booths that had such a strong presence.
For those unaware of this conference, it is focused on: hacking, information security, privacy, technology & society, making, lock-picking, and electronic arts. For a sharp eye, these are well-rounded topics for Free Software-oriented people, and that makes it a rather important conference in the region.
In addition to all that, people that organize these events are extremely welcoming and heart-warming people! Working in tandem: founders, volunteers, and speakers - all show teamwork in very challenging times, both now as well as in the past… To an untrained eye it would seem that every BalCCon is smooth sailing, but the truth is that there is a lot of effort and sacrifice required in realizing it. Every single person behind it is a devoted, humble, and passionate contributor to the event.
Attendees range from teenagers, students, younger and older adults all interested in learning, sharing and socializing with each others. In other words, the conference is not solely focused on lectures and workshops, but also on bringing similar-minded people together and providing them a safe place to connect.
Fun is also a huge part of the event, as there is a karaoke evening (don't miss that one) and a rakia tasting and sharing night adequately named "Rakija leaks".
At Fedora booth we try to engage people and encourage questions, to better understand what people like and dislike, to provide them guidance and invite them to join the community. We always keep the positive attitude towards all Free and Open Source Software and never fuel or support distro-wars. We love Fedora, but that doesn't exclude love towards other distros as well (it just may not be as strong ).
This approach had the effect that many non-Fedora users liked to talk to us and stuck around. That relationship grew into constructive discussions about the strengths of Fedora and their OSes of choice. Many of them converted over time, or at least found a perfect sweet-spot for Fedora in their everyday life.
Due to the import customs in Serbia, swag has been a hit and miss sometimes, but we try to keep the booth entertaining even in the absence of much-adored stickers.
This year we've had a revamp of our booth with the new Fedora logo for the roll-up banners and the table cloth. And it came at a perfect time, as the Fedora booth was visible in an article in the country's most popular printed IT magazine, Svet Kompjutera.
This was all due to the amazing support from jwflory who displayed great amount of innovative and pro-active energy!
Booth appearance has evolved over the years and become more and more inviting to everyone. An organizing volunteer even approached me to say that people have been asking if Fedora will have a booth again this year, as they found it very interesting - not only from the Project's aspect, but also because, since the first year, we have tried to bring something that would draw people to come and talk to us.
To give some examples:
- Awesome demo of touchless gaming concept by our dear thunderbirdtr (2015)
- Fedora on handheld touchscreen devices (2018)
- AAA Gaming on Fedora out-of-the box (with RPM Fusion only) (2022)
- Introduction of Fedora cookies (2023)
- Fedora baby (2024)
There are plans and ideas for future booths too, such as a SyncStar setup, a SELinux challenge box, a DIY pin machine, other quizzes, …
Here is the timeline in photos from 2015 - 2024 (there are missing photos due to, either COVID, or just an unfortunate oversight on my end):
Huge thanks for the support from jwflory, thunderbirdtr, nmilosev, nsukur, bitlord, and especially to my dear wife littlecat that makes the booth incredibly appealing.
If you've never been to Serbia, Novi Sad, or BalCCon, you should definitely consider visiting and we'll do our best to be good hosts and dedicate some of our times just for you!
The post Balkan Computer Congress, Novi Sad, Serbia appeared first on Fedora Community Blog.
04 Feb 2025 10:00am GMT
Planet Debian
Paul Wise: FLOSS Activities January 2025
Focus
This month I didn't have any particular focus. I just worked on issues in my info bubble.
Changes
- zygolophodon: support Iceshrimp URLs
- reportbug: arch menu fixes
- Debian website: add arch data reportbug sync note
- Debian wiki pages: DeveloperNews, Exploits, PortsDocs/New, Teams/Debbugs/ArchitectureTags
Sponsors
All work was done on a volunteer basis.
04 Feb 2025 2:43am GMT
Planet Arch Linux
Infrastructure as Advent of Code
In the cold of December we have but one thing to keep us warm: our laptops, trying to solve Advent of Code puzzles with inefficient algorithms. This year, 2024, is the tenth edition, and the puzzles are filled with more Easter eggs than ever before. Unfortunately, I'm not interested in Easter eggs, or solving the puzzles. I am a DevOps engineer, and I'm going to apply Infrastructure as Code principles to Advent of Code.
04 Feb 2025 12:00am GMT
03 Feb 2025
OMG! Ubuntu
How to Fix Spotify ‘No PubKey’ Error on Ubuntu
Do you use the official Spotify DEB on Ubuntu (or an Ubuntu-based Linux distribution like Linux Mint)? If so, you'll be used to receiving updates to the Spotify Linux client direct from the official Spotify APT repo, right alongside all your other DEB-based software. Thing is: if you haven't checked for updates from the command line recently you might not be aware that the security key used to 'sign' packages from the Spotify APT repo stopped working at the end of last year. Annoying, but not catastrophic as it - thankfully - doesn't stop the Spotify Linux app from working, just pollutes terminal output […]
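The usual remedy for an APT NO_PUBKEY warning is to fetch the repository's current signing key and install it where APT can find it; a hedged sketch - the key URL below is a placeholder, so substitute the one Spotify currently documents on its Linux install page:

# Placeholder URL: replace with the pubkey link from Spotify's Linux instructions
curl -sS https://download.spotify.com/debian/pubkey_XXXXXXXXXXXXXXXX.gpg | sudo gpg --dearmor --yes -o /etc/apt/trusted.gpg.d/spotify.gpg
sudo apt update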
03 Feb 2025 2:48am GMT
Planet Arch Linux
Glibc 2.41 corrupting Discord installation
We plan to move glibc and its friends to stable later today, Feb 3. After installing the update, the Discord client will show a red warning that the installation is corrupt. This issue has been fixed in the Discord canary build. If you rely on audio connectivity, please use the canary build, log in via browser or use the flatpak version until the fix hits the stable Discord release. There have been no reports that (written) chat connectivity is affected.
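If you would rather switch to the Flatpak build in the meantime, a minimal sketch (assuming Flathub is already configured and that com.discordapp.Discord is the relevant application ID):

# Install and run Discord from Flathub
flatpak install -y flathub com.discordapp.Discord
flatpak run com.discordapp.Discord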
03 Feb 2025 12:00am GMT
02 Feb 2025
OMG! Ubuntu
Linux Icon Pack Papirus Gets First Update in 8 Months
Fans of the Papirus icon theme for Linux desktops will be happy to hear a new version is now available to download. Papirus's first update in 2025 improves support for KDE Plasma 6 by adding Konversation, KTorrent and RedShift tray icons, KDE and Plasma logo glyphs for use in 'start menu' analogues, as well as an assortment of symbolic icons. Retro gaming fans will appreciate an expansion in mime type support in this update. Papirus now includes file icons for ROMs used for emulating ZX Spectrum, SEGA Dreamcast, SEGA Saturn, MSX, and Neo Geo Pocket consoles; and Papirus now uses different […]
02 Feb 2025 7:54pm GMT
01 Feb 2025
Planet Gentoo
Tinderbox shutdown
Due to the lack of hardware, the Tinderbox (and CI) service is no longer operational.
I would like to take this opportunity to thank all the people who have always seen the Tinderbox as a valuable resource and who have promptly addressed bugs, significantly improving the quality of the packages we have in Portage as well as the user experience.
01 Feb 2025 7:08am GMT
16 Jan 2025
Planet Arch Linux
Critical rsync security release 3.4.0
We'd like to raise awareness about the rsync security release version 3.4.0-1, as described in our advisory ASA-202501-1. An attacker only requires anonymous read access to a vulnerable rsync server, such as a public mirror, to execute arbitrary code on the machine the server is running on. Additionally, attackers can take control of an affected server and read/write arbitrary files of any connected client. Sensitive data can be extracted, such as OpenPGP and SSH keys, and malicious code can be executed by overwriting files such as ~/.bashrc or ~/.popt. We highly advise anyone who runs an rsync daemon or client prior to version 3.4.0-1 to upgrade and reboot their systems immediately. As Arch Linux mirrors are mostly synchronized using rsync, we highly advise any mirror administrator to act immediately, even though the hosted package files themselves are cryptographically signed. All infrastructure servers and mirrors maintained by Arch Linux have already been updated.
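On an Arch system the straightforward response is a full system upgrade followed by a reboot; a minimal sketch (partial upgrades are discouraged on Arch):

# Pull in rsync 3.4.0-1 along with everything else, confirm the version, then reboot
sudo pacman -Syu
rsync --version
sudo reboot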
16 Jan 2025 12:00am GMT
11 Jan 2025
Kernel Planet
Pete Zaitcev: Looking for a BSSID
I'm looking for a name for a new WiFi area.
The current one is called "Tokyo-Jupiter". It turns out hard to top, it meets all the requirements. It's a geographic area. It's weeb, but from old enough times: not Naruto Shippuuden, Attack On Titan, or Kimetsu no Yaiba. Classy and unique enough.
"Konoha" is too new, too washed-up, and too short.
"Kodena" and "Yokosuka" add a patriotic American tint nicely, but also too short.
"Minas-Tirith" is a place and outstanding in its reference, but not weeb.
"Big-Sight" is an opposite of the above: too much. I'm a weeb, not otaku.
Any ideas are appreciated.
UPDATE 2025-01-11: The provisional candidate is "Nishi-Teppelin". Don't google it, it's not canon. I remain open to better ideas.
11 Jan 2025 1:42am GMT
05 Jan 2025
Planet Gentoo
2024 in retrospect & happy new year 2025!
Happy New Year 2025! Once again, a lot has happened over the past months, in Gentoo and otherwise. Our fireworks were a bit early this year with the stabilization of GCC 14 in November, after a huge amount of preparations and bug fixing via the Modern C initiative. A lot of other programming language ecosystems also saw significant improvements. As always here we're going to revisit all the exciting news from our favourite Linux distribution.
Gentoo in numbers
The number of commits to the main ::gentoo repository has remained at an overall high level in 2024, with a 2.4% increase from 121000 to 123942. The number of commits by external contributors has grown strongly from 10708 to 12812, now across 421 unique external authors.
The importance of GURU, our user-curated repository with a trusted user model, as entry point for potential developers, is clearly increasing as well. We have had 7517 commits in 2024, a strong growth from 5045 in 2023. The number of contributors to GURU has increased a lot as well, from 158 in 2023 to 241 in 2024. Please join us there and help packaging the latest and greatest software. That's the ideal preparation for becoming a Gentoo developer!
Activity has picked up speed on the Gentoo bugtracker bugs.gentoo.org, where we've had 26123 bug reports created in 2024, compared to 24795 in 2023. The number of resolved bugs shows the same trend, with 25946 in 2024 compared to 22779 in 2023!
New developers
In 2024 we have gained two new Gentoo developers. They are in chronological order:
-
Matt Jolly (kangie): Matt joined us already in February from Brisbane, Australia - now finally pushing his commits himself, after already taking care of, e.g., Chromium for over half a year. In work life a High Performance Computing systems administrator, in his free time he enjoys playing with his animals, restoring retro computing equipment and gaming consoles (or using them), brewing beer, the beach, or the local climbing gym.
-
Eli Schwartz (eschwartz): In July, we were able to welcome Eli Schwartz from the USA as new Gentoo developer. A bookworm and big fan of Python, and also an upstream maintainer for the Meson Build System, Eli caught the Linux bug already in highschool. Quoting him, "asking around for recommendations on distro I was recommended either Arch or Gentoo. Originally I made a mistake ;)" … We're glad this got fixed now!
Featured changes and news
Let's now look at the major improvements and news of 2024 in Gentoo.
Distribution-wide Initiatives
-
SPI associated project: As of March 2024, Gentoo Linux has become an Associated Project of Software in the Public Interest (SPI). SPI is a non-profit corporation founded to act as a fiscal sponsor for organizations that develop open source software and hardware. It provides services such as accepting donations, holding funds and assets, … and qualifies for 501(c)(3) (U.S. non-profit organization) status. This means that all donations made to SPI and its supported projects are tax deductible for donors in the United States. The intent behind becoming an SPI associated project is to gradually wind down operations of the Gentoo Foundation and transfer its assets to SPI.
-
GCC 14 stabilization: After a huge amount of work to identify and fix bugs and working with upstreams to modernize the overall source code base, see also the Modern C porting initiative, GCC 14 was finally stabilized in November 2024. Same as Clang 16, GCC 14 by default drops support for several long-deprecated and obsolete language constructs, turning decades-long warnings on bad code into fatal errors.
-
Link time optimization (LTO): Lots of progress has been made supporting LTO all across the Gentoo repository.
-
64bit time_t for 32bit architectures: Various preparations have begun to keep our 32-bit arches going beyond the year 2038. While the GNU C library is ready for that, the switch to a wider time_t data type is an ABI break between userland programs and libraries and needs to be approached carefully, in particular for us as a source-based distribution. Experimental profiles as well as a migration tool are available by now, and will be announced more widely at some point in 2025.
-
New 23.0 profiles: A new profile version 23.0, i.e. a collection of presets and configurations, has become the default setting; the old profiles are deprecated and will be removed in June 2025. The 23.0 profiles fix a lot of internal inconsistencies; for the user, they bring more toolchain hardening (specifically, CET on amd64 and non-lazy runtime binding) and optimization (e.g., packed relative relocations where supported) by default.
-
Expanded binary package coverage: The binary package coverage for amd64 has been expanded a lot, with, e.g., different use-flag combinations, Python support up to version 3.13, and additional large leaf packages beyond stable as for example current GCC snapshots, all for baseline x86-64 and for x86-64-v3. At the moment, the mirrors hold over 60GByte of package data for amd64 alone.
-
Two additional merchandise stores: We have licensed two additional official merchandise stores, both based in Europe: FreeWear (clothing, mugs, stickers; located in Spain) and BadgeShop (Etsy, Ebay; badges, stickers; located in Romania).
-
Handbook improvements and editor role: The Gentoo handbook has once again been significantly improved (though there is always still more work to be done). We now have special Gentoo handbook editor roles assigned, which makes the handbook editing effectively much more community friendly. This way, a lot of longstanding issues have been fixed, making installing Gentoo easier for everyone.
-
Event presence: At the Free and Open Source Software Conference (FrOSCon) 2024, visitors enjoyed a full weekend of hands-on Gentoo workshops. The workshops covered a wide range of topics, from first installation to ebuild maintenance. We also offered mugs, stickers, t-shirts, and of course the famous self-compiled buttons.
-
Online workshops: Our German support, Gentoo e.V., is grateful to the inspiring speakers of the 6 online workshops in 2024 on various Gentoo topics in German and English. We are looking forward to more exciting events in 2025.
-
Ban on NLP AI tools: Due to serious concerns with current AI and LLM systems, the Gentoo Council has decided to embrace the value of human contributions and adopt the following motion: "It is expressly forbidden to contribute to Gentoo any content that has been created with the assistance of Natural Language Processing artificial intelligence tools. This motion can be revisited, should a case been made over such a tool that does not pose copyright, ethical and quality concerns."
Architectures
-
MIPS and Alpha fully supported again: After the big drive to improve Alpha support last year, now we've taken care of MIPS keywording all across the Gentoo repository. Thanks to renewed volunteer interest, both arches have returned to the forefront of Gentoo Linux development, with a consistent dependency tree checked and enforced by our continuous integration system. Up-to-date stage builds and the accompanying binary packages are available for both, in the case of MIPS for all three ABI variants o32, n32, and n64 and for both big and little endian, and in the case of Alpha also with a bootable installation CD.
-
32bit RISC-V now available: Installation stages for 32bit RISC-V systems (rv32) are now available for download, both using hard-float and soft-float ABI, and both using glibc and musl.
-
End of IA-64 (Itanium) support: Following the removal of IA-64 (Itanium) support in the Linux kernel and in glibc, we have dropped all ia64 profiles and keywords.
Packages
-
Slotted Rust: The Rust compiler is now slotted, allowing multiple versions to be installed in parallel. This finally allows us to support packages that require an upper-bounded Rust version and don't compile successfully with a newer Rust (yes, that exists!), or to ensure that packages use Rust and LLVM versions that fit together (e.g., firefox or chromium).
-
Reworked LLVM handling: In conjunction with this, the LLVM ebuilds and eclasses have been reworked so packages can specify which LLVM versions they support and dependencies are generated accordingly. The eclasses now provide much cleaner LLVM installation information to the build systems of packages, and therefore, for example, also fix support for cross-compilation.
-
Python: In the meantime, the default Python version in Gentoo has reached Python 3.12. Additionally, Python 3.13 is also available as stable - once again we're fully up to date with upstream.
-
Zig rework and slotting: An updated eclass and ebuild framework for the Zig programming language has been committed that hooks into the ZBS or Zig Build System, allows slotting of Zig versions, allows Zig libraries to be depended on, and even provides some experimental cross-compilation support.
-
Ada support: We finally have Ada support for just about every architecture. Yay!
-
Slotted Guile: Last but not least, Guile also received the slotting treatment, with three new eclasses, so that Guile 1, 2, and 3 and their reverse dependencies can now coexist in a Gentoo installation.
-
TeX Live 2023 and 2024: Catching up with our backlog, the packaging of TeX Live has been refreshed; TeX Live 2023 is now marked stable and TeX Live 2024 is marked testing.
-
DTrace 2.0: The famous tracing tool DTrace has come to Gentoo! All required kernel options are already enabled in the newest stable Gentoo distribution kernel; if you are compiling manually, the DTrace ebuild will inform you about required configuration changes. Internally, DTrace 2.0 for Linux builds on the BPF engine of the Linux kernel, so the build installs a gcc that outputs BPF code (which, by the way, is also very useful for systemd).
-
KDE Plasma 6 upgrade: Stable Gentoo Linux has upgraded to the new major version of the KDE community desktop environment, KDE Plasma 6. As of the end of 2024, Gentoo stable ships KDE Gear 24.08.3, KDE Frameworks 6.7.0, and KDE Plasma 6.2.4. As always, Gentoo testing follows the newest upstream releases (and using the KDE overlay you can even install from git sources). In the course of KDE package maintenance we have over the past months and years contributed over 240 upstream backports to KDE's Qt5PatchCollection.
-
Microgram Ramdisk: We have added µgRD (or ugrd) as a lightweight initramfs generator alternative to dracut. As a side effect of this our installkernel mechanism has gained support for arbitrary initramfs generators.
Physical and Software Infrastructure
-
Mailing list archives: archives.gentoo.org, our mailing list archive, is back, now with a backend based on public-inbox. Many thanks to upstream there for being very helpful; we were even able to keep all historical links to archived list e-mails working.
-
Ampere Altra Max development server: Arm Ltd. and specifically its Works on Arm team has sent us a fast Ampere Altra Max server to support Gentoo development. With 96 Armv8.2+ 64bit cores, 256 GByte of RAM, and 4 TByte NVMe storage, it is now hosted together with some of our other hardware at OSU Open Source Lab.
Finances of the Gentoo Foundation
-
Income: The Gentoo Foundation took in approximately $20,800 in fiscal year 2024; the dominant part (over 80%) consists of individual cash donations from the community.
-
Expenses: Our expenses in 2024 fall into the usual three categories: operating expenses (services, fees, …) of $7,900, only minor capital expenses (purchased assets), and depreciation expenses (value loss of existing assets) of $13,300.
-
Balance: We have about $105,000 in the bank as of July 1, 2024 (which is when our fiscal year 2024 ends for accounting purposes). The draft financial report for 2024 is available on the Gentoo Wiki.
-
Transition to SPI: With the move of our accounts to SPI (see above), the web pages for individual cash donations now direct the funds to SPI earmarked for Gentoo, both for one-time and recurring donations. Donors with ongoing recurring donations will be contacted over the upcoming months and asked to re-arrange them.
Thank you!
As every year, we would like to thank all Gentoo developers and all who have submitted contributions for their relentless everyday Gentoo work. If you are interested and would like to help, please join us to make Gentoo even better! As a volunteer project, Gentoo could not exist without its community.
05 Jan 2025 6:00am GMT
02 Jan 2025
Kernel Planet
Matthew Garrett: The GPU, not the TPM, is the root of hardware DRM
As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control
(from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.
I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.
What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.
Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.
The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.
The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.
In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).
Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to prevent the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.
The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.
02 Jan 2025 1:14am GMT
29 Dec 2024
Planet Gentoo
FOSDEM 2025
It's FOSDEM time again! Join us at Université Libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. The upcoming FOSDEM 2025 will be held on February 1st and 2nd 2025. Our developers will be happy to greet all open source enthusiasts at our Gentoo stand (exact location still to be announced), which we will share this year with the Gentoo-based Flatcar Container Linux. Of course there's also the chance to celebrate 25 years of compiling! Visit this year's wiki page to see who's coming and for more practical information.
29 Dec 2024 6:00am GMT
12 Dec 2024
Kernel Planet
Matthew Garrett: When should we require that firmware be free?
The distinction between hardware and software has historically been relatively easy to understand - hardware is the physical object that software runs on. This is made more complicated by the existence of programmable logic like FPGAs, but by and large things tend to fall into fairly neat categories if we're drawing that distinction.
Conversations usually become more complicated when we introduce firmware, but should they? According to Wikipedia, "Firmware is software that provides low-level control of computing device hardware", and basically anything that's generally described as firmware certainly fits into the "software" side of the above hardware/software binary. From a software freedom perspective, this seems like something where the obvious answer to "Should this be free" is "yes", but it's worth thinking about why the answer is yes - the goal of free software isn't freedom for freedom's sake, but because the freedoms embodied in the Free Software Definition (and by proxy the DFSG) are grounded in real world practicalities.
How do these line up for firmware? Firmware can fit into two main classes - it can be something that's responsible for initialisation of the hardware (such as, historically, BIOS, which is involved in initialisation and boot and then largely irrelevant for runtime[1]) or it can be something that makes the hardware work at runtime (wifi card firmware being an obvious example). The role of free software in the latter case feels fairly intuitive, since the interface and functionality the hardware offers to the operating system is frequently largely defined by the firmware running on it. Your wifi chipset is, these days, largely a software defined radio, and what you can do with it is determined by what the firmware it's running allows you to do. Sometimes those restrictions may be required by law, but other times they're simply because the people writing the firmware aren't interested in supporting a feature - they may see no reason to allow raw radio packets to be provided to the OS, for instance. We also shouldn't ignore the fact that sufficiently complicated firmware exposed to untrusted input (as is the case in most wifi scenarios) may contain exploitable vulnerabilities allowing attackers to gain arbitrary code execution on the wifi chipset - and potentially use that as a way to gain control of the host OS (see this writeup for an example). Vendors being in a unique position to update that firmware means users may never receive security updates, leaving them with a choice between discarding hardware that otherwise works perfectly or leaving themselves vulnerable to known security issues.
But even the cases where firmware does nothing other than initialise the hardware cause problems. A lot of hardware has functionality controlled by registers that can be locked during the boot process. Vendor firmware may choose to disable (or, rather, never to enable) functionality that may be beneficial to a user, and then lock out the ability to reconfigure the hardware later. Without any ability to modify that firmware, the user lacks the freedom to choose what functionality their hardware makes available to them. Again, the ability to inspect this firmware and modify it has a distinct benefit to the user.
So, from a practical perspective, I think there's a strong argument that users would benefit from most (if not all) firmware being free software, and I don't think that's an especially controversial argument. So I think this is less of a philosophical discussion, and more of a strategic one - is spending time focused on ensuring firmware is free worthwhile, and if so what's an appropriate way of achieving this?
I think there's two consistent ways to view this. One is to view free firmware as desirable but not necessary. This approach basically argues that code that's running on hardware that isn't the main CPU would benefit from being free, in the same way that code running on a remote network service would benefit from being free, but that this is much less important than ensuring that all the code running in the context of the OS on the primary CPU is free. The maximalist position is not to compromise at all - all software on a system, whether it's running at boot or during runtime, and whether it's running on the primary CPU or any other component on the board, should be free.
Personally, I lean towards the former and think there's a reasonably coherent argument here. I think users would benefit from the ability to modify the code running on hardware that their OS talks to, in the same way that I think users would benefit from the ability to modify the code running on hardware the other side of a network link that their browser talks to. I also think that there's enough that remains to be done in terms of what's running on the host CPU that it's not worth having that fight yet. But I think the latter is absolutely intellectually consistent, and while I don't agree with it from a pragmatic perspective I think things would undeniably be better if we lived in that world.
This feels like a thing you'd expect the Free Software Foundation to have opinions on, and it does! There are two primarily relevant things - the Respects your Freedoms campaign focused on ensuring that certified hardware meets certain requirements (including around firmware), and the Free System Distribution Guidelines, which define a baseline for an OS to be considered free by the FSF (including requirements around firmware).
RYF requires that all software on a piece of hardware be free other than under one specific set of circumstances. If software runs on "(a) a secondary processor and (b) within which software installation is not intended after the user obtains the product", then the software does not need to be free. (b) effectively means that the firmware has to be in ROM, since any runtime interface that allows the firmware to be loaded or updated is intended to allow software installation after the user obtains the product.
The Free System Distribution Guidelines require that all non-free firmware be removed from the OS before it can be considered free. The recommended mechanism to achieve this is via linux-libre, a project that produces tooling to remove anything that looks plausibly like a non-free firmware blob from the Linux source code, along with any incitement to the user to load firmware - including even removing suggestions to update CPU microcode in order to mitigate CPU vulnerabilities.
For hardware that requires non-free firmware to be loaded at runtime in order to work, linux-libre doesn't do anything to work around this - the hardware will simply not work. In this respect, linux-libre reduces the amount of non-free firmware running on a system in the same way that removing the hardware would. This presumably encourages users to purchase RYF compliant hardware.
But does that actually improve things? RYF doesn't require that a piece of hardware have no non-free firmware, it simply requires that any non-free firmware be hidden from the user. CPU microcode is an instructive example here. At the time of writing, every laptop listed here has an Intel CPU. Every Intel CPU has microcode in ROM, typically an early revision that is known to have many bugs. The expectation is that this microcode is updated in the field by either the firmware or the OS at boot time - the updated version is loaded into RAM on the CPU, and vanishes if power is cut. The combination of RYF and linux-libre doesn't reduce the amount of non-free code running inside the CPU, it just means that the user (a) is more likely to hit since-fixed bugs (including security ones!), and (b) has less guidance on how to avoid them.
As long as RYF permits hardware that makes use of non-free firmware I think it hurts more than it helps. In many cases users aren't guided away from non-free firmware - instead it's hidden away from them, leaving them less aware that their freedom is constrained. Linux-libre goes further, refusing to even inform the user that the non-free firmware that their hardware depends on can be upgraded to improve their security.
Out of sight shouldn't mean out of mind. If non-free firmware is a threat to user freedom then allowing it to exist in ROM doesn't do anything to solve that problem. And if it isn't a threat to user freedom, then what's the point of requiring linux-libre for a Linux distribution to be considered free by the FSF? We seem to have ended up in the worst case scenario, where nothing is being done to actually replace any of the non-free firmware running on people's systems and where users may even end up with a reduced awareness that the non-free firmware even exists.
[1] Yes yes SMM
12 Dec 2024 3:57pm GMT
16 Oct 2024
Planet Maemo
Adding buffering hysteresis to the WebKit GStreamer video player
The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.
The player private can have 3 buffering modes:
- On-disk buffering: This is the typical mode on desktop systems, but is frequently disabled on purpose on embedded devices to avoid wearing out their flash storage memories. All the video content is downloaded to disk, and the buffering percentage refers to the total size of the video. A GstDownloader element is present in the pipeline in this case. Buffering level monitoring is done by polling the pipeline every second, using the fillTimerFired() method.
- In-memory buffering: This is the typical mode on embedded systems and on desktop systems in case of streamed (live) content. The video is downloaded progressively and only the part of it ahead of the current playback time is buffered. A GstQueue2 element is present in the pipeline in this case. Buffering level monitoring is done by listening to GST_MESSAGE_BUFFERING bus messages and using the buffering level stored on them. This is the case that motivates the refactoring described in this blog post, what we actually wanted to correct in Broadcom platforms, and what motivated the addition of hysteresis working on all the platforms.
- Local files: Files, MediaStream sources and other special origins of video don't do buffering at all (no GstDownloadBuffering nor GstQueue2 element is present on the pipeline). They work like the on-disk buffering mode in the sense that fillTimerFired() is used, but the reported level is relative, much like in the streaming case. In the initial version of the refactoring I was unaware of this third case, and only realized about it when tests triggered the assert that I added to ensure that the on-disk buffering method was working in GST_BUFFERING_DOWNLOAD mode.
The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.
So, one of the first things I tried to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.
All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.
Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
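The watermark logic itself is tiny; a minimal sketch of such a mechanism (my own illustration, not the actual WebKit code) could look like this:

class BufferingHysteresis {
 public:
  BufferingHysteresis(int lowWatermark, int highWatermark)
      : m_low(lowWatermark), m_high(highWatermark) {}

  // Feed the current buffering percentage; returns the logical "buffer full" state.
  bool update(int percentage) {
    if (!m_full && percentage >= m_high)
      m_full = true;    // only switch to "full" above the high watermark
    else if (m_full && percentage <= m_low)
      m_full = false;   // only switch back to "low" below the low watermark
    return m_full;      // values in between keep the previous state
  }

 private:
  int m_low;
  int m_high;
  bool m_full { false };
};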
For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed in Broadcom devices were gone and I could, at last, close Issue 1309.
16 Oct 2024 6:12am GMT
10 Sep 2024
Planet Maemo
Don’t shoot yourself in the foot with the C++ move constructor
Move semantics can be very useful to transfer ownership of resources, but as with many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.
For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:
#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
  A() { PF; }
  virtual ~A() { PF; }
  A(A&& other) {
    PF;
    std::swap(i, other.i);
  }

  int i = 0;
};

class B : public A {
 public:
  B() { PF; }
  virtual ~B() { PF; }
  B(B&& other) {
    PF;
    std::swap(i, other.i);
    std::swap(j, other.j);
  }

  int j = 0;
};
If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!
Consider this usage of the classes defined before:
int main(int, char* argv[]) {
  printf("Creating B b1\n");
  B b1;
  b1.i = 1;
  b1.j = 2;
  printf("b1.i = %d\n", b1.i);
  printf("b1.j = %d\n", b1.j);

  printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
  A a(std::move(b1));
  printf("a.i = %d\n", a.i);
  // This may be reading memory beyond the object boundaries, which may not be
  // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
  printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);

  printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
  B b2(reinterpret_cast<B&&>(std::move(a)));
  printf("b2.i = %d\n", b2.i);
  printf("b2.j = %d\n", b2.j);
  printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");

  printf("Destroying b2, a, b1\n");
  return 0;
}
If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:
Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690
Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long lived project to have started using raw pointers for everything, then switching to using references as a way to get rid of null pointer issues when possible, and finally switching to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object slicing nuance explained in this post, and when the sheer size of the project and all the other things you have to keep in mind steal your focus, it's easy to forget about this.
So, please remember: never use move semantics that convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may come to regret it after inadvertently creating problems that are difficult to debug.
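One way to make the compiler catch this class of mistakes - a minimal sketch following the common guideline of suppressing public copy/move in polymorphic base classes, not code from this post - is to make the base class copy/move constructors protected, so that a slicing construction at a call site simply doesn't compile:

#include <utility>

class A {
 public:
  A() = default;
  virtual ~A() = default;
  int i = 0;

 protected:
  // Only derived classes may copy/move the A subobject, so a slicing
  // "A a(std::move(b));" at a call site no longer compiles.
  A(A&&) = default;
  A(const A&) = default;
};

class B : public A {
 public:
  B() = default;
  B(B&& other) : A(std::move(other)), j(other.j) {}
  int j = 0;
};

int main() {
  B b1;
  // A a(std::move(b1));   // error: A's move constructor is protected
  B b2(std::move(b1));     // fine: the whole object moves, j included
  return 0;
}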
Happy coding!
10 Sep 2024 7:58am GMT
17 Jun 2024
Planet Maemo
Incorporating 3D Gaussian Splats into the graphics pipeline
3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.
Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].
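For reference - my own summary of the standard EWA formulation used here [1][3], not text from the original post - the 2D screen-space covariance \Sigma' is obtained from the 3D covariance \Sigma via the viewing transformation W and the Jacobian J of the affine approximation of the projective transformation:

\Sigma' = J W \Sigma W^{T} J^{T}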
In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.
Storage
The original implementation uses .ply files as their checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.
For example, it stores the covariance as scaling and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach is to store the covariance matrix directly: since it is symmetric, only the diagonal and upper triangular values are needed, thereby eliminating reconstruction and reducing storage requirements.
Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.
Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud.
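As an aside, the degree-0 conversion is essentially a one-liner; a small sketch assuming the usual spherical harmonics basis constant (my assumption of the common convention, not code from the post):

// Degree-0 spherical harmonics coefficient -> diffuse color channel.
// SH_C0 = Y_0^0 = 1 / (2 * sqrt(pi)); the 0.5 offset recenters the result.
constexpr float SH_C0 = 0.28209479177387814f;
inline float sh0ToDiffuse(float sh0) { return 0.5f + SH_C0 * sh0; }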
By directly storing the covariance as previously mentioned we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.
With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
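A packed record matching that 22-byte figure could look like the following sketch (my own reconstruction, not the author's code): 3 x float16 position, 6 x float16 covariance (diagonal plus upper triangle), and 4 x uint8 RGBA.

#include <cstdint>

#pragma pack(push, 1)
struct PackedSplat {
  uint16_t position[3];    // float16 bit patterns, 6 bytes
  uint16_t covariance[6];  // float16 bit patterns, 12 bytes
  uint8_t  rgba[4];        // diffuse color + opacity, 4 bytes
};
#pragma pack(pop)

static_assert(sizeof(PackedSplat) == 22, "22 bytes per splat");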
Blending
The image formation model presented in the original paper [1] is similar to the NeRF rendering, as it is compared to it. This involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.
Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:
\begin{aligned}
C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} + C_{dst}\\
\alpha_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} + \alpha_{dst}
\end{aligned}
However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.
A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:
C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}
This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.
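For completeness, here is a minimal sketch of the two blend configurations in plain OpenGL (my own illustration; the post does not prescribe a specific graphics API):

#include <GL/gl.h>

void setupFrontToBackBlending() {
  // Matches the first equation: requires destination alpha to start at zero
  // and premultiplied color in the shader (color.rgb * color.a).
  glEnable(GL_BLEND);
  glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
}

void setupBackToFrontBlending() {
  // Standard "over" compositing; splats must be sorted back to front.
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}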
References
- Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
- Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
- Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.
17 Jun 2024 1:28pm GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
-
a rather beefy Supermicro 2U server with EPYC 7113P CPU and 4x PCIe, two of which are populated with Digium TE820 cards resulting in a total of 16 E1 ports
-
an icE1usb with RS422 interface board connected via 100m RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
-
a Livingston Portmaster3 RAS server
-
a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
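To put that figure into perspective (my own back-of-the-envelope arithmetic, not a measurement from the post): at the E1 bit rate of 2.048 Mbit/s, a drift of 12.5 ppb corresponds to roughly 2.048e6 x 12.5e-9 ≈ 0.026 bit/s, i.e. the slave's phase drifts by a full bit about every 39 seconds relative to the master, which accumulates into frame slips within a few hours.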
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT