29 Jan 2025
Fedora People
Fedora Infrastructure Status: Updates and Reboots
29 Jan 2025 9:00pm GMT
25 Jan 2025
LXer Linux News
TEAMGROUP MP44L 2TB M.2 NVMe SSD Review
The SSD uses the Phison PS5021-E21T controller. It's a DRAM-less quad-channel controller that supports a PCIe 4.0 x4 interface.
25 Jan 2025 4:50pm GMT
Linuxiac
Dovecot 2.4 Secure IMAP Server Released
Dovecot 2.4 secure IMAP server has been released with a new signing key, experimental ARM64 Docker support, and major config changes.
25 Jan 2025 4:43pm GMT
LXer Linux News
x86 32-bit Operating Systems Aren't Dead Yet: New Linux Patches Improve 32-bit PAE
The Linux x86 32-bit kernel's PAE (Physical Address Extension) support allows addressing more than 4GB of memory for those still running 32-bit processors or otherwise opting to run a 32-bit OS.
25 Jan 2025 3:18pm GMT
Linux Today
AI Document Editing: Connect GPT4All to ONLYOFFICE on Ubuntu
In this article, you will learn how to enable AI-powered document editing on Ubuntu through the example of ONLYOFFICE Desktop Editors, an open-source office package for Linux, and GPT4All, an open-source platform designed to run local AI models.
25 Jan 2025 2:38pm GMT
LXer Linux News
GNOME 48 Alpha Is Now Available for Public Testing, Here’s What’s New
Today, the GNOME Project announced the alpha version of the upcoming GNOME 48 desktop environment series for public testing, giving us a first taste of the new features and improvements.
25 Jan 2025 1:47pm GMT
Linux Today
How to Install NeoVim on Ubuntu and Other Linux Distros
Discover a step-by-step guide to install the latest version of NeoVim on your Ubuntu and other Linux distributions with multiple methods.
25 Jan 2025 1:38pm GMT
Planet Debian
Bits from Debian: Infomaniak Platinum Sponsor of DebConf25
We are pleased to announce that Infomaniak has committed to sponsor DebConf25 as a Platinum Sponsor.
Infomaniak is Switzerland's leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers.
With this commitment as Platinum Sponsor, Infomaniak is contributing to the annual Debian Developers' Conference, directly supporting the progress of Debian and Free Software. Infomaniak helps strengthen the community that collaborates on Debian projects from all around the world throughout the year.
Thank you very much, Infomaniak, for your support of DebConf25!
Become a sponsor too!
DebConf25 will take place from July 14th to 20th, 2025 in Brest, France, and will be preceded by DebCamp, from July 7th to 13th, 2025.
DebConf25 is accepting sponsors! Interested companies and organizations should contact the DebConf team through sponsors@debconf.org, or visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.
25 Jan 2025 10:22am GMT
Planet KDE | English
Mnemonics, Mnemonics Everywhere
The M is silent. In computing, the term refers to the underlined letters in menus that can be triggered using an Alt+Letter key combination - one that you can remember and apply later to navigate around more quickly.
Qt and other toolkits typically use an ampersand to denote a mnemonic when assigning a menu entry's text. For instance, "&Shutdown" will be displayed as "Shutdown" and trigger on Alt+S, whereas "Slee&p" will be "Sleep" and trigger on Alt+P. Of course this isn't limited to menus; pretty much any control - buttons and what not - can have mnemonics. Since they are part of the label, a translated string can and likely will have a different one.
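For illustration, here is a minimal Qt Widgets sketch (not taken from the post) showing the ampersand syntax in action:

#include <QApplication>
#include <QMainWindow>
#include <QMenu>
#include <QMenuBar>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QMainWindow window;

    QMenu *menu = window.menuBar()->addMenu(QObject::tr("&Power"));
    // "&Shutdown" is displayed as "Shutdown" (S underlined) and triggers on Alt+S;
    // "Slee&p" is displayed as "Sleep" (p underlined) and triggers on Alt+P.
    menu->addAction(QObject::tr("&Shutdown"));
    menu->addAction(QObject::tr("Slee&p"));

    window.show();
    return app.exec();
}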
KDE applications, both written in Qt Widgets and Qt Quick, automatically assign mnemonics for most controls that don't have one explicitly set. This is done through KAcceleratorManager and Kirigami's MnemonicData, respectively, using a set of rules based on the control's type. For example, a toolbar button is less important than a regular button or check box but both are more important than a section label. It also tries to use the first letter of a word, if that letter is not already taken. If a control is hidden the shortcut is removed again. The end result in a German dialog is "&Abbrechen" (Cancel), "&OK", and "A&nwenden" (Apply, since the A was already taken) for its footer.
While our Qt Quick Controls 2 Desktop Style automatically assigned mnemonics for all of its controls, Plasma Components did not for CheckBoxes, Switches, and some others. That is now fixed: it's now possible to use e.g. Alt+R to Raise maximum volume in the Volume applet, or to switch to the Applications tab using Alt+A. Likewise for the circular action buttons used on the lock and logout screens: you can now press Alt+P to Sleep from either of them! The "S" is taken for Shutdown for consistency, and remains unused on the lock screen.
I noticed that I couldn't trigger the toolbar buttons in System Settings even though they clearly showed an underlined letter. Turns out the shortcut was registered twice for some reason! If this happens, neither action is executed and instead the "activated ambiguously" signal is emitted. Kirigami's ActionToolBar is effectively two views: the regular strip of buttons and an overflow menu. The buttons are shown dynamically based on how much room there is available and the action's priority. There was a bug in Kirigami's mnemonic handler where hiding a control wouldn't release its shortcut, effectively registering every toolbar shortcut twice.
Speaking of Kirigami, there's a FormLayout similar to QFormLayout that we use for most of our settings pages. It has a label on the left, and control on the right. By default, the label generates a mnemonic to focus its buddy. However, we don't just want to focus the control, we want to trigger it as if we had clicked it. Qt 6.8 introduced an animateClick method on buttons that briefly flashes the button as a reminder of what's about to happen and then triggers it. For controls without this features, focus is set as before, albeit with ShortcutFocusReason to tell the control that it was focused as a result of a shortcut press. A ComboBox for instance reacts differently depending on how it got activated. I then also made sure no mnemonic is assigned to the label next to a control when the control itself already had one.
With those improvements done, I tested various Qt Quick applications and settings modules for their mnemonics. The "Display & Monitor" settings barely had any working ones. The thing is: FormLayout's labels by default are attached to the Item to which the label was added. In the case of KScreen, we often used a RowLayout to place a control and a "contextual help button" (the little (i) button with more information) next to it. Since RowLayout isn't an interactive item, no mnemonic was assigned for the given row. Luckily, you can explicitly set buddyFor and tell it what the relevant control is. Doing that, I made most of KScreen's settings reachable by Alt key combinations. While at it, I explicitly set the letter H for the HDR check box, as the sketch below illustrates.
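Here is a hedged QML sketch of that pattern (the identifiers are illustrative, not KScreen's actual code): the RowLayout itself is not interactive, so the label's mnemonic is pointed at the check box via buddyFor.

import QtQuick
import QtQuick.Controls as QQC2
import QtQuick.Layouts
import org.kde.kirigami as Kirigami

Kirigami.FormLayout {
    RowLayout {
        // Tell the FormLayout which control the row label's mnemonic targets.
        Kirigami.FormData.label: i18n("HDR:")
        Kirigami.FormData.buddyFor: hdrCheckBox

        QQC2.CheckBox {
            id: hdrCheckBox
            text: i18n("Enable &HDR") // explicit mnemonic on the H
        }
        Kirigami.ContextualHelpButton {
            toolTipText: i18n("Further information about HDR")
        }
    }
}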
Now that you've seen me improve our mnemonic machinery, what can you do to make an application more accessible this way? Press and hold Alt, see what shortcuts get assigned, try triggering the underlined letter using Alt+letter:
- If there's a FormLayout and the control isn't reachable, check that there's a proper buddyFor set.
- For obvious abbreviations and words, consider setting a mnemonic explicitly so the letter used is consistent and predictable, like the "Enable &HDR" in Display settings.
- For custom controls not based on Qt Quick Controls, you can use Kirigami.MnemonicData to register your control with our Mnemonic infrastructure and assign the shortcut it generates to a Shortcut item (see the sketch after this list).
- Consider disabling mnemonics using Kirigami.MnemonicData.enabled where it doesn't make much sense to have them, e.g. controls in lists. Each one would just get a subsequent letter of its word assigned, reducing the pool of available letters for the important ones.
- If a control doesn't show an underlined letter, try Alt+first letter in the label. Maybe it has one that doesn't show up for a reason?
- Finally: Report or fix bugs you find!
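As mentioned in the list above, here is a hedged sketch of what registering a custom (non-Controls) item with Kirigami.MnemonicData could look like; the control and its properties are illustrative:

import QtQuick
import org.kde.kirigami as Kirigami

MouseArea {
    id: root

    property string text: "Raise maximum volume"
    signal triggered()

    onClicked: root.triggered()

    Kirigami.MnemonicData.enabled: true
    Kirigami.MnemonicData.controlType: Kirigami.MnemonicData.ActionElement
    Kirigami.MnemonicData.label: root.text

    // The generated shortcut (e.g. Alt+R) still has to be registered manually.
    Shortcut {
        sequence: root.Kirigami.MnemonicData.sequence
        onActivated: root.triggered()
    }

    Text {
        // richTextLabel underlines the assigned letter.
        text: root.Kirigami.MnemonicData.richTextLabel
        textFormat: Text.StyledText
    }
}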
25 Jan 2025 8:03am GMT
This Week in Plasma: Fancy Time Zone Picker
Welcome to a new issue of "This Week in Plasma"! Every week we cover as much as possible of what's happening in the world of KDE Plasma and its associated apps like Discover, System Monitor, and more.
This week the bug-fixing for Plasma 6.3 continued, as well as a lot of new features and UI changes that have been in the pipeline for some time; these will mostly land in Plasma 6.4. There's a lot of cool stuff, so let's get into it!
Notable New Features
Late-breaking Plasma 6.3 feature: Discover can now open flatpak:/ URLs. (Aleix Pol Gonzalez, 6.3.0. Link)
The time zone choosers present on System Settings' Date & Time page as well as the Digital Clock widget's settings page have been given a major upgrade: a visual UI using a world map! (Niccolò Venerandi, 6.4.0. Link 1 and link 2)
Notable UI Improvements
When activating the "Restore manually saved session" option on System Settings' Desktop Session page, the corresponding "Save Session" action now appears in Kickoff and other launcher menus immediately, rather than requiring a reboot first. (Marco Martin, 6.3.0. Link)
On System Settings' Users page, the dialogs used for choosing an avatar image are now sized more appropriately no matter the window size, and the custom avatar cropper feature now defaults to no cropping for square images. (Nate Graham, 6.3.0. Link 1 and link 2)
On the System Tray widget's settings window, the table on the Entries page now uses the alternating row color style to make it easier to match up the columns, especially when the window has been made enormous for some reason. (Marco Martin, 6.3.0. Link)
Improved the accessibility of several non-default Alt+Tab switcher styles. (Christoph Wolk, 6.3.0. Link)
Made the top corners' radii and side margins in Kickoff perfect. (Nate Graham, 6.3.0. Link)
Made the Breeze Dark color scheme a bit darker by default. (Thomas Duckworth, 6.4.0. Link)
Adjusted the visualization for different panel visibility modes to incorporate some animations, which makes them clearer. (Niccolò Venerandi, 6.4.0. Link)
You can now scroll on the Media Player widget's seek slider to move it without having to drag with the mouse. (Kai Uwe Broulik, 6.4.0. Link)
Scrolling on the Task Manager widget to switch between tasks is now disabled by default (but can be re-enabled if wanted, of course), as a result of feedback that it was easy to trigger by accident and could lead to disorientation. (Nate Graham, 6.4.0. Link)
Re-arranged the items on the context menu for Plasma's desktop a bit, to improve usability and speed for common file and folder management tasks. (Nate Graham, 6.4.0. Link)
The Audio Volume widget now has a hamburger menu button when used in standalone form, rather than as a part of the System Tray, where it already has one. (Niccolò Venerandi, 6.4.0. Link)
Tooltips for Spectacle's annotation buttons now include details about how to change their behavior using keyboard modifier keys. (Noah Davis, 6.4.0. Link)
Notable Bug Fixes
Fixed a case where the service providing the screen chooser OSD could crash when certain screens were plugged in. (Vlad Zahorodnii, 6.3.0. Link)
Fixed a case where KWin could crash on launch in the X11 session. (Fushan Wen, 6.3.0. Link)
Fixed a case where Discover would crash when trying to display apps with no reviews. (Fushan Wen, 6.3.0. Link)
Fixed a case where Plasma could crash after creating a new panel with certain screen arrangements. (Fushan Wen, 6.3.0. Link)
Fixed a random KWin crash. (Vlad Zahorodnii, 6.3.0. Link)
Fixed a bug affecting System Settings' Desktop Session page that would cause it to crash upon being opened a second time, and also not show the settings in their correct states. (David Edmundson, 6.3.0. Link 1, link 2)
Fixed several cases where screen positions and other settings might get reset after waking from sleep. (Xaver Hugl, 6.3.0. Link 1, and link 2)
You can once again drag files, folders, and applications to Kickoff's Favorites view to make them favorites, after this broke at some point in the past. In addition, the change also fixes an issue where Kickoff's popup would inappropriately open rather than move out of the way when you dragged another widget over it. (Noah Davis, 6.3.0. Link 1 and link 2)
Apps that launch and immediately display a dialog window along with their main window no longer have those windows go missing in the Alt+Tab switcher. (David Edmundson, 6.3.0. Link)
Improved OpenVPN cipher parsing so it won't show cipher types that don't actually exist. (Nicolas Fella, 6.3.0. Link)
Activating a Plasma panel using a keyboard shortcut in the X11 session no longer causes it to bizarrely become a window! (Marco Martin, 6.3.0. Link)
System Settings' Touchpad page is no longer missing some options in the X11 session, depending on how you open it. (Jakob Petsovits, 6.3.0. Link)
Fixed a bug that could cause panels using the Auto-Hide and Dodge Windows settings to briefly get stuck open when activated while a full-screen window was active. (Niccolò Venerandi, 6.3.0. Link)
Right-clicking an empty area of the applications or process table in System Monitor no longer shows a context menu with no appropriate items in it. (Nate Graham, 6.3.0. Link)
Fixed a bug causing a second "System Settings" item to appear on System Settings' own Shortcuts page. (Raphael Kubo da Costa, 6.4.0. Link)
You can once again copy files and folders on the desktop using the Ctrl+C shortcut, after this broke due to an unusual interaction between the desktop and a placeholder message added a few versions ago. (Marco Martin, Frameworks 6.11. Link)
Fixed a case where a Qt bug could cause apps to crash in response to certain actions from the graphics drivers. (David Redondo, Qt 6.8.3. Link)
Other bug information of note:
- 1 Very high priority Plasma bug (same as last week). Current list of bugs
- 23 15-minute Plasma bugs (same as last week). Current list of bugs
- 124 KDE bugs of all kinds fixed over the past week. Full list of bugs
Notable in Performance & Technical
Fixed a bunch of memory leaks in KScreen. (Vlad Zahorodnii, 6.3.0. Link)
How You Can Help
KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.
You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE - you are not a number or a cog in a machine!
You don't have to be a programmer, either. Many other opportunities exist:
- Triage and confirm bug reports, maybe even identify their root cause
- Contribute designs for wallpapers, icons, and app interfaces
- Design and maintain websites
- Translate user interface text items into your own language
- Promote KDE in your local community
- …And a ton more things!
You can also help us by making a donation! Any monetary contribution - however small - will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.
To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.
25 Jan 2025 4:00am GMT
24 Jan 2025
Planet Ubuntu
Scarlett Gately Moore: KDE: Snaps bug fixes and Kubuntu: Noble updates
Fixed a major crash bug in our apps that use WebEngine, and I also went ahead and updated these to core24: https://bugs.launchpad.net/snapd/+bug/2095418 and https://bugs.kde.org/show_bug.cgi?id=498663
Fixed Okular:
- Can't import certificates to digitally sign in Okular: https://bugs.kde.org/show_bug.cgi?id=498558
- Can't open files: https://bugs.kde.org/show_bug.cgi?id=421987 and https://bugs.kde.org/show_bug.cgi?id=415711
Fixed Skanpage not launching (https://bugs.kde.org/show_bug.cgi?id=493847); the fix is in -edge, please help test.
Fixed Ghostwriter: https://bugs.kde.org/show_bug.cgi?id=481258
New KDE Snaps!
Kalm - Breathing techniques
Telly-skout - Display TV guides
Kubuntu: Plasma 5.27.12 has been uploaded to archive -proposed and should make the .2 release!
I hate asking but I am unemployable with this broken arm fiasco. If you could spare anything it would be appreciated! https://gofund.me/573cc38e
24 Jan 2025 8:00pm GMT
Planet Debian
Scarlett Gately Moore: KDE: Snaps bug fixes and Kubuntu: Noble updates
Fixed a major crash bug in our apps that use WebEngine, and I also went ahead and updated these to core24: https://bugs.launchpad.net/snapd/+bug/2095418 and https://bugs.kde.org/show_bug.cgi?id=498663
Fixed Okular:
- Can't import certificates to digitally sign in Okular: https://bugs.kde.org/show_bug.cgi?id=498558
- Can't open files: https://bugs.kde.org/show_bug.cgi?id=421987 and https://bugs.kde.org/show_bug.cgi?id=415711
Fixed Skanpage not launching (https://bugs.kde.org/show_bug.cgi?id=493847); the fix is in -edge, please help test.
Fixed Ghostwriter: https://bugs.kde.org/show_bug.cgi?id=481258
New KDE Snaps!
Kalm - Breathing techniques
Telly-skout - Display TV guides
Kubuntu: Plasma 5.27.12 has been uploaded to archive -proposed and should make the .2 release!
I hate asking but I am unemployable with this broken arm fiasco. If you could spare anything it would be appreciated! https://gofund.me/573cc38e
24 Jan 2025 8:00pm GMT
Planet KDE | English
KDE: Snaps bug fixes and Kubuntu: Noble updates
Fixed a major crash bug in our apps that use WebEngine, and I also went ahead and updated these to core24: https://bugs.launchpad.net/snapd/+bug/2095418 and https://bugs.kde.org/show_bug.cgi?id=498663
Fixed Okular:
- Can't import certificates to digitally sign in Okular: https://bugs.kde.org/show_bug.cgi?id=498558
- Can't open files: https://bugs.kde.org/show_bug.cgi?id=421987 and https://bugs.kde.org/show_bug.cgi?id=415711
Fixed Skanpage not launching (https://bugs.kde.org/show_bug.cgi?id=493847); the fix is in -edge, please help test.
Fixed Ghostwriter: https://bugs.kde.org/show_bug.cgi?id=481258
New KDE Snaps!
Kalm - Breathing techniques
Telly-skout - Display TV guides
Kubuntu: Plasma 5.27.12 has been uploaded to archive -proposed and should make the .2 release!
I hate asking but I am unemployable with this broken arm fiasco. If you could spare anything it would be appreciated! https://gofund.me/573cc38e
24 Jan 2025 8:00pm GMT
Linux Today
How To Change Directory And List Files In One Command In Fish Shell
In this guide, we will explain different methods to change directories and list files in one command using Fish Shell on Linux.
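For instance, one minimal possibility in Fish (a sketch, not necessarily the guide's own approach) is a small function that combines both steps:

# cd into a directory and immediately list its contents
function cl --description 'change directory and list files'
    cd $argv[1]; and ls -lh
end

# usage: cl /etc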
24 Jan 2025 6:23pm GMT
Linuxiac
How to Install Visual Studio Code on Arch Linux
Learn how to install Visual Studio Code on Arch Linux step-by-step to enjoy seamless coding with the most popular code editor.
24 Jan 2025 4:51pm GMT
OMG! Ubuntu
Ubuntu 24.04.2 Arrives Feb 13 with Linux Kernel 6.11
Ubuntu 24.04.2 LTS is scheduled for release on February 13th - in time for Valentine's Day, aww. Canonical's Florent Jacquet shares the date on the Ubuntu Developer mailing list today, along with a note to developers to be mindful of their package uploads to noble in the coming weeks. As a result, if you're using the latest long-term support release you may notice a slight drop-off in the number of non-essential updates Software Updater bugs you to install between now and February 13. This allows devs to create a snapshot and test it properly. Ubuntu point releases rarely deliver new […]
24 Jan 2025 4:37pm GMT
Planet GNOME
Felipe Borges: Time to write proposals for GSoC 2025 with GNOME!
It is that time of the year again when we start gathering ideas and mentors for Google Summer of Code.
@Mentors, please submit new proposals in our Project ideas GitLab repository before the end of January.
Proposals will be reviewed by the GNOME GSoC Admins and posted in https://gsoc.gnome.org/2025 when approved.
If you have any doubts, please don't hesitate to contact the GNOME Internship Committee.
24 Jan 2025 10:44am GMT
Linuxiac
Debian 13 Freeze Begins in March, Debian 15 Codename Revealed
Debian 13 'Trixie' freeze starts March 15, 2025. The future Debian 15 release is officially named 'Duke.'
24 Jan 2025 10:23am GMT
Fedora People
Fedora Community Blog: Infra and RelEng Update – Week 04 2025
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 20th January - 24th January 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It's responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
Fedora Infra
- In progress:
- Please generate the Fedora 44 gpg key
- Opt-in for AWS region to mx-central-1 acct:125523088429
- Opt-in for AWS region to ap-southeast-7 acct:125523088429
- retire easyfix
- bvmhost-p09-03 ends in emergency mode
- Add StartInstanceRefresh permission for fedora-ci-testing-farm user in AWS
- distgit-bugzilla-sync poddlers in both staging and prod downloading a lot from production
- Deploy Element Server Suite operator in staging
- Create CentOS calendar
- Pagure returns error 500 trying to open a PR on https://src.fedoraproject.org/rpms/python-setuptools-gettext
- Inactive packagers policy for the F41 release cycle
- a real domain name for konflux
- Retirement of `monitor-dashboard` tracker
- Create a POC integration in Konflux for fedora-infra/webhook-to-fedora-messaging
- Retirement of `monitor-gating` tracker
- setup ipa02.stg and ipa03.stg again as replicas
- Manage our new testing.farm domain via AWS Route53
- Add support for creating RDS instances under `testing-farm-` prefix
- Move OpenShift apps from deploymentconfig to deployment
- RFE: fedoras container image register change
- The process to update the OpenH264 repos is broken
- httpd 2.4.61 causing issue in fedora infrastructure
- Support allocation dedicated hosts for Testing Farm
- fedorapeople.org directory listing theme needs refreshing
- EPEL minor version archive repos in MirrorManager
- vmhost-x86-copr01.rdu-cc.fedoraproject.org DOWN
- Add yselkowitz to list to notify when ELN builds fail
- Cleaning script for communishift
- rhel7 eol
- move resultsdb-ci-listener deployment pipeline
- rhel7 EOL - github2fedmsg
- Move from iptables to firewalld
- Help me move my discourse bots to production?
- fedmsg -> fedora-messaging migration tracker
- rhel9 adoption
- Create monitoring tool for rabbitmq certificates
- Replace Nagios with Zabbix in Fedora Infrastructure
- Migration of registry.fedoraproject.org to quay.io
- Commits don't end up on the scm-commits list
- Done:
- Move personal GitHub repository to fedora-infra org
- extra trailing slash on epel.io redirect breaks page rendering
- Messed up rpm/valgrind pull request/branch
- New Issue information page broken link
- bastion delivering locally to non contributor accounts
- mirrorlist is returning too few mirrors
- New requirements for Google and Yahoo mail at least
CentOS Infra including CentOS CI
- In progress:
- Done:
Release Engineering
- In progress:
- Unretire tachyon
- Unretire ethos
- Quay.io repository for fedora-bootc-tier-x
- Missing rawhide branch for recently created package 'pybind11-json'
- F42 Self-Contained Change: Switch to EROFS for Live Media
- Please remove two branches from octave-iso2mesh
- Send compose reports to a to-be-created separate ML
- .sqlite metadata missing in f41-updates and f41-updates-testing repositories
- F42 system-wide change: GNU Toolchain update for F42 https://fedoraproject.org/wiki/Changes/GNUToolchainF42
- Delete "firefox-fix" branch in xdg-desktop-portal
- please create epel10 based el10-openjdk tag
- Could we have fedoraproject-updates-archive.fedoraproject.org for Rawhide?
- Investigate and untag packages that failed gating but were merged in via mass rebuild
- a few mass rebuild bumps failed to git push - script should retry or error
- Renaming distribution media for Fedora Server
- Package retirements are broken in rawhide
- Implement checks on package retirements
- Untag containers-common-0.57.1-6.fc40
- orphan-all-packages.py should remove bugzilla_contact entries from fedora-scm-requests as well
- Packages that fail to build SRPM are not reported during the mass rebuild bugzillas
- When orphaning packages, keep the original owner as co-maintainer
- Create an ansible playbook to do the mass-branching
- RFE: Integration of Anitya to Packager Workflow
- Cleaning old stuff from koji composes directories
- Fix tokens for ftbfs_weekly_reminder. script
- Update bootloader components assignee to "Bootloader Engineering Team" for improved collaboration
- Done:
- Stalled EPEL package: python-zope-exceptions
- Stalled EPEL package: python-pytest-subtests
- F42 Self-Contained Change: Promote KDE Plasma Desktop variant to Edition
- The build `localsearch-3.8~rc-1.fc42` is in `DELETED` state and is blocking me from rebuilding
- Failed request-repo action for sgx-rpm-macros
- Unretire virt-who
- latest-Fedora-Cloud-41 directory missing on the cloud compose listing
If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.
The post Infra and RelEng Update - Week 04 2025 appeared first on Fedora Community Blog.
24 Jan 2025 10:00am GMT
Planet Debian
Jonathan Dowland: FOSDEM 2025
I'm going to FOSDEM 2025!
As usual, I'll be in the Java Devroom for most of that day, which this time around is Saturday.
Please recommend me any talks!
This is my shortlist so far:
- no more boot loader: boot using the Linux kernel
- aerc, an email client for the discerning hacker
- Supersonic retro development with Docker
- Raiders of the lost hard drive
- Rediscovering the fun of programming with the Game Boy
- Fixing CVEs on Debian: almost everything you should know about it
- Building the Future: Understanding and Contributing to Immutable Linux Distributions
- Generating immutable, A/B updatable, securely booting Debian images
- a tale of several distros joining forces for a common goal: reproducible builds
- Finding Anomalies in the Debian Packaging System to Detect Supply Chain Attacks
- The State of OpenJDK
- Project Lilliput - Looking Back and Ahead
- (Almost) everything I knew about Java performance was wrong
- Reduce the size of your Java run-time image
- Project Leyden - Past and the Future
- DMARCaroni: where do DMARC reports go after they are sent?
24 Jan 2025 9:41am GMT
Fedora People
Fedora Magazine: Update on hibernation in Fedora Workstation
Goals and rationale
Hibernation stores the state of the whole operating system - the contents of memory used by the kernel and all programs - on disk. The machine is then completely powered off. Upon next boot, this state is restored and the old kernel and all the programs that were running continue execution.
Hibernation is used less often nowadays, because "suspend" - the state where the CPU is powered down but the contents of memory are preserved - works fine on most laptops and other small devices. But if suspend is implemented poorly and drains the battery too quickly, or if the user needs to completely power off the device for some reason, hibernation can still be useful.
We need a storage area for hibernation. The kernel allows two options:
- either a single large-enough swap device, usually a partition,
- or a single large-enough swap file on some file system.
Fedora Linux installations by default do not use a normal swap device or file. Instead, a zram device is created, which is an in-memory compressed swap area. It is not suitable for hibernation. This means that hibernation does not work out-of-the-box on Fedora Linux. This guide describes how to create a swap file to enable hibernation.
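To check which swap areas are currently active (a default Fedora Linux install should show a single zram device), you can run:
swapon --show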
Limitations
This method only works on UEFI!
To check that the system uses UEFI:
bootctl
If this command prints "Not booted with EFI", then the method described below won't work. Refer to the original Hibernation in Fedora Workstation (for Fedora Linux 36) instead.
Another severe limitation is that Secure Boot must be disabled; the kernel does not allow hibernation with Secure Boot enabled! Disabling Secure Boot reduces the security of the machine somewhat, so do this only if you think hibernation is worth the tradeoff.
Implementation
First, check whether Secure Boot is on:
bootctl
If this prints "Secure Boot: disabled" then SB is off. Otherwise, reboot the machine, go into UEFI settings, and disable Secure Boot.
Second, create and enable a swap file:
# Compute a swap size based on installed RAM: twice the RAM below 2 GiB,
# 1.5x the RAM up to 8 GiB, and equal to the RAM above that.
SWAPSIZE=$(free | awk '/Mem/ {x=$2/1024/1024; printf "%.0fG", (x<2 ? 2*x : x<8 ? 1.5*x : x) }')
# Create a dedicated btrfs subvolume and the swap file inside it.
sudo btrfs subvolume create /var/swap
sudo mkswap --file -L SWAPFILE --size $SWAPSIZE /var/swap/swapfile
# Register the swap file in fstab and activate it.
sudo bash -c 'echo /var/swap/swapfile none swap defaults 0 0 >>/etc/fstab'
sudo swapon -av
This should print a message that swap was enabled on /var/swap/swapfile. The swap file is added to fstab, so it'll be permanently active. This is a good thing; it should make the system more reliable in general.
Now we are ready to test hibernation:
systemctl hibernate
After the system has shut down, boot it again and let one of the kernels start. The machine should return to the previous state from before hibernation.
This method does not require further configuration, because systemd automatically stores the location of the swap file in a UEFI variable before entering hibernation; after the reboot, it reads that variable and instructs the kernel to resume from this location. This only works on UEFI systems, but is otherwise quite simple and robust.
Reverting the changes
sudo swapoff -v /var/swap/swapfile
sudo sed -r -i '/.var.swap.swapfile/d' /etc/fstab
sudo btrfs subvolume delete /var/swap
After that, reenable SecureBoot if appropriate.
Troubleshooting
This process may fail in two ways:
- either going into hibernation fails, i.e. the kernel does not save the state and the machine does not actually power off,
- or loading of saved state fails, and we end up with a fresh boot.
In both cases, the first step is to look at journalctl -b, in particular any error lines.
24 Jan 2025 8:00am GMT
Planet GNOME
This Week in GNOME: #184 Upcoming Freeze
Update on what happened across the GNOME project in the week from January 17 to January 24.
GNOME Releases
Sophie 🏳️🌈 🏳️⚧️ (she/her) reports
In about one week from today, on February 1st, APIs, features, and user interfaces are frozen for GNOME 48. The release for GNOME 48 is planned for March 19th. More details and dates are available in the release calendar.
Third Party Projects
petsoi reports
This week, I released my very first app, Words!, a game inspired by Wordle. You can find it on Flathub.
Some features, like support for different dictionaries with varying word lengths and multiple languages, are still on my todo list.
Happy word hunting!
Giant Pink Robots! announces
Varia download manager got an update that's probably its biggest since the first release.
- The most important new feature is yt-dlp integration allowing for video and audio downloads from any supported website at any supported quality setting. These downloads are fully integrated into Varia and behave like any other download type.
- New adaptive layout allows for smaller window sizes and supports mobile devices.
- Way better handling of downloads for better performance and also to crush bugs. Downloads that were paused stay paused upon relaunch, which I think was one of the biggest issues.
- More settings for torrents, allowing for adjustments to the seeding ratio and custom download directory. .torrent files can now be dragged onto the window.
You can get it here: https://giantpinkrobots.github.io/varia/
Crosswords
A crossword puzzle game and creator.
jrb announces
Crosswords 0.3.14 was released. For this version, almost all the changes were in the underlying code and Crossword Editor. Improvements include:
- libipuz is ported to GObject Introspection and has developer documentation. It's much closer to a stable API.
- Autofill is massively improved. Full boards are now solvable, and tough corners will fill or fail quicker
- Selection of cells is saner, removing the need for nested tabs.
- A preloaded dictionary from Wiktionary is included to supplement the word lists.
- Enhanced substring matching added for cryptic indicators.
- A preview window is added to check on a puzzle in development.
This is the first version of the Editor that is generally usable. If you've ever wanted to write a crossword, please give it a try and let me know how it goes.
Read more at the release notes
That's all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
24 Jan 2025 12:00am GMT
23 Jan 2025
OMG! Ubuntu
Vivaldi 7.1 Delivers Speed Dial Buffs, New Search Engine
Vivaldi web browser has just released its first major update of the year - a corker it is, too! Fans of the Chromium-based browser (though Vivaldi Technologies doesn't appear to be part of the new Linux Foundation-led Supporters of Chromium Browsers project) will discover a bunch of improvements to the Dashboard feature Vivaldi 7.0 delivered. A new weather widget can be added to see current conditions and hourly and weekly weather forecasts for custom locations, plus the ability to set preferred temperature, precipitation and wind speed units (Celsius, mm, and mph ftw). Keeping things scandi-cool, the Norway-based browser makes use of […]
23 Jan 2025 7:19pm GMT
Planet GNOME
Adetoye Anointing: Extracting Texts And Elements From SVG2
Have you ever wondered how SVG files render complex text layouts with different styles and directions so seamlessly? At the core of this magic lie text layout algorithms, an essential component of SVG rendering that ensures text appears exactly as intended.
Text layout algorithms are vital for rendering SVGs that include styled or bidirectional text. However, before layout comes text extraction: the process of collecting and organizing text content and properties from the XML tree to enable accurate rendering.
The Extraction Process
SVGs, being XML-based formats, resemble a tree-like structure similar to HTML. To extract information programmatically, you navigate through nodes in this structure.
Each node in the XML tree holds critical details for implementing the SVG2 text layout algorithm, including:
- Text content
- Bidi-control properties (manage text directionality)
- Styling attributes like font and spacing
Understanding Bidi-Control
Bidi-control refers to managing text direction (e.g., Left-to-Right or Right-to-Left) using special Unicode characters. This is crucial for accurately displaying mixed-direction text, such as combining English and Arabic.
A Basic Example
<text> foo <tspan>bar</tspan> baz </text>
Below is the structure librsvg creates when it parses this XML tree.
Here, the <text> element has three children:
- A text node containing the characters "foo".
- A <tspan> element with a single child text node containing "bar".
- Another text node containing "baz".
When traversed programmatically, the extracted text from this structure would be "foobarbaz".
To extract text from the XML tree:
- Start traversing nodes from the <text> element.
- Continue through each child until the final closing tag.
- Concatenate character content into a single string.
While this example seems straightforward, real-world SVG2 files introduce additional complexities, such as bidi-control and styling, which must be handled during text extraction.
Handling Complex SVG Trees
Real-world examples often involve more than just plain text nodes. Let's examine a more complex XML tree that includes styling and bidi-control:
Example:
<text> "Hello" <tspan font-style="bold;">bold</tspan> <tspan direction="rtl" unicode-bidi="bidi-override">مرحبا</tspan> <tspan font-style="italic;">world</tspan> </text>
In this example, the <text> element has four children:
- A text node containing "Hello".
- A <tspan> element with font-weight: bold, containing the text "bold".
- A <tspan> element with bidi-control set to RTL (Right-To-Left), containing Arabic text "مرحبا".
- Another <tspan> element with font-style: italic, containing "world".
This structure introduces challenges, such as:
- Styling: Managing diverse font styles (e.g., bold, italic).
- Whitespace and Positioning: Handling spacing between nodes.
- Bidirectional Control: Ensuring proper text flow for mixed-direction content.
Programmatically extracting text from such structures involves traversing nodes, identifying relevant attributes, and aggregating the text and bidi-control characters accurately.
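As an illustration, here is a minimal sketch in Rust (librsvg's implementation language) of such a traversal; the Node type is a hypothetical stand-in for librsvg's real tree types:

// Hypothetical stand-in for librsvg's node tree.
enum Node {
    Text(String),       // character data, e.g. "foo"
    Element(Vec<Node>), // e.g. <text> or <tspan> with its children
}

// Walk the tree depth-first, concatenating all character content.
fn extract_text(node: &Node, out: &mut String) {
    match node {
        Node::Text(s) => out.push_str(s),
        Node::Element(children) => {
            for child in children {
                extract_text(child, out);
            }
        }
    }
}

Run over the first example above, this would produce "foobarbaz" (after whitespace handling); a real implementation additionally has to record bidi-control characters and styling attributes at each element boundary.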
Why Test-Driven Development Matters
One significant insight during development was the use of Test-Driven Development (TDD), thanks to my mentor Federico. Writing tests before implementation made it easier to visualize and address complex scenarios. This approach turned what initially seemed overwhelming into manageable steps, leading to robust and reliable solutions.
Conclusion
Text extraction is the foundational step in implementing the SVG2 text layout algorithm. By effectively handling complexities such as bidi-control and styling, we ensure that SVGs render text accurately and beautifully, regardless of direction or styling nuances.
If you've been following my articles and feel inspired to contribute to librsvg or open source projects, I'd love to hear from you! Drop a comment below to share your thoughts, ask questions, or offer insights. Your contributions-whether in the form of questions, ideas, or suggestions-are invaluable to both the development of librsvg and the ongoing discussion around SVG rendering.
In my next article, we'll explore how these extracted elements are processed and integrated into the text layout algorithm. Stay tuned-there's so much more to uncover!
23 Jan 2025 5:05pm GMT
Planet Ubuntu
Podcast Ubuntu Portugal: E333 GameDev, Com Soficious E Rafael Gonçalves - I
We welcomed two guests - Sofia «Soficious» and Rafael Gonçalves - to talk to us about the world of games: in particular, accessibility, development with Free Software, and Yellow Paint. What do you mean, yellow paint? It's true, yellow paint! The conversation was so interesting that it will be split into two episodes; this is the first part.
You know the drill: listen, subscribe, and share!
- https://www.altaccess.tech/
- https://masto.pt/@Soficious
- https://Instagram.com/soficious_
- https://twitch.tv/soficious
- https://x.com/Soficious_
- https://linktr.ee/soficious
- https://tiktok.com/@soficious
- https://youtube.com/@soficious
- https://store.steampowered.com/app/2645830/Stunt_Xpress/
- https://godotengine.org/
- https://www.threads.net/@gonzelviz
- https://mastodon.gamedev.place/@Gonzelvis
- https://x.com/gonzelvis
- LoCo PT: https://loco.ubuntu.com/teams/ubuntu-pt/
- Nitrokey: https://shop.nitrokey.com/shop?aff_ref=3
- Mastodon: https://masto.pt/@pup
- Youtube: https://youtube.com/PodcastUbuntuPortugal
Support
You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. And you can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
Attribution and licenses
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.
23 Jan 2025 12:00am GMT
22 Jan 2025
OMG! Ubuntu
Ignition is a Modern Startup Applications Utility for Linux
I won't lie: it's easy to add or remove startup apps, commands, and scripts in Ubuntu. Just open the Startup Applications tool, click 'Add', and away you go. But while Ubuntu's utility is adequate, it's not as user-friendly as similar tools available elsewhere. Sure, Startup Applications is equipped with the critical customisation fields a user will need to curate a set of software/services to start at login - SSH agent, VPN app, password manager, backup script, resolution tweaks, and so on - but it's rather rote. Take the way you add an app to start at login: Ubuntu's Startup Applications […]
22 Jan 2025 9:18pm GMT
Planet Ubuntu
Ubuntu Blog: Bringing multiple windows to Flutter desktop apps
Over the past 5 years, Canonical has been contributing to Flutter, including building out Linux support for Flutter applications, publishing libraries to help integrate into the Linux desktop and building modern applications for Ubuntu, including our software store. Last year we announced at the Ubuntu Summit that we've been working on bringing support for multiple windows to Flutter desktop apps.
Why multiple windows support for Flutter desktop apps is needed
One current limitation that Flutter desktop apps have is that they are confined to a single window. This makes sense on mobile where an app takes up the whole screen but for Flutter desktop apps there's much more space to take advantage of. We know that many members of the Flutter community - including us here at Canonical - have been waiting patiently to break out of that single window.
Canonical has a long history of working with graphical environments having produced Ubuntu Desktop for over 20 years. We want to make sure that the Flutter multi-window support works across a diverse range of desktops including all of those across the extremely varied Linux ecosystem. We're also thinking ahead about how to make sure Flutter desktop apps continue to work well as the concept of a desktop becomes more diverse.
Proposed solution for Flutter desktop apps
Desktop applications are made up of multiple windows that are used for all sorts of things, including tooltips, dialogs and menus. In the comparison below you can see the same Flutter desktop app running with the current version of Flutter on the left, and the multi-window version on the right. Notice how the app on the right feels more integrated: menus and tooltips are better aligned to the mouse cursor instead of being shifted or cropped to fit inside the window.
The best thing about the approach we've taken is both apps above are using the same standard Flutter Material widgets - the multi-window support is applied automatically. If the app is run in a situation where multi-window is not applicable (e.g. mobile), the app will revert to the traditional behaviour.
When you're ready to build a more complicated multi-window app this is easy to do as each window just fits into the Flutter widget tree.
Details for seasoned Flutter developers: you'll need to make a small update to the runner, and if you have an unmodified runner this is easy to migrate. A small change is also required to the main function in the Dart code.
Rolling this out
We're currently reviewing the changes to the Flutter engine and framework. We're working on a way to easily test these changes but if you're up for the challenge go ahead and build our branches and test it yourself.
When these changes have landed, building with the latest version of Flutter will enable your app to use multi-window on the Windows operating system. We are hard at work expanding this availability, and we will soon release multi-window support for Linux and macOS as well.
We look forward to seeing what amazing apps you will build in the future!
22 Jan 2025 10:00am GMT
16 Jan 2025
Planet Arch Linux
Critical rsync security release 3.4.0
We'd like to raise awareness about the rsync security release version 3.4.0-1, as described in our advisory ASA-202501-1. An attacker only requires anonymous read access to a vulnerable rsync server, such as a public mirror, to execute arbitrary code on the machine the server is running on. Additionally, attackers can take control of an affected server and read/write arbitrary files of any connected client. Sensitive data can be extracted, such as OpenPGP and SSH keys, and malicious code can be executed by overwriting files such as ~/.bashrc or ~/.popt. We highly advise anyone who runs an rsync daemon or client prior to version 3.4.0-1 to upgrade and reboot their systems immediately. As Arch Linux mirrors are mostly synchronized using rsync, we highly advise any mirror administrator to act immediately, even though the hosted package files themselves are cryptographically signed. All infrastructure servers and mirrors maintained by Arch Linux have already been updated.
16 Jan 2025 12:00am GMT
11 Jan 2025
Kernel Planet
Pete Zaitcev: Looking for a BSSID
I'm looking for a name for a new WiFi area.
The current one is called "Tokyo-Jupiter". It turns out to be hard to top: it meets all the requirements. It's a geographic area. It's weeb, but from old enough times: not Naruto Shippuuden, Attack On Titan, or Kimetsu no Yaiba. Classy and unique enough.
"Konoha" is too new, too washed-up, and too short.
"Kodena" and "Yokosuka" add a patriotic American tint nicely, but also too short.
"Minas-Tirith" is a place and outstanding in its reference, but not weeb.
"Big-Sight" is an opposite of the above: too much. I'm a weeb, not otaku.
Any ideas are appreciated.
UPDATE 2025-01-11: The provisional candidate is "Nishi-Teppelin". Don't google it, it's not canon. I remain open to better ideas.
11 Jan 2025 1:42am GMT
02 Jan 2025
Kernel Planet
Matthew Garrett: The GPU, not the TPM, is the root of hardware DRM
As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control
(from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.
I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.
What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.
Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.
The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.
The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.
In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).
Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.
The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.
02 Jan 2025 1:14am GMT
31 Dec 2024
Planet Arch Linux
2024 wrapped
Dear blog. This post is inspired by an old friend of mine who has been writing these for the past few years. I meant to do this for a while now, but ended up not preparing anything, so this post is me writing it from memory. There's likely stuff I forgot, me being gentle with myself I'll probably just permit myself to complete this list the next couple of days. I hate bragging, I try to not depend on external validation as much as possible, and being the anti-capitalist that I am, I try to be content with knowing I'm …
31 Dec 2024 12:00am GMT
29 Dec 2024
Planet Gentoo
FOSDEM 2025
It's FOSDEM time again! Join us at Université Libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. The upcoming FOSDEM 2025 will be held on February 1st and 2nd 2025. Our developers will be happy to greet all open source enthusiasts at our Gentoo stand (exact location still to be announced), which we will share this year with the Gentoo-based Flatcar Container Linux. Of course there's also the chance to celebrate 25 years of compiling! Visit this year's wiki page to see who's coming and for more practical information.
29 Dec 2024 6:00am GMT
24 Dec 2024
Planet Arch Linux
Goodbye, Sam
A eulogy for the greatest dog of all, and a friend I will never forget.
24 Dec 2024 12:00am GMT
20 Dec 2024
Planet Gentoo
Poetry(-core), or the ultimate footgun
I've been complaining about the Poetry project a lot, in particular about its use (or more precisely, the use of poetry-core) as a build system. In fact, it pretty much became a synonym of a footgun for me - and whenever I'm about to package some project using poetry-core, or switching to it, I've learned to expect some predictable mistake. I suppose the time has come to note all these pitfalls in a single blog post.
The nightmarish caret operator
One of the first things Poetry teaches us is to pin dependencies, SemVer-style. Well, I'm not complaining. I suppose it's a reasonable compromise between pinning exact versions (which just asks for dependency conflicts between different packages), and leaving user at the mercy of breaking changes in dependencies. The problem is, Poetry teaches us to treat these pins in a wholesale, one-size-fits-all manner.
What I'm talking about is the (in)famous caret operator. I mean, I suppose it's quite convenient for the general case of semantic versioning, where e.g. ^1.2.3 is a handy shorthand for >=1.2.3,<2.0.0, and works quite well for the not-exactly-SemVer case of ^0.2.3 meaning >=0.2.3,<0.3.0. However, the way it is presented as a panacea means that most of the time people use it for all their dependencies, whether it is meaningful there or not.
So some pins are correct, some are too strict and others are too lax. In the end, you get the worst of both worlds: you annoy distro packagers like us who have to keep relaxing your dependencies, and you don't help users who still get incidental breakage. Some people even use the caret operator for packages that clearly don't fit it at all. My favorite example is the equivalent of the following dependency:
tzdata = "^2023.3"
This actually suffers from two problems. Firstly, this package clearly uses CalVer rather than SemVer, so pinning to 2023 seems fishy. Secondly, since we are talking about timezone data, there is really no point in pinning at all - on the contrary, you always want to use up-to-date timezone data.
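A less restrictive declaration - say, tzdata = ">=2023.3", or simply tzdata = "*" (both valid Poetry constraint syntax; the concrete alternatives here are my suggestion, not a quote) - would serve users much better.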
The misleading include key
When people want to control which files are included in the source distributions, they resort to the include and exclude keys. And they add "obvious" blocks like the following:
include = [ "CHANGELOG", "README.md", "LICENSE", ]
Except that this is entirely wrong! A plain entry in the include key is included both in source and in binary distributions. Or, to put it more clearly, this code causes the following files to be installed:
/usr/lib/python3.12/site-packages/CHANGELOG
/usr/lib/python3.12/site-packages/LICENSE
/usr/lib/python3.12/site-packages/README.md
What you need to do instead is to annotate every file with the desired format, i.e.:
include = [
    { path = "CHANGELOG", format = "sdist" },
    { path = "README.md", format = "sdist" },
    { path = "LICENSE", format = "sdist" },
]
Yes, this is absolutely confusing and counterintuitive. On top of that, even today the first example in the linked documentation is clearly wrong. And people keep repeating this mistake over and over again - I know because I keep sending pull requests fixing them, and there is no end to them! In fact, I've even seen people adding additional entries without the format just below entries that did have it!
Schrödinger's optional dependency
Poetry has a custom way of declaring optional dependencies. You declare one just like a regular dependency, and add an optional key to it, e.g.:
[tool.poetry.dependencies]
python = "^3.7"
filetype = "^1.0.7"
deprecation = "^2.1.0"
# yaml-plugin extra
"ruamel.yaml" = {version = "^0.16.12", optional = true}
So that last dependency is optional, right? Well, not necessarily! It is not, unless you actually add it to some extras group, such as:
[tool.poetry.extras]
yaml-plugin = ["ruamel.yaml"]
And again, this weird behavior leads to real problems. If you declare a dependency as optional but forget to add it to some extras group, Poetry will just silently treat it as a required dependency. And this is really easy to miss, unless you actually look at the generated wheel metadata. A bug about this confusing handling of optional dependencies was filed back in 2020.
Summary
This is a handful of the common issues I've repeatedly seen when people try to use poetry-core as a build system. Sure, other PEP 517 backends aren't perfect and have their own issues. For one, setuptools pretty much consists of tons of legacy, buggy code and deprecated bits everyone uses anyway, and is barely kept alive these days. People fall into pitfalls there too.
However, I have never seen any other Python or non-Python build system that is as counterintuitive and mistake-prone as Poetry. On top of that, implementing PEP 621 (the pyproject.toml standard that pretty much every other PEP 517 backend follows) took 3 years - and even today, Poetry still defaults to its own, nonstandard configuration format.
Whenever I criticize Poetry, people ask me about the alternatives. For completeness, let me repeat my PEP 517 backend recommendations here:
- For pure Python packages, use either flit-core (lightweight, simple, no dependencies) or hatchling (popular and quite powerful, and we have to deal with its disadvantages anyway).
- For Python packages with C extensions, meson-python combines the power and correctness of Meson with good Python integration.
- For Python packages with Rust extensions, Maturin is the way to go.
20 Dec 2024 2:32pm GMT
12 Dec 2024
Kernel Planet
Matthew Garrett: When should we require that firmware be free?
The distinction between hardware and software has historically been relatively easy to understand - hardware is the physical object that software runs on. This is made more complicated by the existence of programmable logic like FPGAs, but by and large things tend to fall into fairly neat categories if we're drawing that distinction.
Conversations usually become more complicated when we introduce firmware, but should they? According to Wikipedia, firmware is "software that provides low-level control of computing device hardware", and basically anything that's generally described as firmware certainly fits into the "software" side of the above hardware/software binary. From a software freedom perspective, this seems like something where the obvious answer to "Should this be free?" is "yes", but it's worth thinking about why the answer is yes - the goal of free software isn't freedom for freedom's sake; the freedoms embodied in the Free Software Definition (and by proxy the DFSG) are grounded in real-world practicalities.
How do these line up for firmware? Firmware can fit into two main classes - it can be something that's responsible for initialisation of the hardware (such as, historically, BIOS, which is involved in initialisation and boot and then largely irrelevant for runtime[1]) or it can be something that makes the hardware work at runtime (wifi card firmware being an obvious example). The role of free software in the latter case feels fairly intuitive, since the interface and functionality the hardware offers to the operating system is frequently largely defined by the firmware running on it. Your wifi chipset is, these days, largely a software defined radio, and what you can do with it is determined by what the firmware it's running allows you to do. Sometimes those restrictions may be required by law, but other times they're simply because the people writing the firmware aren't interested in supporting a feature - they may see no reason to allow raw radio packets to be provided to the OS, for instance. We also shouldn't ignore the fact that sufficiently complicated firmware exposed to untrusted input (as is the case in most wifi scenarios) may contain exploitable vulnerabilities allowing attackers to gain arbitrary code execution on the wifi chipset - and potentially use that as a way to gain control of the host OS (see this writeup for an example). Vendors being in a unique position to update that firmware means users may never receive security updates, leaving them with a choice between discarding hardware that otherwise works perfectly or leaving themselves vulnerable to known security issues.
But even the cases where firmware does nothing other than initialise the hardware cause problems. A lot of hardware has functionality controlled by registers that can be locked during the boot process. Vendor firmware may choose to disable (or, rather, never to enable) functionality that may be beneficial to a user, and then lock out the ability to reconfigure the hardware later. Without any ability to modify that firmware, the user lacks the freedom to choose what functionality their hardware makes available to them. Again, the ability to inspect this firmware and modify it has a distinct benefit to the user.
So, from a practical perspective, I think there's a strong argument that users would benefit from most (if not all) firmware being free software, and I don't think that's an especially controversial argument. So I think this is less of a philosophical discussion, and more of a strategic one - is spending time focused on ensuring firmware is free worthwhile, and if so what's an appropriate way of achieving this?
I think there are two consistent ways to view this. One is to view free firmware as desirable but not necessary. This approach basically argues that code running on hardware that isn't the main CPU would benefit from being free, in the same way that code running on a remote network service would benefit from being free, but that this is much less important than ensuring that all the code running in the context of the OS on the primary CPU is free. The other is the maximalist position: no compromise at all - all software on a system, whether it's running at boot or during runtime, and whether it's running on the primary CPU or any other component on the board, should be free.
Personally, I lean towards the former and think there's a reasonably coherent argument here. I think users would benefit from the ability to modify the code running on hardware that their OS talks to, in the same way that I think users would benefit from the ability to modify the code running on hardware the other side of a network link that their browser talks to. I also think that there's enough that remains to be done in terms of what's running on the host CPU that it's not worth having that fight yet. But I think the latter is absolutely intellectually consistent, and while I don't agree with it from a pragmatic perspective I think things would undeniably be better if we lived in that world.
This feels like a thing you'd expect the Free Software Foundation to have opinions on, and it does! There are two primarily relevant things - the Respects your Freedoms campaign focused on ensuring that certified hardware meets certain requirements (including around firmware), and the Free System Distribution Guidelines, which define a baseline for an OS to be considered free by the FSF (including requirements around firmware).
RYF requires that all software on a piece of hardware be free, except under one specific set of circumstances: if software runs on (a) a secondary processor and (b) within which software installation is not intended after the user obtains the product, then the software does not need to be free. Condition (b) effectively means that the firmware has to be in ROM, since any runtime interface that allows the firmware to be loaded or updated is intended to allow software installation after the user obtains the product.
The Free System Distribution Guidelines require that all non-free firmware be removed from the OS before it can be considered free. The recommended mechanism to achieve this is via linux-libre, a project that produces tooling to remove anything that looks plausibly like a non-free firmware blob from the Linux source code, along with any incitement to the user to load firmware - including even removing suggestions to update CPU microcode in order to mitigate CPU vulnerabilities.
For hardware that requires non-free firmware to be loaded at runtime in order to work, linux-libre doesn't do anything to work around this - the hardware will simply not work. In this respect, linux-libre reduces the amount of non-free firmware running on a system in the same way that removing the hardware would. This presumably encourages users to purchase RYF compliant hardware.
But does that actually improve things? RYF doesn't require that a piece of hardware have no non-free firmware, it simply requires that any non-free firmware be hidden from the user. CPU microcode is an instructive example here. At the time of writing, every laptop listed here has an Intel CPU. Every Intel CPU has microcode in ROM, typically an early revision that is known to have many bugs. The expectation is that this microcode is updated in the field by either the firmware or the OS at boot time - the updated version is loaded into RAM on the CPU, and vanishes if power is cut. The combination of RYF and linux-libre doesn't reduce the amount of non-free code running inside the CPU, it just means that the user (a) is more likely to hit since-fixed bugs (including security ones!), and (b) has less guidance on how to avoid them.
As long as RYF permits hardware that makes use of non-free firmware I think it hurts more than it helps. In many cases users aren't guided away from non-free firmware - instead it's hidden away from them, leaving them less aware that their freedom is constrained. Linux-libre goes further, refusing to even inform the user that the non-free firmware that their hardware depends on can be upgraded to improve their security.
Out of sight shouldn't mean out of mind. If non-free firmware is a threat to user freedom then allowing it to exist in ROM doesn't do anything to solve that problem. And if it isn't a threat to user freedom, then what's the point of requiring linux-libre for a Linux distribution to be considered free by the FSF? We seem to have ended up in the worst case scenario, where nothing is being done to actually replace any of the non-free firmware running on people's systems and where users may even end up with a reduced awareness that the non-free firmware even exists.
[1] Yes yes SMM
12 Dec 2024 3:57pm GMT
10 Nov 2024
Planet Gentoo
The peculiar world of Gentoo package testing
While discussing uv tests with Fedora developers, it occurred to me how different your average Gentoo testing environment is - not only from those used upstream, but also from those used by other Linux distributions. This article is dedicated to exactly that: pointing out how it's different, what that implies, and why I think it's not a bad thing.
Gentoo as a source-first distro
The first important thing about Gentoo is that it is a source-first distribution. The best way to explain this is to compare it with your average "binary" distribution.
In a "binary" distribution, source and binary packages are somewhat isolated from one another. Developers work with source packages (recipes, specs) and use them to build binary packages - either directly, or via an automation. Then the binary packages hit repositories. The end users usually do not interface with sources at all - may well not even be aware that such a thing exists.
In Gentoo, on the other hand, source packages are first-class citizens. All users use source repositories, and can optionally use local or remote binary package repositories. I think the best way of thinking about binary packages is as a form of "cache".
If the package manager is configured to use binary packages, it attempts to find a package that matches the build parameters - the package version, USE flags, dependencies. If it finds a match, it can use it. If it doesn't, it just proceeds with building from source. If configured to do so, it may write a binary package as a side effect of that - almost literally cache it. It can also be set to create a binary package without installing it (pre-fill the "cache"). It should hardly surprise anyone at this point that the default local binary packages repository is under the /var/cache tree.
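(For the Portage-curious, naming the usual knobs here as an aside: this corresponds roughly to emerge --usepkg for reading from the "cache", FEATURES="buildpkg" for writing to it as a side effect, and emerge --buildpkgonly for pre-filling it without installing.)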
A side implication of this is that the binary packages provided by Gentoo are a subset of all packages available - and on top of that, only a small number of viable package configurations are covered by the official packages.
The build phases
The source build in Gentoo is split into a few phases. The central phases that are of interest here are largely inspired by how autotools-based packages were built. These are:
- src_configure - meant to pass input parameters to the build system, and get it to perform necessary platform checks. Usually involves invoking a configure script, or an equivalent action of a build system such as CMake, Meson or another.
- src_compile - meant to execute the bulk of compilation, and leave the artifacts in the build tree. Usually involves invoking a builder such as make or ninja.
- src_test - meant to run the test suite, if the user wishes testing to be done. Usually involves invoking the check or test target.
- src_install - meant to install the artifacts and other files from the work directory into a staging directory (not the live system). The files can be afterwards transferred to the live system and/or packed into a binary package. Usually involves invoking the install target.
Clearly, it's very similar to how you'd compile and install software yourself: configure, build, optionally test before installing, and then install.
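In shell terms, for a classic autotools package the default phase implementations map almost one-to-one onto ./configure, make, make check, and make DESTDIR="${D}" install (where ${D} is the staging directory - my paraphrase of typical defaults, not a literal quote of the eclass code).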
Of course, this process is not really one-size-fits-all. For example, modern Python packages no longer even try to fit into it. Instead, we build the wheel in the PEP 517 blackbox manner and install it to a temporary directory straight in the compile phase. As a result, the test phase is run with a locally-installed package (relying on the logic from virtual environments), and the install phase merely moves files around for the package manager to pick them up.
The implications for testing
The key takeaways of the process are these:
- The test phase is run inside the working tree, against a package that was just built but not installed into the live system.
- All the package's build-time dependencies should be installed into the live system.
- However, the system may contain any other packages, including packages that could affect the just-built package or its test suite in unpredictable ways.
- As a corollary, the live system may or may not contain a copy of the package in question already installed. And if it does, it may be a different version, and/or a different build configuration.
All of these mean trouble. Sometimes random packages will cause the tests to fail as false positives - and sometimes they will also make them wrongly pass or get ignored. Sometimes packages already installed will prevent developers from seeing that they've missed some dependency. Often mismatches between installed packages will make reproducing issues hard. On top of that, sometimes an earlier installed copy of the package will leak into the test environment, causing confusing problems.
If there are so many negatives, why do we do it this way, then? Because there is also a very important positive: the packages are tested as close to the production environment as possible (short of actually installing them - but we want to test before that happens). The presence of a certain package may cause tests to fail as a false positive - but it may also uncover an actual runtime issue, one that would not otherwise be caught until it actually broke production. And I'm not talking theory here. While I don't have any links handy right now, over and over again we have hit real issues - either ones that hadn't been caught by upstream CI setups yet, or ones that simply couldn't have been caught in an idealized test environment.
So yeah, testing stuff this way may be quite a pain, and a source of huge frustration with the constant stream of false positives. But it's also an important strength that no idealized - not to say "lazy" - test environment can bring. Add to that the fact that a fair number of Gentoo users are actually installing their packages with tests enabled, and you get testing on a huge variety of systems, with different architectures, dependency versions and USE flags, configuration files… and on top of that, a knack for hacking. Yeah, people hate us for finding all these bugs they'd rather not hear about.
10 Nov 2024 2:33pm GMT
16 Oct 2024
Planet Maemo
Adding buffering hysteresis to the WebKit GStreamer video player
The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.
The player private can have 3 buffering modes:
- On-disk buffering: This is the typical mode on desktop systems, but it is frequently disabled on purpose on embedded devices to avoid wearing out their flash storage. All the video content is downloaded to disk, and the buffering percentage refers to the total size of the video. A GstDownloadBuffer element is present in the pipeline in this case. Buffering level monitoring is done by polling the pipeline every second, using the fillTimerFired() method.
- In-memory buffering: This is the typical mode on embedded systems, and on desktop systems in the case of streamed (live) content. The video is downloaded progressively, and only the part of it ahead of the current playback time is buffered. A GstQueue2 element is present in the pipeline in this case. Buffering level monitoring is done by listening to GST_MESSAGE_BUFFERING bus messages and using the buffering level stored on them. This is the case that motivated the refactoring described in this blog post: it is what we actually wanted to correct on Broadcom platforms, and what motivated the addition of hysteresis working on all the platforms.
- Local files: Files, MediaStream sources and other special origins of video don't do buffering at all (no GstDownloadBuffer nor GstQueue2 element is present in the pipeline). They work like the on-disk buffering mode in the sense that fillTimerFired() is used, but the reported level is relative, much like in the streaming case.
In the initial version of the refactoring I was unaware of this third case, and only realized it existed when tests triggered the assert I had added to ensure that the on-disk buffering method was working in GST_BUFFERING_DOWNLOAD mode.
The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.
So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and to add those to the levels reported by GstQueue2. There's also a GstMultiQueue in the pipeline that can hold a significant amount of buffers, so I asked it for its level too. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it out.
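For illustration, such an average over the last N buffering readings can be as simple as the following sketch (names and window handling are mine, not the actual WebKit patch):

#include <cstddef>
#include <deque>

// Sketch of a windowed moving average for buffering levels (illustrative only).
class MovingAverage {
public:
    explicit MovingAverage(size_t windowSize) : m_windowSize(windowSize) { }

    // Push a new reading and return the smoothed level.
    int push(int level)
    {
        m_samples.push_back(level);
        m_sum += level;
        if (m_samples.size() > m_windowSize) {
            m_sum -= m_samples.front();
            m_samples.pop_front();
        }
        return static_cast<int>(m_sum / static_cast<long>(m_samples.size()));
    }

private:
    size_t m_windowSize;
    std::deque<int> m_samples;
    long m_sum { 0 };
};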
All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.
Even with all those changes, undesirable swings in the buffering level kept happening. A careful analysis of the causes showed that the buffering level was being monitored from different places (and at different moments), and sometimes the level was regarded as "enough" and, a moment later, as "insufficient". This happened because the buffering level threshold was a single value. That's exactly what a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical level change to "low" only when it goes below the low watermark.
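The core of such a mechanism is tiny. A minimal sketch (hypothetical names; the real logic lives in updateBufferingStatus() and is more involved):

// Minimal hysteresis sketch (hypothetical names, not the actual WebKit code).
// The logical state only flips when a watermark is crossed, which filters out
// rapid oscillation around a single threshold value.
class BufferingHysteresis {
public:
    BufferingHysteresis(int lowWatermark, int highWatermark)
        : m_low(lowWatermark), m_high(highWatermark) { }

    // Feed the current buffering level (0-100); returns true while buffering
    // is logically "enough".
    bool update(int level)
    {
        if (m_enough && level < m_low)
            m_enough = false;  // crossed the low watermark: report "low"
        else if (!m_enough && level > m_high)
            m_enough = true;   // crossed the high watermark: report "full"
        return m_enough;       // between watermarks: keep the previous state
    }

private:
    int m_low;
    int m_high;
    bool m_enough { false };
};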
For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, and now WebKit GStreamer has much more robust buffering code than before. The instabilities observed on Broadcom devices were gone, and I could, at last, close Issue 1309.
16 Oct 2024 6:12am GMT
10 Sep 2024
Planet Maemo
Don’t shoot yourself in the foot with the C++ move constructor
Move semantics can be very useful for transferring ownership of resources but, like many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.
For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:
#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
public:
    A() { PF; }
    virtual ~A() { PF; }
    A(A&& other) { PF; std::swap(i, other.i); }

    int i = 0;
};

class B : public A {
public:
    B() { PF; }
    virtual ~B() { PF; }
    B(B&& other) { PF; std::swap(i, other.i); std::swap(j, other.j); }

    int j = 0;
};
If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!
Consider this usage of the classes defined before:
int main(int, char* argv[])
{
    printf("Creating B b1\n");
    B b1;
    b1.i = 1;
    b1.j = 2;
    printf("b1.i = %d\n", b1.i);
    printf("b1.j = %d\n", b1.j);

    printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
    A a(std::move(b1));
    printf("a.i = %d\n", a.i);
    // This may be reading memory beyond the object boundaries, which may not be
    // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
    printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);

    printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
    B b2(reinterpret_cast<B&&>(std::move(a)));
    printf("b2.i = %d\n", b2.i);
    printf("b2.j = %d\n", b2.j);
    printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. "
           "Oh, wait... (A)a never had a j field in the first place\n");

    printf("Destroying b2, a, b1\n");
    return 0;
}
If you've read the code, those printfs will already have given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you lose all the subclass-specific data, because no matter whether the original instance was of a subclass, only the superclass move constructor is used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:
Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690
Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started by using raw pointers for everything, then switched to references as a way to get rid of null pointer issues where possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues altogether (references are just pointers in disguise, after all, and there are ways to produce null and dangling references by mistake). But this last step, moving from references to copy/move semantics on whole objects, comes with the object slicing nuance explained in this post, and when the size of the project and all the other things you have to take into account steal your focus, it's easy to forget about it.
So, please remember: never use move semantics to convert your precious subclass instance into a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.
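If you want the compiler to watch your back, one general C++ idiom (my addition here, not something from any particular codebase) is to make the superclass copy/move operations protected, so that accidental slicing at the call site becomes a compile error:

class Base {
public:
    Base() = default;
    virtual ~Base() = default;

protected:
    // Only subclasses may copy/move a Base subobject (e.g. from their own
    // copy/move constructors). Client code like `Base b(std::move(derived));`
    // no longer compiles, so slicing is caught at build time.
    Base(const Base&) = default;
    Base(Base&&) = default;
    Base& operator=(const Base&) = default;
    Base& operator=(Base&&) = default;
};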
Happy coding!
10 Sep 2024 7:58am GMT
17 Jun 2024
Planet Maemo
Incorporating 3D Gaussian Splats into the graphics pipeline
3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.
Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].
In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.
Storage
The original implementation uses .ply files as its checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, which leads to increased file sizes.
For example, it stores the covariance as a scaling vector and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach is to exploit the symmetry of the covariance matrix, storing only the diagonal and the upper triangular entries, thereby eliminating the reconstruction step and reducing storage requirements.
Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.
Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can easily be interpreted by viewers unaware of Gaussian splats as a simple colored point cloud.
By directly storing the covariance as previously mentioned, we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extent, we can also use float16 for the position data, yielding additional storage savings.
With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
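To make the layout concrete, here is a sketch of such a 22-byte record (field names are mine; a real implementation would decode the uint16_t fields as IEEE half floats):

#include <cstdint>

// Hypothetical packed splat record, following the sizes discussed above:
// 3 x float16 position (6 bytes) + 6 x float16 covariance (12 bytes)
// + 4 x uint8 color/opacity (4 bytes) = 22 bytes per splat.
#pragma pack(push, 1)
struct PackedSplat {
    uint16_t position[3];   // x, y, z stored as half floats
    uint16_t covariance[6]; // diagonal + upper triangle of the symmetric 3x3 matrix
    uint8_t  rgba[4];       // level-0 SH folded into diffuse RGB, plus opacity
};
#pragma pack(pop)

static_assert(sizeof(PackedSplat) == 22, "expected 22 bytes per splat");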
Blending
The image formation model presented in the original paper [1] is similar to NeRF rendering, to which it is explicitly compared. It involves casting a ray and observing its intersections with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.
Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:
\begin{aligned}C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} + C_{dst}\\ \alpha_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} + \alpha_{dst}\end{aligned}
However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.
A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha), which yields the following blending equation:
C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}
This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.
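In OpenGL terms (one possible mapping of the factors above onto API calls; any graphics API with fixed-function blending exposes the equivalent state), the two setups would look like this:

#include <GL/gl.h>

// Front-to-back blending: factors (one_minus_dest_alpha, one). The shader must
// output premultiplied color (color.rgb * color.a), and the framebuffer alpha
// must be zero before the splat pass.
void setFrontToBackBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
}

// Back-to-front blending: standard alpha blending, with no assumption about
// the framebuffer's previous alpha contents.
void setBackToFrontBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}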
References
- Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
- Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
- Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.
17 Jun 2024 1:28pm GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices, interconnected by a new, efficient but still transparent TDMoIP protocol, to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today saw the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as the grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
- a rather beefy Supermicro 2U server with an EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
- an icE1usb with an RS422 interface board, connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
- a Livingston Portmaster3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup certainly has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter, the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started searching for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about the individual modules.
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 ANs with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than a single card provides, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card, and the transmit of the slave ports on the other card, at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
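To put that figure in perspective (a back-of-the-envelope calculation, assuming the nominal E1 bit rate of 2.048 Mbit/s): 12.5e-9 x 2.048e6 ≈ 0.026 bits per second of slip, i.e. roughly one bit every 39 seconds. Slow, but fatal for anything that assumes a common clock.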
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT