26 Apr 2024

feedLXer Linux News

13 Best Free and Open Source Linux Disk Cloning Tools

Disk cloning software is not simply helpful for system backups. It has a wide range of other uses such as provisioning new computers in the workplace, restoring computers from a master image, and system recovery.

26 Apr 2024 9:14am GMT

feedPlanet Debian

Russell Coker: Humane AI Pin

I wrote a blog post The Shape of Computers [1] exploring ideas of how computers might evolve and how we can use them. One of the devices I mentioned was the Humane AI Pin, which has just been the recipient of one of the biggest roast reviews I've ever seen [2], good work Marques Brownlee! As an aside I was once given a product to review which didn't work nearly as well as I think it should have worked so I sent an email to the developers saying "sorry this product failed to work well so I can't say anything good about it" and didn't publish a review.

One of the first things that caught my attention in the review is the note that the AI Pin doesn't connect to your phone. I think that everything should connect to everything else as a usability feature. For security we don't want so much connecting, and it's quite reasonable to turn off various connections at appropriate times; the Librem5 is an example of how this can be done with hardware switches to disable Wifi etc. But to just not have connectivity is bad.

The next noteworthy thing is the external battery which also acts as a magnetic attachment from inside your shirt. So I guess it's using wireless charging through your shirt. A magnetically attached external battery would be a great feature for a phone, you could quickly swap a discharged battery for a fresh one and keep using it. When I tried to make the PinePhonePro my daily driver [3] I gave up and charging was one of the main reasons. One thing I learned from my experiment with the PinePhonePro is that the ratio of charge time to discharge time is sometimes more important than battery life and being able to quickly swap batteries without rebooting is a way of solving that. The reviewer of the AI Pin complains later in the video about battery life which seems to be partly due to wireless charging from the detachable battery and partly due to being physically small. It seems the "phablet" form factor is the smallest viable personal computer at this time.

The review glosses over what could be regarded as the two worst issues of the device. It does everything via the cloud (where "the cloud" means "a computer owned by someone I probably shouldn't trust") and it records everything. Strange that it's not getting the hate the Google Glass got.

The user interface based on laser projection of menus on the palm of your hand is an interesting concept. I'd rather have a Bluetooth-attached tablet or something for operations that can't be conveniently done with voice. The reviewer harshly criticises the laser projection interface later in the video; maybe the technology isn't yet adequate to implement this properly.

The first criticism of the device in the "review" part of the video is of the time taken to answer questions, especially when Internet connectivity is poor. His question "who designed the Washington Monument" took 8 seconds to start being answered in his demonstration. I asked the Alpaca LLM the same question running on 4 cores of an E5-2696 and it took 10 seconds to start answering and then printed the words at about speaking speed. So if we had a free software based AI device for this purpose it shouldn't be difficult to get local LLM computation with less delay than the Humane device by simply providing more compute power than 4 cores of an E5-2696v3. How does a 32 core 1.05GHz Mali G72 from 2017 (as used in the Galaxy Note 9) compare to 4 cores of a 2.3GHz Intel CPU from 2015? Passmark says that Intel CPU can do 48GFlop with all 18 cores, so 4 cores can presumably do about 10GFlop, which seems less than the claimed 20-32GFlop of the Mali G72. It seems that with the right software even older Android phones could give adequate performance for a local LLM. The Alpaca model I'm testing with takes 4.2G of RAM to run, which is usable in a Note 9 with 8G of RAM or a Pixel 8 Pro with 12G. A Pixel 8 Pro could have 4.2G of RAM reserved for an LLM and still have as much RAM for other purposes as my main laptop had as of a few months ago. I consider the speed of Alpaca on my workstation to be acceptable but not great. If we can get FOSS phones running an LLM at that speed then I think it would be great for a first version - we can always rely on newer and faster hardware becoming available.
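
To give a concrete idea of what the local setup above looks like, here is a minimal sketch of running such a query with a llama.cpp-style runner pinned to 4 threads; the binary and model file names are hypothetical, and the post doesn't say which runner was actually used.

# run a quantised Alpaca-style model locally on 4 CPU threads
./main -m ggml-alpaca-7b-q4.bin -t 4 -n 128 -p "Who designed the Washington Monument?"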

Marques notes that some of the problems are likely due to a desire to make it a separate powerful product in the future, and that if they gave it phone connectivity at the start they would have to remove that later on. I think that the real problem is that the profit motive is incompatible with good design. They want to have a product that's stand-alone and justifies the purchase price plus subscription, and that means not making it a "phone accessory", while I think that the best thing for the user is to allow it to talk to a phone, a PC, a car, and anything else the user wants. He compares it to the Apple Vision Pro which has the same issue of trying to be a stand-alone computer but not being properly capable of it.

One of the benefits that Marques cites for the AI Pin is the ability to capture voice notes. Dictaphones have been around for over 100 years and very few people have bought them, not even in the 80s when they became cheap. While almost everyone can occasionally benefit from being able to make a note of an idea when it's not convenient to write it down, there are few people who need it enough to carry a separate device, not even if that device is tiny. But a phone as a general purpose computing device with a microphone can easily be adapted to such things. One possibility would be to program a phone to start a voice note when the volume up and down buttons are pressed at the same time or when some other condition is met. Another possibility is to give the phone a hotkey function that varies by what you are doing, e.g. if bushwalking have the hotkey take a photo, or if on a flight have it take a voice note. On the Mobile Apps page on the Debian wiki I created a section for categories of apps that I think we need [4]. In that section I added the following list:

  1. Voice input for dictation
  2. Voice assistant like Google/Apple
  3. Voice output
  4. Full operation for visually impaired people

One thing I really like about the AI Pin is that it has the potential to become a really good computing and personal assistant device for visually impaired people funded by people with full vision who want to legally control a computer while driving etc. I have some concerns about the potential uses of the AI Pin while driving (as Marques stated an aim to do), but if it replaces the use of regular phones while driving it will make things less bad.

Marques concludes his video by warning against buying a product based on the promise of what it can be in future. I bought the Librem5 on exactly that promise, the difference is that I have the source and the ability to help make the promise come true. My aim is to spend thousands of dollars on test hardware and thousands of hours of development time to help make FOSS phones a product that most people can use at low price with little effort.

Another interesting review of the pin is by Mrwhostheboss [5]; one of his examples is asking the pin for advice about a chair, where without him knowing it the pin had selected a different chair in the room. He compares this to using Google's apps on a phone and seeing which item the app has selected. He also said that he doesn't want to make an order based on speech; he wants to review a page of information about it. I suspect that the design of the pin had too much input from people accustomed to asking a corporate travel office to find them a flight and not enough from people who look through the details of the results of flight booking services trying to save an extra $20. Some people might say "if you need to save $20 on a flight then a $24/month subscription computing service isn't for you", but I reject that argument. I can afford lots of computing services because I try to get the best deal on every moderately expensive thing I pay for. Another point that Mrwhostheboss makes is regarding secret SMS: you probably wouldn't want to speak an SMS you are sending to your SO while waiting for a train. He makes it clear that changing between phone and pin while sharing resources (i.e. not having a separate phone number and separate data store) is a desired feature.

The most insightful point Mrwhostheboss made was when he suggested that if the pin had come out before the smartphone then things might have all gone differently, but now anything that's developed has to be based around the expectations of phone use. This is something we need to keep in mind when developing FOSS software: there are lots of different ways that things could be done, but we need to meet the expectations of users if we want our software to be used by many people.

I previously wrote a blog post titled Considering Convergence [6] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I'm now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues. I've written a blog post about Convergence vs Transference [7].

26 Apr 2024 8:30am GMT

feedLXer Linux News

Upgrade to Ubuntu 24.04 "Noble Numbat" From Ubuntu 22.04

Here are the complete steps and precautions you need to take before upgrading to Ubuntu 24.04 LTS "Noble Numbat" from Ubuntu 22.04 LTS "Jammy Jellyfish".

26 Apr 2024 7:44am GMT

feedPlanet Debian

Russell Coker: Convergence vs Transference

I previously wrote a blog post titled Considering Convergence [1] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I'm now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues.

Currently the expected use is that if you have web pages open on Chrome on Android it's possible to instruct Chrome on the desktop to open the same page if both instances of Chrome are signed in to the same GMail account. It's also possible to view the Chrome history with CTRL-H, select "tabs from other devices" and load things that were loaded on other devices some time ago. This is very minimal support for moving work between devices and I think we can do better.

Firstly, for web browsing, the Chrome functionality is barely adequate. It requires having a heavyweight login process on all browsers that includes sharing stored passwords etc, which isn't desirable. There are many cases where moving work is desired without sharing such things; one example is using a personal device to research something for work. Also the Chrome method of sending web pages is slow and unreliable, and the viewing history method gets all closed tabs when the common case is "get the currently open tabs from one browser window" without wanting the dozens of web pages that turned out not to be interesting and were closed. This could be done with browser plugins to allow functionality similar to KDE Connect for sending tabs, and also the option of emailing a list of URLs or a JSON file that could be processed by a browser plugin on the receiving end. I can send email between my home and work addresses faster than the Chrome share to another device function can send a URL.
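
As a rough illustration of how simple the receiving end could be even without a plugin, here is a sketch that opens an emailed list of URLs in the default browser. The file names and the JSON layout (a top-level array of objects with a "url" field) are assumptions, not an existing format.

# one URL per line in tabs.txt
while read -r url; do xdg-open "$url"; done < tabs.txt
# or, for a JSON export shaped like [{"url": "..."}, ...]
jq -r '.[].url' tabs.json | xargs -n1 xdg-open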

For documents we need a way of transferring files. One possibility is to go the Chromebook route and have it all stored on the web. This means that you rely on a web-based document editing system, and the FOSS versions are difficult to manage. Using Google Docs or Sharepoint for everything is not something I consider an acceptable option. Also, for laptop use, being able to run without Internet access is a good thing.

There are a range of distributed filesystems that have been used for various purposes. I don't think any of them cater to the use case of having a phone/laptop and a desktop PC (or maybe multiple PCs) using the same files.

For a technical user it would be an option to have a script that connects to a peer system (i.e. another computer with the same accounts and access control decisions), rsyncs a directory of working files and the shell history, and then opens a shell with the HISTFILE variable, current directory, and optionally some user environment variables set to match. But this wouldn't be the most convenient thing even for technical users.
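
A minimal sketch of such a script is below, assuming identical account names and home directory layouts on both machines; the peer hostname and paths are hypothetical, and a ~/.bashrc that resets HISTFILE would override the last line.

#!/bin/sh
PEER=desktop.local
WORK="$HOME/work-in-progress"
# pull the working files and the peer's shell history
rsync -a "$PEER:$WORK/" "$WORK/"
rsync -a "$PEER:.bash_history" "$HOME/.peer_history"
# open a shell in the transferred directory using the transferred history
cd "$WORK" && HISTFILE="$HOME/.peer_history" exec bash -i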

For programs that are integrated into the desktop environment it's possible for them to be restarted on login if they were active when the user logged out. The session tracking for that has about 1/4 the functionality needed for requesting a list of open files from the application, closing the application, transferring the files, and opening it somewhere else. I think that this would be a good feature to add to the XDG setup.

The model of having programs and data attached to one computer or one network server that terminals of some sort connect to worked well when computers were big and expensive. But computers continue to get smaller and cheaper, so we need to think of a document-based use of computers that allows things to be easily transferred as convenient. Convenience is important, so the rsync-script hacks that can work for technical users won't work for most people.

26 Apr 2024 7:30am GMT

feedLXer Linux News

Revive that Old Computer with antiX Linux

In this article, you will be reading a review of antiX, and why it is such a good Linux distribution for reviving old computers.

26 Apr 2024 6:14am GMT

feedFedora People

Remi Collet: PHP version 8.2.19RC1 and 8.3.7RC1

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation (the perfect solution for such tests), and also as base packages.

RPMs of PHP version 8.3.7RC1 are available

RPMs of PHP version 8.2.19RC1 are available

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.1 is now in security mode only, so no more RC will be released.

Installation: follow the wizard instructions.

Announcements:

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Update of system version 8.3 (EL-7):

yum --enablerepo=remi-php83,remi-php83-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.2 (EL-7):

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*
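
After either update path you can confirm the result with the system PHP binary (a generic check, not part of the original instructions):

php --version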

Notice:

Software Collections (php82, php83)

Base packages (php)

26 Apr 2024 4:38am GMT

feedLinux Today

How to Install and Configure Memcached on Ubuntu 22.04

Memcached is a free and open-source memory object caching system that speeds up dynamic web applications by caching data in memory. This tutorial will show you how to install Memcached on an Ubuntu 22.04 server.

26 Apr 2024 2:00am GMT

feedOMG! Ubuntu

Ubuntu 24.04 Official Flavours Available to Download

Arriving alongside the main Ubuntu 24.04 LTS release are new versions of the official Ubuntu flavours, including Kubuntu, Xubuntu, and Ubuntu Cinnamon. What follows is a concise, top-level overview of the key new features and changes in some of the most popular Ubuntu flavours, plus the relevant download links to snag an ISO should you be tempted into trying a few flavours first-hand. Unless otherwise noted, all flavours share the same foundational footprint as the main release, e.g., Linux kernel, graphics drivers, tooling, etc. But some features, like the Flutter-based OS installer and the snap-centric App Center, aren't used in […]

26 Apr 2024 1:26am GMT

feedLinux Today

How to Install Fedora 40 Server with Screenshots

In this guide, we'll walk you through the step-by-step process of installing Fedora 40 Server, ensuring a smooth setup for your server environment.

26 Apr 2024 12:20am GMT

25 Apr 2024

feedPlanet Ubuntu

The Fridge: Ubuntu 24.04 LTS (Noble Numbat) released.

Ubuntu 24.04 LTS, codenamed "Noble Numbat", is here. This release continues Ubuntu's proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, together with the community and our partners, to introduce new features and fix bugs.

Our 10th Long Term Supported release sets a new standard in performance engineering, enterprise security and developer experience.

Ubuntu Desktop brings the Subiquity installer to an LTS for the first time. In addition to a refreshed user experience and a minimal install by default, the installer now includes experimental support for ZFS and TPM-based full disk encryption and the ability to import auto-install configurations. Post install, users will be greeted with the latest GNOME 46 alongside a new App Center and firmware-updater. Netplan is now the default for networking configuration and supports bidirectionality with NetworkManager.

Ubuntu now enables frame pointers by default on 64-bit architectures to enable CPU and off-CPU profiling for workload optimisation, alongside a suite of critical performance tools pre-installed. The Linux 6.8 kernel now enables low-latency features by default. For IoT vendors leveraging 32-bit arm hardware, our armhf build has been updated to resolve the upcoming 2038 issue by implementing 64-bit time_t in all necessary packages.

As always, Ubuntu ships with the latest toolchain versions. .NET 8 is now fully supported on Ubuntu 24.04 LTS (and Ubuntu 22.04 LTS) for the full lifecycle of the release, and OpenJDK 21 and 17 are both TCK certified to adhere to Java interoperability standards. Ubuntu 24.04 LTS ships Rust 1.75 and a simpler Rust toolchain snap framework to enable future Rust versions to be delivered to developers on this release in years to come.

The newest Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity, and Xubuntu are also being released today. More details can be found for these at their individual release notes under the Official Flavours section:

https://discourse.ubuntu.com/t/noble-numbat-release-notes/

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud and Ubuntu Core. All the remaining flavours will be supported for 3 years. Additional security support is available with ESM (Extended Security Maintenance).

To get Ubuntu 24.04 LTS

In order to download Ubuntu 24.04 LTS, visit:

https://ubuntu.com/download

Users of Ubuntu 23.10 will soon be offered an automatic upgrade to 24.04. Users of 22.04 LTS will be offered the automatic upgrade when 24.04.1 LTS is released, which is scheduled for the 15th of August. For further information about upgrading, see:

https://ubuntu.com/download/desktop/upgrade
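
Once the upgrade is offered, it can also be started from a terminal with Ubuntu's standard upgrade tool (not mentioned in the announcement itself):

sudo do-release-upgrade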

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the release notes, which document caveats and workarounds for known issues, and provide more in-depth information on the release itself. They are available at:

https://discourse.ubuntu.com/t/noble-numbat-release-notes/

Find out what's new in this release with a graphical overview:

https://ubuntu.com/desktop
https://ubuntu.com/desktop/features

If you have a question, or if you think you may have found a bug but aren't sure, you can try asking in any of the following places:

#ubuntu on irc.libera.chat
https://lists.ubuntu.com/mailman/listinfo/ubuntu-users
https://ubuntuforums.org
https://askubuntu.com
https://discourse.ubuntu.com

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways
you can participate at:

https://discourse.ubuntu.com/contribute

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, IoT, cloud, and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

https://ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

https://ubuntu.com

To sign up for future Ubuntu announcements, please subscribe to Ubuntu's very low volume announcement list at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Apr 25 15:20:52 UTC 2024 by Utkarsh Gupta on behalf of the Ubuntu Release Team

25 Apr 2024 11:47pm GMT

feedLinux Today

How to Run a Python Script on a PHP/HTML File

Discover the easiest and right way to run a Python script or Python code in your PHP or HTML file using two different methods with practical examples.

25 Apr 2024 10:00pm GMT

feedPlanet Debian

Dirk Eddelbuettel: RQuantLib 0.4.22 on CRAN: Maintenance

A new minor release 0.4.22 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian.

QuantLib is a rather comprehensice free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib updates to QuantLib version 1.34 which was just released yesterday, and deprecates use of an access point / type for price/yield conversion for bonds. We also made two minor earlier changes.

Changes in RQuantLib version 0.4.22 (2024-04-25)

  • Small code cleanup removing duplicate R code

  • Small improvements to C++ compilation flags

  • Robustify internal version comparison to accommodate RC releases

  • Adjustments to two C++ files for QuantLib 1.34

Courtesy of my CRANberries, there is also a diffstat report for the this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 Apr 2024 9:25pm GMT

feedOMG! Ubuntu

How to Enable OneDrive File Access in Ubuntu 24.04 LTS

Among the many new features in Ubuntu 24.04 LTS is the ability to access your Microsoft OneDrive files through the Nautilus file manager. No 3rd-party app downloads, no dodgy scripts to run, and no paid plans to cough up for, because this nifty feature is part of GNOME 46 (and available in any Linux distribution using it, not just the latest Ubuntu LTS). OneDrive file access works the same way as the (long-standing and popular) Google Drive integration: a Gvfs backend authorised through GNOME Online Accounts (via the Settings app), and then surfaced as an entry in the Nautilus sidebar. […]

25 Apr 2024 8:15pm GMT

feedLinuxiac

Proxmox VE 8.2 Launches with Enhanced Migration Tools

Proxmox Virtual Environment 8.2 launches featuring QEMU 8.1.5, LXC 6.0.0, ZFS 2.2.3, Ceph updates, and more.

25 Apr 2024 7:49pm GMT

feedPlanet Ubuntu

Kubuntu General News: Kubuntu 24.04 LTS Noble Numbat Released

The Kubuntu Team is happy to announce that Kubuntu 24.04 has been released, featuring the 'beautiful' KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed "Noble Numbat", Kubuntu 24.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.8-based kernel, KDE Frameworks 5.115, KDE Plasma 5.27 and KDE Gear 23.08.

Kubuntu 24.04 with Plasma 5.27.11

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, Kdevelop, Yakuake, and many many more applications are updated.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates, and known bugs be sure to read our release notes.

Download Kubuntu 24.04, or learn how to upgrade from 23.10 or 22.04 LTS.

Note: For upgrades from 23.10, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.

25 Apr 2024 4:16pm GMT

feedPlanet Ubuntu

Ubuntu Studio: Ubuntu Studio 24.04 LTS Released

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.04 LTS, code-named "Noble Numbat". This marks Ubuntu Studio's 34th release. This release is a Long-Term Support release and as such, it is supported for 3 years (36 months, until April 2027).

Since it's just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.

You can download Ubuntu Studio 24.04 LTS from our download page.

Special Notes

The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.

Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.04/beta/

Full updated information, including Upgrade Instructions, is available in the Release Notes.

Please note that upgrading from 22.04 before the release of 24.04.1, due August 2024, is unsupported.

Upgrades from 23.10 should be enabled within a month after release, so we appreciate your patience.

New This Release

All-New System Installer

In cooperation with the Ubuntu Desktop Team, we have an all-new Desktop installer. This installer uses the underlying code of the Ubuntu Server installer ("Subiquity"), which has been in use for years, with a frontend coded in "Flutter". This took a large amount of work for this release, and we were able to help a lot of other official Ubuntu flavors transition to this new installer.

Be on the lookout for a special easter egg when the graphical environment for the installer first starts. For those of you who have been long-time users of Ubuntu Studio since our early days (even before Xfce!), you will notice exactly what it is.

PipeWire 1.0.4

Now for the big one: PipeWire is now mature, and this release contains PipeWire 1.0. With PipeWire 1.0 comes the stability and compatibility you would expect from multimedia audio. In fact, at this point, we recommend PipeWire usage for Professional, Prosumer, and Everyday audio needs. At Ubuntu Summit 2023 in Riga, Latvia, our project leader Erich Eickmeyer used PipeWire to demonstrate live audio mixing with much success and has since done some audio mastering work using it. JACK developers even consider it to be "JACK 3".
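
If you want to confirm which PipeWire version your installed system is running, a generic check from a terminal (not specific to Ubuntu Studio's own tools) is:

pipewire --version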

PipeWire's JACK compatibility is configured for use out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.

However, if you would rather use straight JACK 2 instead, that's also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire's JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.

With this, we consider audio production with Ubuntu Studio so mature that it can now rival operating systems such as macOS and Windows in ease-of-use since it's ready to go out-of-the-box.

Deprecation of PulseAudio/JACK setup/Studio Controls

Due to the maturity of PipeWire, we now consider the traditional PulseAudio/JACK setup, where JACK would be started/stopped by Studio Controls and bridged to PulseAudio, deprecated. This configuration is still installable via Ubuntu Studio Audio Configuration, but we do not recommend it. Studio Controls may return someday as a PipeWire fine-tuning solution, but for now it is unsupported by the developer. For that reason, we recommend users not use this configuration. If you do, it is at your own risk and no support will be given. In fact, it's likely to be dropped for 24.10.

Ardour 8.4

While this does not represent the latest release of Ardour, Ardour 8.4 is a great release. If you would like the latest release, we highly recommend making a one-time purchase or subscribing to Ardour directly from the developers to help support this wonderful application. Also, for that reason, this will be an application we will not directly backport. More on that later.

Ubuntu Studio Audio Configuration

Ubuntu Studio Audio Configuration has undergone a UI overhaul and contains the ability to start and stop a Dummy Audio Device, which can also be configured to start or stop upon login. When assigned as the default, this will free up channels that would normally be assigned to your system audio to be assigned to a null device.

Meta Package for Music Education

In cooperation with Edubuntu, we have created a metapackage for music education. This package is installable from Ubuntu Studio Installer and includes the following packages:

New Artwork

Thanks to the work of Eylul and the submissions to the Ubuntu Studio Noble Numbat Wallpaper Contest, we have a number of wallpapers to choose from and a new default wallpaper.

Deprecation of Ubuntu Studio Backports Is In Effect

As stated in the Ubuntu 23.10 Release Announcement, the Ubuntu Studio Backports PPA is now deprecated in favor of the official Ubuntu Backports repository. However, the Backports repository only works for LTS releases and for good reason. There are a few requirements for backporting:

If you have a suggestion for an application for which to backport that meets those requirements, feel free to join and email the Ubuntu Studio Users Mailing List with your suggestion with the tag "[BPO]" at the beginning of the subject line. Backports to 22.04 LTS are now closed and backports to 24.04 LTS are now open. Additionally, suggestions must pertain to Ubuntu Studio and preferably must be applications included with Ubuntu Studio. Suggestions can be rejected at the Project Leader's discretion.

One package that is exempt from backporting is Ardour. To help support Ardour's funding, you may obtain later versions directly from them. To do so, please make a one-time purchase or subscribe to Ardour from their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2024.

We're back on Matrix

You'll notice that the menu links to our support chat and on our website will now take you to a Matrix chat. This is due to the Ubuntu community carving its own space within the Matrix federation.

However, this is not only a support chat. This is also a creativity discussion chat. You can pass ideas to each other and you're welcome to it if the topic remains within those confines. However, if a moderator or admin warns you that you're getting off-topic (or the intention for the chat room), please heed the warning.

This is a persistent connection, meaning that if you close the window (or chat) you won't lose your place; you may only need to sign back in to resume the chat.

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla's distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird also became a snap during this cycle for the maintainers to get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories in that they cannot be packaged in a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we're completely volunteer-run, we don't have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio - which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don't want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don't want or need!

Looking Toward the Future

Plasma 6

Ubuntu Studio, in cooperation with Kubuntu, will be switching to Plasma 6 during the 24.10 development cycle. Likewise, Lubuntu will be switching to LXQt 2.0 and Qt 6, so the three flavors will be cooperating to do the move.

New Look

Ubuntu Studio has been using the same theming, "Materia" (except for the 22.04 LTS release which was a re-colored Breeze theme) since 19.04. However, Materia has gone dead upstream. To stay consistent, we found a fork called "Orchis" which seems to match closely, and we will be switching to that. More on that soon.

Minimal Installation

The new system installer has the capability to do minimal installations. This was something we did not have time to implement this cycle but intend to do for 24.10. This will let users install a minimal desktop to get going and then install what they need via Ubuntu Studio Installer. This will make a faster installation process but will not make the installation .iso image smaller. However, we have an idea for that as well.

Minimal Installation .iso Image

We are going to research what it will take to create a minimal installer .iso image that will function much like the regular .iso image minus the ability to install everything and allow the user to customize the installation via Ubuntu Studio Installer. This should lead to a much smaller initial download. Unlike creating a version with a different desktop environment, the Ubuntu Technical Board has been on record as saying this would not require going through the new flavor creation process. Our friends at Xubuntu recently did something similar.

Get Involved!

A wonderful way to contribute is to get involved with the project directly! We're always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

Special Thanks

Huge special thanks for this release go to:

A Note from the Project Leader

When I started out working on Ubuntu Studio six years ago, I had a vision of making it not only the easiest Linux-based operating system for content creation, but the easiest content creation operating system… full-stop.

With the release of Ubuntu Studio 24.04 LTS, I believe we have achieved that goal. No longer do we have to worry about whether an application is JACK or PulseAudio or… whatever. It all just works! Audio applications can be patched to each other!

If an audio device doesn't depend on complex drivers (i.e. if the device is class-compliant), it will just work. If a user wishes to lower the latency or change the sample rate, we have a utility that does that (Ubuntu Studio Audio Configuration). If a user wants finer control using pure JACK via QJackCtl, they can do that too!

I honestly don't know how I would replicate this on Windows, and replicating on macOS would be much harder without downloading all sorts of applications. With Ubuntu Studio 24.04 LTS, it's ready to go and you don't have to worry about it.

Where we are now is a dream come true for me, and something I've been hoping to see Ubuntu Studio become. And now, we're finally here, and I feel like it can only get better.

-Erich Eickmeyer

25 Apr 2024 3:16pm GMT

feedLinuxiac

Ubuntu 24.04 LTS (Noble Numbat) Released, This Is What’s New

Ubuntu 24.04 LTS (Noble Numbat) is now generally available. Here's what's new for the world's most popular Linux desktop.

25 Apr 2024 3:10pm GMT

feedOMG! Ubuntu

Ubuntu 24.04 LTS Available to Download, This is What’s New

After 6 frenzied months of development the final stable Ubuntu 24.04 LTS release has arrived and is available for download. Ubuntu 24.04 LTS (codenamed 'Noble Numbat') includes a rich array of new features ranging from an enhanced desktop installer and the latest GNOME desktop to gaming improvements and a new Linux kernel. As a long-term support release Ubuntu 24.04 LTS gets 5 years of select app updates, security fixes, kernel upgrades, and other buffs, and a further 5 years of extended security coverage via Ubuntu Pro. Plus, enterprise customers can buy an additional 2 years of coverage to make […]

25 Apr 2024 2:04pm GMT

feedPlanet KDE | English

Mixing C++ and Rust for Fun and Profit: Part 3

In the two previous posts (Part 1 and Part 2), we looked at how to build bindings between C++ and Rust from scratch. However, while building a binding generator from scratch is fun, it's not necessarily an efficient way to integrate Rust into your C++ project. Let's look at some existing technologies for mixing C++ and Rust that you can easily deploy today.

bindgen

bindgen is an official tool of the Rust project that can create bindings around C headers. It can also wrap C++ headers, but there are limitations to its C++ support. For example, while you can wrap classes, they won't have their constructors or destructors automatically called. You can read more about these limitations on the bindgen C++ support page. Another quirk of bindgen is that it only allows you to call C++ from Rust. If you want to go the other way around, you have to add cbindgen to generate C headers for your Rust code.
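
For a feel of the workflow, the command-line form of both tools looks roughly like this; the header, crate, and output names are hypothetical:

# generate Rust bindings from a C/C++ header
bindgen wrapper.h -o bindings.rs
# generate a C header for a Rust crate (the reverse direction)
cbindgen --crate my_crate --output my_crate.h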

CXX

CXX is a more powerful framework for integrating C++ and Rust. It's used in some well-known projects, such as Chromium. It does an excellent job at integrating C++ and Rust, but it is not an actual binding generator. Instead, all of your bindings have to be manually created. You can read the tutorial to learn more about how CXX works.

autocxx

Since CXX doesn't generate bindings itself, if you want to use it in your project, you'll need to find a generator that wraps C++ headers with CXX bindings. autocxx is a Google project that does just that, using bindgen to generate Rust bindings around C++ headers. However, it gets better: autocxx can also create C++ bindings for Rust functions.

CXX-Qt

While CXX is one of the best C++/Rust binding generators available, it fails to address Qt users. Since Qt depends so heavily on the moc to enable features like signals and slots, it's almost impossible to use it with a general-purpose binding generator. That's where CXX-Qt comes in. KDAB has created the CXX-Qt crate to allow you to integrate Rust into your C++/Qt application. It works by leveraging CXX to generate most of the bindings but then adds a Qt support layer. This allows you to easily use Rust on the backend of your Qt app, whether you're using Qt Widgets or QML. CXX-Qt is available on Github and crates.io.

If you're interested in integrating CXX-Qt into your C++ application, let us know. To learn more about CXX-Qt, you can check out this blog.

Other options

There are some other binding generators out there that aren't necessarily going to work well for migrating your codebase, but you may want to read about them and keep an eye on them:

In addition, there are continuing efforts to improve C++/Rust interoperability. For example, Google recently announced that they are giving $1 million to the Rust Foundation to improve interoperability.

Conclusion

In the world of programming tools and frameworks, there is never a single solution that will work for everybody. However, CXX, CXX-Qt, and autocxx seem to be the best options for anyone who wants to port their C++ codebase to Rust. Even if you aren't looking to completely remove C++ from your codebase, these binding generators may be a good option for you to promote memory safety in critical areas of your application.

Have you successfully integrated Rust in your C++ codebase with one of these tools? Have you used a different tool or perhaps a different programming language entirely? Leave a comment and let us know. Memory-safe programming languages like Rust are here to stay, and it's always good to see programmers change with the times.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Mixing C++ and Rust for Fun and Profit: Part 3 appeared first on KDAB.

25 Apr 2024 7:00am GMT

Berlin mega-sprint recap

For the past 8 days I've been in Berlin for what is technically four sprints: first a two-day KDE e.V. Board of Directors sprint, and then right afterwards, the KDE Goals mega-sprint for the Eco, Accessibility, and Automation/Systematization Goals! Both were hosted in the offices of KDE Patron MBition, a great partner to KDE which uses our software both internally and in some Mercedes cars. Thanks a lot, MBition! It's been quite a week, but a productive one. So I thought I'd share what we did.

If you're a KDE e.V. member, you've already received an email recap about the Board sprint's discussion topics and decisions. Overall the organization is healthy and in great shape. Something substantive I can share publicly is that we posed for this wicked sick picture:

Moving onto the combined Goals sprint: this was in fact the first sprint for any of the KDE Goals, and having all three represented in one room attracted a unique cross-section of people from the KDE Eco crowd, usability-interested folks, and deeply technical core KDE developers.

Officially I'm the Goal Champion of the Automation & Systematization Goal, and a number of folks attended to work on those topics, ranging from adding and fixing tests to creating a code quality dashboard. Expect more blog posts from other participants regarding what they worked on!

Speaking personally, I changed the bug janitor bot to direct Debian users on old Plasma versions to Debian's own Bug tracker, as in fact the Debian folks advise their own users to do. I added an autotest to validate the change, but regrettably it caused a regression anyway, which I corrected quickly. Further investigation into the reason why this went uncaught revealed that all the autotests for the version-based bug janitor actions are faulty. I worked on fixing them but unfortunately have not met with success yet. Further efforts are needed.

In the process of doing this work, I also found that the bug janitor operates on a hardcoded list of Plasma components, which has of course drifted out of sync with reality since it was originally authored. This causes the bot to miss many bugs at the moment.

Fellow sprint participant Tracey volunteered to address this, so I helped get her set up with a development environment for the bug janitor bot so she can auto-generate the list from GitLab and sysadmin repo metadata. This is in progress and proceeding nicely.

I also proposed a merge request template for the plasma-workspace git repo, modeled on the one we currently use in Elisa. The idea is to encourage people to write better merge request descriptions, and also nudge people in the direction of adding autotests for their merge requests, or at least mentioning reviewers can test the changes. If this ends up successful, I have high hopes about rolling it out more broadly.

But I was also there for the other goals too! Joseph delivered a wonderful presentation about KDE Eco topics, which introduced my new favorite cartoon, and it got me thinking about efficiency and performance. For a while I'd been experiencing high CPU usage in KDE's NeoChat app, and with all the NeoChat developers there in the room, I took the opportunity to prod them with my pointy stick. This isn't the first time I'd mentioned the performance issue to them, but in the past no one could reproduce it and we had to drop the investigation. Well, this time I think everyone else was also thinking eco thoughts, and they really moved heaven and earth to try to reproduce it. Eventually James was able to, and it wasn't long before the issue was put six feet under. The result: NeoChat's background CPU usage is now 0% for people using Intel GPUs, down from 30%. A big win for eco and laptop battery life, which I'm already appreciating as I write this blog post in an airport disconnected from AC power.

To verify the CPU usage, just for laughs I added the Catwalk widget to my panel. It's so adorable that I haven't had the heart to remove it, and now I notice things using CPU time when they should be idle much more than I did before. More visibility for performance issues should over time add up to more fixes for them!

  • Sleeping cat showing 0.5% CPU usage
  • Running cat showing 22.5% CPU usage

Another interesting thing happens when you get a bunch of talented KDE contributors in a room: people can't help but initiate discussions about pressing topics. The result was many conversations about release schedules, dependency policy, visual design vision, and product branding. One discussion very relevant to the sprint was the lack of systematicity in how we use units for spacing in the QtQuick-based UIs we build. This resulted in a proposal that's already generating some spirited feedback.

All in all it was a happy and productive week, and after doing one of these I always feel super privileged to be able to work with as impressively talented and friendly a group of colleagues as this is:

Full disclosure: KDE e.V. paid for my travel and lodging expenses, but not my döner kebab expenses!

25 Apr 2024 12:16am GMT

24 Apr 2024

feedPlanet GNOME

Christian Hergert: Collaborating on Builder

It's no secret I have way more projects to manage than hours in the day.

I hope to rectify this by sharing more knowledge on how my projects are built. The most important project, Builder, is quite a large code-base. It is undoubtedly daunting to dive in and figure out where to start.

A preview of a few pages of the book including the cover, and two pages from the table of contents.

Here is a sort of engineer's journal (PDF) in book form about how the components fit together. The writing tries to be brief and to the point. You can always reference the source code for finer points.

There is so much more that can happen with Builder if we have regular contributors and subsystem maintainers.

We still need to bring back a new designer. We still need real strong git commit integration. We still need a physical device simulator to go with our architecture emulator. Our debugger API could use data visualizers and support for the debug-adapter-protocol. Container integration has a long tail of desired features.

You get the idea.

Lots of exciting features will happen once people dive into the code-base to learn more. Hopefully this book helps you along that journey.

24 Apr 2024 10:14pm GMT

feedLinuxiac

Nginx 1.26 Released with Experimental HTTP/3 Support

Nginx 1.26 web server debuts with HTTP/3 experimental support, per-server HTTP/2, advanced stream modules, and more.

24 Apr 2024 12:54pm GMT

feedFedora People

Peter Czanik: Using syslog-ng on multiple platforms

Your favorite Linux distribution is X. You test everything there. However, your colleagues use distro Y, and another team distro Z. Nightmares start here: the same commands install a different set of syslog-ng features, configuration defaults and use different object names in the default configuration. I ran into these problems while working with Gábor Samu on his HPC logging blog.

From this blog you can learn about some of the main differences in packaging and configuration of syslog-ng in various Linux distributions and FreeBSD, and how to recognize these when configuring syslog-ng on a different platform.

https://www.syslog-ng.com/community/b/blog/posts/using-syslog-ng-on-multiple-platforms

24 Apr 2024 12:07pm GMT

Fedora Magazine: How to rebase to Fedora Linux 40 on Silverblue

Fedora Silverblue is an operating system for your desktop built on Fedora Linux. It's excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update or rebase to Fedora Linux 40 on your Fedora Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.

Update your existing system

Prior to actually doing the rebase to Fedora Linux 40, you should apply any pending updates. Enter the following in the terminal:

$ rpm-ostree update

or install updates through GNOME Software and reboot.

Note

rpm-ostree is the underlying atomic technology that all the Fedora Atomic Desktops use. The techniques described here for Silverblue will apply to all of them with proper modifications for the appropriate desktop.

Rebasing using GNOME Software

GNOME Software shows you that there is new version of Fedora Linux available on the Updates screen.

<figure class="wp-block-image size-full is-style-default">GNOME_Software_download_screenshot</figure>

First thing to do is download the new image, so select the Download button. This will take some time. When it is done you will see that the update is ready to install.

<figure class="wp-block-image size-full">GNOME_Software_update_screenshot</figure>

Select the Restart & Upgrade button. This step will take only a few moments and the computer will restart when the update is completed. After the restart you will end up in a new and shiny release of Fedora Linux 40. Easy, isn't it?

Rebasing using terminal

If you prefer to do everything in a terminal, then this part of the guide is for you.

Rebasing to Fedora Linux 40 using the terminal is easy. First, check if the 40 branch is available:

$ ostree remote refs fedora

You should see the following in the output:

fedora:fedora/40/x86_64/silverblue

If you want to pin the current deployment (meaning that this deployment will stay as an option in GRUB until you remove it), you can do this by running this command:

# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment use the following command:

# 2 is entry position in rpm-ostree status 
$ sudo ostree admin pin --unpin 2

Next, rebase your system to the Fedora Linux 40 branch.

$ rpm-ostree rebase fedora:fedora/40/x86_64/silverblue
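
Before rebooting, you can optionally check that the new deployment is staged (this check is informative only and not required):

$ rpm-ostree status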

Finally, the last thing to do is restart your computer and boot to Fedora Linux 40.

How to roll back

If anything bad happens (for instance, if you can't boot to Fedora Linux 40 at all) it's easy to go back. At boot time, pick the entry in the GRUB menu for the version prior to Fedora Linux 40 and your system will start in that previous version rather than Fedora Linux 40. If you don't see the GRUB menu, try to press ESC during boot. To make the change to the previous version permanent, use the following command:

$ rpm-ostree rollback

That's it. Now you know how to rebase Fedora Silverblue to Fedora Linux 40 and roll back. So why not do it today?

FAQ

Because there are similar questions in the comments for each blog post about rebasing to a newer version of Silverblue, I will try to answer them in this section.

Question: Can I skip versions during rebase of Fedora? For example from Fedora 38 Silverblue to Fedora 40 Silverblue?

Answer: Although it could sometimes be possible to skip versions during a rebase, it is not recommended. You should always update one version at a time (38->39 for example) to avoid unnecessary errors.

Question: I have rpm-fusion layered and I get errors during rebase. How should I do the rebase?

Answer: If you have rpm-fusion layered on your Silverblue installation, you should do the following before rebase:

$ rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release

After doing this you can follow the guide in this blog post.

Question: Could this guide be used for other ostree editions (Fedora Atomic Desktops) as well, like Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic), …?

Answer: Yes, you can follow the Rebasing using the terminal part of this guide for every Fedora Atomic Desktop. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/40/x86_64/kinoite

24 Apr 2024 8:00am GMT

feedPlanet GNOME

Miguel de Icaza: 23 Apr 2024

Embeddable Game Engine

Many years ago, when working at Xamarin, where we were building cross-platform libraries for mobile developers, we wanted to offer both 2D and 3D gaming capabilities for our users in the form of adding 2D or 3D content to their mobile applications.

For 2D, we contributed and developed assorted Cocos2D-inspired libraries.

For 3D, the situation was more complex. We funded a few over the years, and we contributed to others over the years, but nothing panned out (the history of this is worth a dedicated post).

Around 2013, we looked around, and there were two contenders at the time: one was Urho, an embeddable engine with many cute features but not great UI support, and the other was Godot, which had a great IDE but did not support being embedded.

I reached out to Juan at the time to discuss whether Godot could be turned into such an engine. While I tend to take copious notes of all my meetings, those notes were sadly lost as part of the Microsoft acquisition, but from what I can remember Juan told me "Godot is not what you are looking for" on two counts: there were no immediate plans to turn it into an embeddable library, and it was not as advanced as Urho. So he recommended that I go with Urho.

We invested heavily in binding Urho and created UrhoSharp, which went on to become a great 3D library for our C# users; it worked not only on every desktop and mobile platform, but we also did a ton of work to make it great for AR and VR headsets. Sadly, Microsoft's management left UrhoSharp to die.

Then, the maintainer of Urho stepped down, and Godot became one of the most popular open-source projects in the world.

Last year, @Faolan-Rad contributed a patch to Godot to turn it into a library that could be embedded into applications. I used this library to build SwiftGodotKit and have been very happy with it ever since - allowing people to embed Godot content into their application.

However, the patch had severe limitations; it could only ever run one Godot game as an embedded system and could not do much more. The folks at Smirk Software wanted to take this further. They wanted to host independent Godot scenes in their app and have more control over those, so they could sprinkle Godot content to their heart's content throughout their mobile app (demo).

They funded some initial work on this and hired Gergely Kis's company to do it.

Gergely demoed this work at GodotCon last year. I came back from GodotCon very excited and decided to turn my prototype of Godot on the iPad into a complete product.

One of the features that I needed was the ability to embed chunks of Godot in discrete components in my iPad UI, so we worked with Gergely to productize and polish this patch for general consumption.

Now, there is a complete patch under review to allow people to embed arbitrary Godot scenes into their apps. For SwiftUI users, this means that you can embed a Godot scene into a View and display and control it at will.

Hopefully, the team will accept this change into Godot, and once this is done, I will update SwiftGodotKit to get these new capabilities to Swift users (bindings for other platforms and languages are left as an exercise to the reader).

It only took a decade after talking to Juan, but I am back firmly in Godot land.

24 Apr 2024 1:41am GMT

23 Apr 2024

feedPlanet GNOME

Development blog for GNOME Shell and Mutter: Notifications in 46 and beyond

One of the things we're tackling as part of the STF infrastructure initiative is improving notifications. Other platforms have advanced significantly in this area over the past decade, while we still have more or less the same notifications we had since the early GNOME 3 days, both in terms of API and feature set. There's plenty to do here 🙂

The notification drawer on GNOME 45

Modern needs

As part of the effort to port GNOME Shell to mobile, Jonas looked into the delta between what we currently support and what we'd need for a more modern notification experience. Some of these limitations are specific to GNOME's implementation, while others are relevant to all desktops.

Tie notifications to apps

As of GNOME 45 there's no clear identification on notification bubbles which app they were sent by. Sometimes it's hard to tell where a notification is coming from, which can be annoying when managing notifications in Settings. This also has potential security implications, since the lack of identification makes it trivial to impersonate other apps.

We want all notifications to be clearly identified as coming from a specific app.

Global notification sounds

GNOME Shell can't play notification sounds in all cases, depending on the API the app is using (see below). Apps not primarily targeting GNOME Shell directly tend to play sounds themselves because they can't rely on the system always doing it (it's an optional feature of the XDG Notification API which different desktops handle differently). This works, but it's messy for app developers because it's hard to test and they have to implement a fallback sound played by the app. From a user perspective it's annoying that you can't always tell where sounds are coming from because they're not necessarily tied to a notification bubble. There's also no central place to manage the notification behavior and it doesn't respect Do Not Disturb.

Notification grouping

Currently all notifications are just added to a single chronological list, which gets messy very quickly. In order to limit the length of the list we only keep the latest 3 notifications for every app, so notifications can disappear before you have a chance to act on them.

Other platforms solve this by grouping notifications by app, or even by message thread, but we don't have anything like this at the moment.

Notifications grouped by app on the iOS lock screen

Expand media support

Currently each notification bubble can only contain one (small) image. It's mostly used for user avatars (for messages, emails, and the like), but sometimes also for actual content (e.g. a thumbnail for the image someone sent).

Ideally what we want is to be able to show larger images in addition to avatars, as the actual content of the notification.

As of GNOME 45 we only have a single slot for images on notifications, and it's too small for actual content.
Other platforms have multiple slots (app icon, user avatar, and content image), and media can be expanded to much larger sizes.

There's also currently no way to include descriptive text for images in notifications, so they are inaccessible to screen readers. This isn't as big a deal with the current icons since they're small and mostly used for ornamental purposes, but will be important when we add larger images in the body.

Updating notification content

It's not possible for apps to update the content of notifications they sent earlier. This is needed to show progress bars in notifications, or to update the text if a chat message was modified.

How do we get there?

Unfortunately, it turns out that improving notifications is not just a matter of standardizing a few new features and implementing them in GNOME Shell. The way notifications work today has grown organically over the years and the status quo is messy. There are three different APIs used by apps today: XDG Notification, Gio.Notification, and XDG Portal.

How different notification APIs are used today

XDG Notification

This is the Freedesktop specification for a DBus interface for apps to send notifications to the system. It's the oldest notification API still in use. Other desktops mostly use this API, e.g. KDE's KNotification implements this spec.

Somewhat confusingly, this standard has never actually been finalized and is still marked as a draft today, despite not having seen significant changes in the past decade.
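
As a rough illustration (not taken from the spec itself, and with a placeholder app name and message), this is what a minimal call to this interface looks like from Python with PyGObject; real applications would typically use a wrapper such as libnotify instead of hand-rolling the D-Bus call:

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
reply = bus.call_sync(
    "org.freedesktop.Notifications",      # well-known name of the notification server
    "/org/freedesktop/Notifications",     # object path
    "org.freedesktop.Notifications",      # interface
    "Notify",
    GLib.Variant("(susssasa{sv}i)", (
        "example-app",        # app_name: a free-form string, not tied to a .desktop file
        0,                    # replaces_id: 0 means "create a new notification"
        "",                   # app_icon
        "Build finished",     # summary
        "All tests passed.",  # body
        [],                   # actions
        {},                   # hints (urgency, sounds, ... are all optional)
        -1,                   # expire_timeout: -1 lets the server decide
    )),
    None, Gio.DBusCallFlags.NONE, -1, None,
)
print("server-assigned notification id:", reply.unpack()[0])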

Gio.Notification

This is an API in GLib/Gio to send notifications, so it's only used by GTK apps. It abstracts over different OS notification APIs, primarily the XDG one mentioned above, a private GNOME Shell API, the portal API, and Cocoa (macOS).

The primary one being used is the private DBus interface with GNOME Shell. This API was introduced in the early GNOME 3 days because the XDG standard API was deemed too complicated and was missing some features (in particular notifications were not tied to a specific app).

When using Gio.Notification, apps can't know which backend is used, or how a notification will be displayed or behave. For example, notifications can only persist after the app is closed if the private GNOME Shell API is used. These differences are specific to GNOME Shell, since the private API is only implemented there.
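
As a rough sketch (using a made-up application ID), this is what the Gio.Notification path looks like from Python; note that the application has no way of telling which backend Gio ends up talking to:

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

# Hypothetical application ID; send_notification() only works once the
# application has registered on the session bus, i.e. from within activate.
app = Gio.Application(application_id="org.example.Notifier")

def on_activate(application):
    note = Gio.Notification.new("Build finished")
    note.set_body("All tests passed.")
    note.set_priority(Gio.NotificationPriority.NORMAL)
    # Which backend this goes to (private GNOME Shell API, portal, XDG, ...)
    # is decided by Gio at runtime and is invisible to the app.
    application.send_notification("build-status", note)

app.connect("activate", on_activate)
app.run(None)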

XDG Portal

XDG portals are secure, standardized system APIs for the Linux desktop. They were introduced as part of the push for app sandboxing around Flatpak, but can (and should) be used by non-sandboxed apps as well.

The XDG notification portal was inspired by the private GNOME Shell API, with some additional features from the XDG API mixed in.

XDG portals consist of a frontend and a backend. In the case of the notification portal, apps talk to the frontend using the portal API, while the backend talks to the system notification API. Backends are specific to the desktop environment, e.g. GNOME or KDE. On GNOME, the backend uses the private GNOME Shell API when possible.
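
To make the comparison concrete, here is a minimal sketch of a direct call to the notification portal frontend from Python (the notification ID and content are purely illustrative; real apps would normally go through Gio, libportal or ashpd instead). Note that the portal identifies the calling app itself, via its sandbox or bus credentials:

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
bus.call_sync(
    "org.freedesktop.portal.Desktop",      # portal frontend
    "/org/freedesktop/portal/desktop",
    "org.freedesktop.portal.Notification",
    "AddNotification",
    GLib.Variant("(sa{sv})", (
        "build-status",                    # app-chosen notification ID
        {
            "title": GLib.Variant("s", "Build finished"),
            "body": GLib.Variant("s", "All tests passed."),
            "priority": GLib.Variant("s", "normal"),
        },
    )),
    None, Gio.DBusCallFlags.NONE, -1, None,
)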

The plan

From the GNOME Shell side we have the XDG API (used by non-GNOME apps), and the private API (used via Gio.Notification by GNOME apps). From the app side we additionally have the XDG portal API. Neither of these can easily supersede the others, because they all have different feature sets and are widely used. This makes improving our notifications tricky, because it's not obvious which of the APIs we should extend.

After several discussions over the past few months we now have consensus that it makes the most sense to invest in the XDG portal API. Portals are the future of system APIs on the free desktop, and enable app sandboxing. Neither of the other APIs can fill this role.

Our plan for notification APIs going forward: Focus on the portal API

This requires work in a number of different modules, including the XDG portal spec, the XDG portal backend for GNOME, GNOME Shell, and client libraries such as Gio.Notification (in GLib), libportal, libnotify, and ashpd.

In the XDG portal spec, we are adding support for a number of missing features:

  • Tying notifications to apps
  • Grouping by message thread
  • Larger images in the notification body
  • Special notifications for e.g. calls and alarms
  • Clearing up some instances of undefined behavior (e.g. markup in the body, playing sounds, whether to show notifications on the lock screen, etc.)

This is the draft XDG desktop portal proposal for the spec changes.

On the GNOME Shell side, these are the primary things we're doing (some already done in 46):

  • Cleanups and refactoring to make the code easier to work on
  • Improve keyboard navigation and screen reader accessibility
  • Header with app name and icon
  • Show full notification body and buttons in the drawer
  • Larger notification icons (e.g. user avatars on chat notifications)
  • Group notifications from the same app as a stack
  • Allow message threads to be grouped in a single notification bubble
  • Larger images in the notification body
Mockups of what we'd ideally want, including grouping by app, threading, etc.

There are also animated mockups for some of this, courtesy of Jakub Steiner.

The long-term goal is for apps to switch to the portal API and deprecate both of the others as application-facing APIs. Internally we will still need something to communicate between the portal backend and GNOME Shell, but this isn't public API so we're much more flexible here. We might expand either the XDG API or the private GNOME Shell protocol for this purpose, but it has not been decided yet how we'll do this.

What we did in GNOME 46

When we started the STF project late last year we thought we could just pull the trigger on a draft proposal from Jonas for an API with the new capabilities needed for mobile. However, as we started discussing things in more detail we realized that this was the wrong place to start. GNOME Shell already didn't implement a number of features that are in the XDG notification spec, so standardizing new features was not the main blocker.

The code around notifications in GNOME Shell has grown historically and has seen multiple major UI redesigns since GNOME 3.0. Additional complexity comes from the fact that we try to avoid breaking extensions, which means it's difficult to e.g. change function names or signatures. Over time this has resulted in technical debt, such as weird anachronistic structures and names. It was also not using many of the more recent GJS features which didn't exist yet when this code was written originally.

Anyone remember that notifications used to be on the bottom? This is what they looked like in GNOME 3.6 (2012).

As a first step we restructured and cleaned up legacy code, ported it to the most recent GJS features, updated the coding style, and so on. This unfortunately means extensions need to be updated, but it puts us on much firmer ground for the future.

With this out of the way we added the first batch of features from our list above, namely adding notification headers, expanding notifications in the drawer, larger icons, and some style fixes to icons. We also fixed a very annoying issue with "App is ready" notifications not working as expected when clicking a notification (!3198 and !3199).

We also worked on a few other things that didn't make it in time for 46, most notably grouping notifications by app (which there's a draft MR for), and additionally grouping them by thread (prototype only).

Throughout the cycle we also continued to discuss the portal spec, as mentioned above. There are MRs against XDG desktop portal and the libportal client library implementing the spec changes. There's also a draft implementation for the GTK portal backend.

Future work

With all the groundwork laid in GNOME 46 and the spec draft mostly ready we're in a good position to continue iterating on notifications in 47 and beyond. In GNOME 47 we want to add some of the first newly spec'd features, in particular notification sounds, markup support in the body, and display hints (e.g. showing on the lock screen or not).

We also want to continue work on the UI to unlock even more improvements in the future. In particular, grouping by app will allow us to drop the "only keep 3 notifications per app" behavior and will generally make notifications easier to manage, e.g. by allowing you to dismiss all notifications from a given app. We're also planning to work on improving keyboard navigation and ensuring all content is accessible to screen readers.

Due to the complex nature of the UI for grouping by app, and the many moving parts involved in moving the spec forward, it's unclear whether we'll be able to do more than this within the scope of STF and the 47 cycle. This means that additional features that require the new spec and/or lots of UI work, such as grouping by thread and custom UI for call or alarm notifications, will probably be 48+ material.

Conclusion

As we hope this post has illustrated, notifications are way more complex than they might appear. Improving them requires untangling decades of legacy stuff across many different components, coordinating with other projects, and engaging with standards bodies. That complexity has made this hard to work on for volunteers, and there has not been any recent corporate interest in the area, which is why it has been stagnant for some time.

The Sovereign Tech Fund investment has allowed us to take the time to properly work through the problem, clean up technical debt, and make a plan for the future. We hope to leverage this momentum over the coming releases, for a best-in-class notification experience on the free desktop. Stay tuned 🙂

23 Apr 2024 1:01pm GMT

18 Apr 2024

feedKernel Planet

Pete Zaitcev: sup Python you okay bro

What do you think this does:

class A(object):
    def aa(self):
        return 'A1'

class A(object):
    def aa(self):
        return 'A2'

a = A()
print("%s" % a.aa())

It prints "A2".

But before you think "what's the big deal, the __dict__ of A is getting updated", how about this:

class A(object):
    def aa(self):
        return 'A1'

class A(object):
    def bb(self):
        return 'A2'

a = A()
print("%s" % a.aa())

This fails with "AttributeError: 'A' object has no attribute 'aa'".

Apparently, the latter definition replaces the former completely. This is darkly amusing.
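
One way to see what is going on (an illustrative variation on the snippets above, not from the original post): the second class statement creates a brand-new class object and merely rebinds the name A to it, so nothing gets merged, and the first class survives only if you keep another reference to it.

class A(object):
    def aa(self):
        return 'A1'

Old = A           # keep a reference to the first class object

class A(object):
    def bb(self):
        return 'A2'

print(Old().aa())  # prints A1 - the first class still exists under another name
print(A().bb())    # prints A2 - the name A now refers to a completely new class
print(Old is A)    # prints False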

Python 3.12.2

18 Apr 2024 2:44am GMT

16 Apr 2024

feedKernel Planet

Pete Zaitcev: Trailing whitespace in vim

Problem:
When copying from tmux in gnome-terminal, the text is full of trailing whitespace. How do I delete it in gvim?

Solution:
/ \+$

Obviously.

This is an area where tmux is a big regression from screen. Too bad.

16 Apr 2024 8:26pm GMT

15 Apr 2024

feedKernel Planet

Pete Zaitcev: Boot management magic in Fedora 39

Problem: After an update to F39, a system continues to boot F38 kernels

The /bin/kernel-install generates entries in /boot/efi/loader/entries instead of /boot/loader/entries. Also, they are in BLS Type 1 format, and not in the legacy GRUB format. So I cannot copy them over.

Solution:
[root@chihiro zaitcev]# dnf install ostree
[root@chihiro zaitcev]# rm -rf /boot/efi/$(cat /etc/machine-id) /boot/efi/loader/

I've read a bunch of docs and the man page for kernel-install(8), but they are incomprehensible. Still, the key insight was that all that Systemd stuff loves to autodetect things by finding this directory or that.

The way to test is:
[root@chihiro zaitcev]# /bin/kernel-install -v add 6.8.5-201.fc39.x86_64 /lib/modules/6.8.5-201.fc39.x86_64/vmlinuz

15 Apr 2024 5:51pm GMT

feedPlanet Arch Linux

Arch Linux 2024 Leader Election Results

Recently we held our leader election, and the previous Project Leader Levente "anthraxx" Polyák ran again while no other people were nominated for the role. As per our election rules he is re-elected for a new term. The role of the project lead within Arch Linux is connected to a few responsibilities regarding decision making (when no consensus can be reached), handling financial matters with SPI, and overall project management tasks. Congratulations to Levente and all the best wishes for another successful term! 🥳

15 Apr 2024 12:00am GMT

10 Apr 2024

feedPlanet Gentoo

Gentoo Linux becomes an SPI associated project

SPI Inc. logo

As of this March, Gentoo Linux has become an Associated Project of Software in the Public Interest, see also the formal invitation by the Board of Directors of SPI. Software in the Public Interest (SPI) is a non-profit corporation founded to act as a fiscal sponsor for organizations that develop open source software and hardware. It provides services such as accepting donations, holding funds and assets, … SPI qualifies for 501(c)(3) (U.S. non-profit organization) status. This means that all donations made to SPI and its supported projects are tax deductible for donors in the United States. Read on for more details…

Questions & Answers

Why become an SPI Associated Project?

Gentoo Linux, as a collective of software developers, is pretty good at being a Linux distribution. However, becoming a US federal non-profit organization would increase the non-technical workload.

The current Gentoo Foundation has bylaws restricting its behavior to that of a non-profit; it is a recognized non-profit only in New Mexico, but a for-profit entity at the US federal level. A direct conversion to a federally recognized non-profit would be unlikely to succeed without significant effort and cost.

Finding Gentoo Foundation trustees to take care of the non-technical work is an ongoing challenge. Robin Johnson (robbat2), our current Gentoo Foundation treasurer, spent a huge amount of time and effort with getting bookkeeping and taxes in order after the prior treasurers lost interest and retired from Gentoo.

For these reasons, Gentoo is moving the non-technical organization overhead to Software in the Public Interest (SPI). As noted above, SPI is already recognized at the US federal level as a full-fledged non-profit 501(c)(3). It also handles several projects of similar type and size (e.g., Arch and Debian) and as such has exactly the experience and background that Gentoo needs.

What are the advantages of becoming an SPI Associated Project in detail?

Financial benefits to donors:

Financial benefits to Gentoo:

Non-financial benefits to Gentoo:

[1] Presently, almost no donations to the Gentoo Foundation provide a tax benefit for donors anywhere in the world. Becoming a SPI Associated Project enables tax benefits for donors located in the USA. Some other countries do recognize donations made to non-profits in other jurisdictions and provide similar tax credits.

[2] This also depends on jurisdictions and local tax laws of the donor, and is often tied to tax deductions.

[3] The Gentoo Foundation currently pays $1500/year in tax preparation costs.

[4] In recent fiscal years, through careful budgetary planning on the part of the Treasurer and advice of tax professionals, the Gentoo Foundation has used depreciation expenses to offset taxes owing; however, this is not a sustainable strategy.

[5] Non-profits are eligible for reduced fees, e.g., of Paypal (savings of 0.9-1.29% per donation) and other services.

[6] Some sponsorship programs are only available to verified 501(c)(3) organizations

Can I still donate to Gentoo, and how?

Yes, of course, and please do so! To start, you can go to SPI's Gentoo page and scroll down to the Paypal and Click&Pledge donation links. More information and more ways to donate will be set up soon. Keep in mind, donations to Gentoo via SPI are tax-deductible in the US!

In time, Gentoo will contact existing recurring donors, to aid transitions to SPI's donation systems.

What will happen to the Gentoo Foundation?

Our intention is to eventually transfer the existing assets to SPI and dissolve the Gentoo Foundation. The precise steps needed on the way to this objective are still under discussion.

Does this affect in any way the European Gentoo e.V.?

No. Förderverein Gentoo e.V. will continue to exist independently. It is also recognized to serve public-benefit purposes (§ 52 Fiscal Code of Germany), meaning that donations are tax-deductible in the E.U.

10 Apr 2024 5:00am GMT

08 Apr 2024

feedPlanet Arch Linux

Ratatui Received Funding: What's Next?

Let's delve into the realm of open source funding along with Ratatui's journey.

08 Apr 2024 12:00am GMT

07 Apr 2024

feedPlanet Arch Linux

Increasing the default vm.max_map_count value

The vm.max_map_count parameter will be increased from the default value of 65530 to 1048576. This change should help address performance, crash, or start-up issues for a number of memory-intensive applications, particularly for (but not limited to) some Windows games played through Wine/Steam Proton. Overall, end users should have a smoother experience out of the box; no concerns about potential downsides were expressed in the related proposal on the arch-dev-public mailing list. This vm.max_map_count increase is introduced in the 2024.04.07-1 release of the filesystem package and will be effective right after the upgrade. Before upgrading, in case you are already setting your own value for that parameter in a sysctl.d configuration file, either remove it (to switch to the new default value) or make sure your configuration file will be read with a higher priority than the /usr/lib/sysctl.d/10-arch.conf file (to supersede the new default value).

07 Apr 2024 12:00am GMT

01 Apr 2024

feedPlanet Gentoo

The interpersonal side of the xz-utils compromise

While everyone is busy analyzing the highly complex technical details of the recently discovered xz-utils compromise that is currently rocking the internet, it is worth looking at the underlying non-technical problems that make such a compromise possible. A very good write-up can be found on the blog of Rob Mensching...

"A Microcosm of the interactions in Open Source projects"

01 Apr 2024 2:54pm GMT

15 Mar 2024

feedPlanet Gentoo

Optimizing parallel extension builds in PEP517 builds

The distutils (and therefore setuptools) build system supports building C extensions in parallel through the use of the -j (--parallel) option, passed either to the build_ext or the build command. Gentoo's distutils-r1.eclass has always passed these options to speed up builds of packages that feature multiple C files.

However, the switch to PEP517 build backend made this problematic. While the backend uses the respective commands internally, it doesn't provide a way to pass options to them. In this post, I'd like to explore the different ways we attempted to resolve this problem, trying to find an optimal solution that would let us benefit from parallel extension builds while preserving minimal overhead for packages that wouldn't benefit from it (e.g. pure Python packages). I will also include a fresh benchmark results to compare these methods.

The history

The legacy build mode utilized two ebuild phases: the compile phase during which the build command was invoked, and the install phase during which install command was invoked. An explicit command invocation made it possible to simply pass the -j option.

When we initially implemented the PEP517 mode, we simply continued calling esetup.py build, prior to calling the PEP517 backend. The former call built all the extensions in parallel, and the latter simply reused the existing build directory.

This was a bit ugly, but it worked most of the time. However, it suffered from a significant overhead from calling the build command. This meant significantly slower builds in the vast majority of packages that did not feature multiple C source files that could benefit from parallel builds.

The next optimization was to replace the build command invocation with more specific build_ext. While the former also involved copying all .py files to the build directory, the latter only built C extensions - and therefore could be pretty much a no-op if there were none. As a side effect, we've started hitting rare bugs when custom setup.py scripts assumed that build_ext is never called directly. For a relatively recent example, there is my pull request to fix build_ext -j… in pyzmq.

I've followed this immediately with another optimization: skipping the call if there were no source files. To be honest, the code started looking messy at this point, but it was an optimization nevertheless. For the no-extension case, the overhead of calling esetup.py build_ext was replaced by the overhead of calling find to scan the source tree. Of course, this still had some risk of false positives and false negatives.

The next optimization was to call build_ext only if there were at least two files to compile. This mostly addressed the added overhead for packages building only one C file - but of course it couldn't resolve all false positives.

One more optimization was to make the call conditional to DISTUTILS_EXT variable. While the variable was introduced for another reason (to control adding debug flag), it provided a nice solution to avoid both most of the false positives (even if they were extremely rare) and the overhead of calling find.

The last step wasn't mine. It was Eli Schwartz's patch to pass build options via DIST_EXTRA_CONFIG. This provided the ultimate optimization - instead of trying to hack around a build_ext call, we were finally able to pass the necessary options to the PEP517 backend. Needless to say, it meant not only no false positives and no false negatives, but it also effectively eliminated almost all of the overhead in all cases (except for the cost of writing the configuration file).
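
For illustration, here is a rough sketch of that approach (the file path, job count, and exact option spelling are mine rather than copied from the eclass): write an extra distutils configuration file that enables parallel build_ext, and point the build at it via DIST_EXTRA_CONFIG before invoking the PEP517 frontend.

import os
import subprocess
import tempfile

# Assumed config syntax: distutils reads command options from INI-style
# sections named after the command, so "parallel" under [build_ext]
# corresponds to the -j/--parallel flag.
with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as cfg:
    cfg.write("[build_ext]\nparallel = 12\n")

env = dict(os.environ, DIST_EXTRA_CONFIG=cfg.name)
# The same PEP517 build invocation as used for the benchmarks below.
subprocess.run(["python3.12", "-m", "build", "-nwx"], env=env, check=True)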

The timings

Table 1. Benchmark results for various modes of package builds

                            Django 5.0.3               Cython 3.0.9
Serial PEP517 build         5.4 s                      46.7 s
build + PEP517              3.1 s + 5.3 s = 8.4 s      20.8 s + 2.7 s = 23.5 s
build_ext + PEP517          0.6 s + 5.4 s = 6 s        20.8 s + 2.7 s = 23.5 s
find + build_ext + PEP517   0.06 s + 5.4 s = 5.5 s     20.9 s + 2.7 s = 23.6 s
Parallel PEP517 build       5.4 s                      22.8 s

Figure 1. Benchmark times plot

For a pure Python package (django here), the table clearly shows how successive iterations have reduced the overhead of parallel build support, from roughly 3 seconds in the earliest approach (8.4 s total build time) down to the same 5.4 s as the regular PEP517 build.

For Cython, all but the ultimate solution result in roughly 23.5 s total, half of the time needed for a serial build (46.7 s). The ultimate solution saves another 0.8 s on the double invocation overhead, giving the final result of 22.8 s.

Test data and methodology

The methods were tested against two packages:

  1. Django 5.0.3, representing a moderate size pure Python package, and
  2. Cython 3.0.9, representing a package with a moderate number of C extensions.

Python 3.12.2_p1 was used for testing. The timings were done using time command from bash. The results were averaged from 5 warm cache test runs. Testing was done on AMD Ryzen 5 3600, with pstates boost disabled.

The PEP517 builds were performed using the following command:

python3.12 -m build -nwx

The remaining commands and conditions were copied from the eclass. The test scripts, along with the results, spreadsheet and plot source can be found in the distutils-build-bench repository.

15 Mar 2024 3:41pm GMT

02 Feb 2024

feedPlanet Maemo

libSDL2 and VVVVVV for the Wii

Just a quick update on something that I've been working on in my free time.

I recently refurbished my old Nintendo Wii, and for some reason I cannot yet explain (not even to myself) I got into homebrew programming and decided to port libSDL (the 2.x version -- a 1.x port already existed) to it. The result of this work is already available via the devkitPro distribution, and although I'm sure there are still many bugs, it's overall quite usable.

In order to prove it, I ported the game VVVVVV to the Wii:

During the process I had to fix quite a few bugs in my libSDL port and in a couple of other libraries used by VVVVVV, which will hopefully make it easier to port more games. There's still an issue that bothers me, where the screen resolution doesn't seem to be fully supported by my TV (or is it the HDMI adaptor's fault?), resulting in a few pixels being cut off at the top and at the bottom of the screen. But unless you are a perfectionist, it's a minor issue.

In case you have a Wii to spare, or wouldn't mind playing on the Dolphin emulator, here's the link to the VVVVVV release. Have fun! :-)


02 Feb 2024 5:50pm GMT

30 Nov 2023

feedPlanet Maemo

Maemo Community e.V. - Invitation to the General Assembly 2023


Dear Member,

The meeting will be held on Friday, December 29th 2023 at 12:00 CET on irc.libera.chat channel #maemo-meeting.

Unless any further issues are raised, the agenda includes the following topics:
1. Welcome by the Chairman of the Board
2. Determination of the proper convocation and the quorum of the General Assembly
3. Acceptance of the annual report for the fiscal year and actions of the Executive
6. Any other business

Requests for additions to the agenda must be submitted to the Board in writing one week prior to the meeting (§ 9.2 of the Statutes).

On Behalf of the Maemo Council, Jussi Ohenoja


30 Nov 2023 8:52am GMT

15 Nov 2023

feedPlanet Maemo

stb_image_resize2.h – performance

Recently there was a large rework of the STB single-file image_resize library (STBIR), bumping it to 2.0. While v1 was really slow and merely usable if you needed to quickly get some code running, the 2.0 rewrite claims to be more considerate of performance by using SIMD. So let's put it to the test.

As references, I chose the moderately optimized C only implementation of Ogre3D and the highly optimized SIMD implementation in OpenCV.

Below you find the time to scale a 1024x1024px byte image to 512x512px. All libraries were set to linear interpolation. The time is the accumulated time for 200 runs.

                 RGB       RGBA
Ogre3D 14.1.2    660 ms    668 ms
STBIR 2.01       632 ms    690 ms
OpenCV 4.8       245 ms    254 ms

For the RGBA test, STBIR was set to the STBIR_4CHANNEL pixel layout. All libraries were compiled with -O2 -msse. Additionally, OpenCV could dispatch AVX2 code. Enabling AVX2 with STBIR actually decreased performance.
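
If you want to roughly cross-check the OpenCV numbers from Python (bearing in mind this goes through the cv2 bindings rather than the C++ calls timed above, so the absolute numbers will differ), the measured operation boils down to something like this:

import time
import numpy as np
import cv2

# 1024x1024 byte image, 3 channels for the RGB case (4 for RGBA)
img = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)

start = time.perf_counter()
for _ in range(200):  # accumulate over 200 runs, as in the table above
    out = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
elapsed = time.perf_counter() - start
print(f"200 runs: {elapsed * 1000:.0f} ms")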

Note that while STBIR has no performance advantage over a C only implementation for the simple resizing case, it offers some neat features if you want to handle SRGB data or non-premultiplied alpha.


15 Nov 2023 1:50pm GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, this new OCTOI hub setup was installed. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank anyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than fit on one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter which card they are on.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT