02 Feb 2023

Linux Today

How to Install Drupal With Docker on Ubuntu 22.04

Drupal is an open-source content management system written in PHP. Here's how to install it using Docker on an Ubuntu 22.04 server.

02 Feb 2023 9:00pm GMT

Linuxiac

elementary OS 7 Takes Its Place Among the Best Linux Desktops

The anticipated elementary OS 7 "Horus" release is here, continuing to compete for the prize of best Linux desktop. Here's our review!

02 Feb 2023 8:59pm GMT

LXer Linux News

useradd Vs. adduser

Linux is a popular open-source operating system that runs on a variety of hardware platforms, including desktops, servers, and smartphones. One of the key features of Linux is the command-line interface (CLI), which allows users to perform a wide range of tasks using text-based commands.

02 Feb 2023 8:21pm GMT

Linux Today

8 Best Window Managers for Linux

Want to organize your windows and use all the screen space you have? These window managers for Linux should come in handy!

02 Feb 2023 7:00pm GMT

How to Delete Files With Specific Extensions From the Command Line

Here's how you can delete a large number of files with the same extension, or matching a similar pattern, that you need to remove from your system.

02 Feb 2023 6:00pm GMT

John the Ripper: Password Cracking Tutorial and Review

John the Ripper is a popular open-source password cracking tool that can be used to perform brute-force attacks. Learn more here.

02 Feb 2023 5:00pm GMT

Kernel Planet

Linux Plumbers Conference: Preliminary Dates and Location for LPC2023

The 2023 LPC PC is pleased to announce that we've begun exclusive negotiations with the Omni Hotel in Richmond, VA to host Plumbers 2023 from 13-15 November. Note: These dates are not yet final (nor is the location; we have had one failure at this stage of negotiations from all the Plumbers venues we've chosen). We will let you know when this preliminary location gets finalized (please don't book irrevocable travel until then).

The November dates were the only ones that currently work for the venue, but Richmond is on the same latitude as Seville in Spain, so it should still be nice and warm.

02 Feb 2023 4:18pm GMT

Linux Today

KDE Gear 22.12.2 Brings Improvements to Dolphin, Elisa, Spectacle

KDE Gear 22.12.2 brings further improvements to K3b, Kalendar, Kate, Kdenlive, KGet, KMail, Konsole, and other apps. Learn more here.

02 Feb 2023 4:00pm GMT

LXer Linux News

Open source Ray 2.2 boosts machine learning observability to help scale services like OpenAI's ChatGPT

Ray, the popular open-source machine learning (ML) framework, has released its 2.2 version with improved performance and observability capabilities, as well as features that can help to enable reproducibility.

02 Feb 2023 3:03pm GMT

Linux Today

The Open Source Initiative Improves Its Licensing Rules

The Open Source Initiative - defenders of open source - is making the approval process for new open-source licenses clearer and easier.

02 Feb 2023 3:00pm GMT

Steam Client Update Enables Big Picture Mode, Adds Linux Fixes

The biggest change in the new Steam Client update is the enablement of the new Big Picture mode by default. Learn more here.

02 Feb 2023 2:00pm GMT

LXer Linux News

Red Hat gives an ARM up to OpenShift Kubernetes operations

With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week.

02 Feb 2023 12:40pm GMT

Kernel Planet

Matthew Garrett: Blocking free API access to Twitter doesn't stop abuse

In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole number of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.

There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?

To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.

The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice, it's about trying to consolidate control of the platform.

02 Feb 2023 10:36am GMT

LXer Linux News

Revisited: termusic – terminal-based music player

When we reviewed termusic back in April 2022, we lamented that it was a strong candidate for anyone looking for a terminal-based music player, with one exception: the software lacked gapless playback.

02 Feb 2023 10:17am GMT

Automatically decrypt your disk using TPM2

Entering the passphrase to decrypt the disk at boot can become quite tedious. On modern systems a secure hardware chip called "TPM" (Trusted Platform Module) can store a secret to automatically decrypt your LUKS partitions.

02 Feb 2023 7:54am GMT

Use ChatGPT From The Command Line With This Wrapper

ChatGPT Wrapper is an unofficial open source command-line interface and Python API for interacting with ChatGPT.

02 Feb 2023 5:31am GMT

Red Hat Beds With Oracle in New Cloud Deal

Somebody should make a movie of this story, with Red Hat as the damsel who finds her power, and Oracle cast as the volatile leading man.

02 Feb 2023 3:08am GMT

Linux Today

WTF: Terminal Dashboard

WTF (also known as 'wtfutil') is a Go-based tool billed as "the personal information dashboard for your terminal." Learn more here.

02 Feb 2023 3:00am GMT

What Is Linux? and How Does Linux Work?

Linux is an open-source, community-developed operating system with the kernel at its core, alongside other tools, applications, and services. Learn more here.

02 Feb 2023 1:00am GMT

LXer Linux News

Convert Plain English To Commands Using GPT-3 Powered Shell Genie

Shell Genie is a new command line tool that lets you ask in plain English how to perform various tasks, and it gives you the shell command you need. To generate the commands, it uses OpenAI's GPT-3 or Free Genie, a free-to-use backend provided by the Shell Genie developer.

02 Feb 2023 12:45am GMT

01 Feb 2023

Linuxiac

How to Install VMware Workstation Player on Fedora

Get the most out of your Fedora system's virtualization capabilities by installing VMware Workstation Player. Learn how here!

01 Feb 2023 11:42pm GMT

LXer Linux News

Export a manpage to (almost) any format

At some point you may want to export a manpage to a file. Using 'man' options, you can convert a manual page to PDF, plain text or GROFF, among other formats.

01 Feb 2023 10:22pm GMT

How to Install Zeek Network Security Monitoring Tool on Ubuntu 22.04

Zeek is a free, open-source, and world-leading security monitoring tool used as a network intrusion detection system and network traffic analyzer. This post will show you how to install the Zeek network security tool on Ubuntu 22.04.

01 Feb 2023 7:59pm GMT

How to Install DokuWiki on Debian 11

DokuWiki is an open-source wiki application written in the PHP programming language. It is mainly aimed at creating documentation of any kind. All data is stored in plain text; hence no database server is required.

01 Feb 2023 5:36pm GMT

Linux Mint 21.2 “Victoria” Is Slated for Release in June 2023, Here’s What to Expect

The Linux Mint developers shared today some details on the next major release of their Ubuntu-based distribution, Linux Mint 21.2, which is slated for release this summer with new features and improvements.

01 Feb 2023 3:13pm GMT

GNOME 44 Alpha is Out, Shaping Up to Be A Moderate Release

First testing images of the upcoming GNOME 44 release are now available. Here's a quick look at the new features.

01 Feb 2023 12:50pm GMT

Command Line Internet Radio Player PyRadio 0.9.0 Stable Released With www.radio-browser.info Support

PyRadio, a command line Internet radio player for Linux, Windows and macOS, was updated to version 0.9.0 (stable) a couple of days ago, receiving new features such as support for Radio Browser (search, list and play https://www.radio-browser.info radio stations), a remote control server, and more.

01 Feb 2023 10:27am GMT

Monitoring Oracle Servers With Checkmk

Databases are essential for many IT processes. Their performance and reliability depend on many factors, and it makes sense to use a dedicated tool that helps you to stay on top of things. Monitoring your database with an external tool helps you identify performance issues proactively, but there are many factors to consider. With the wrong approach, you run the risk of missing valuable information and can also waste a lot of time configuring your database monitoring. In this tutorial, I will give a quick guide on how to monitor Oracle Database with Checkmk, a universal monitoring tool for all kinds of IT assets.

01 Feb 2023 8:04am GMT

How to Install ionCube Loader on Debian 11

This tutorial will explain how to install ionCube Loader on a Debian 11 server. IonCube is a PHP extension that can decode secured encrypted PHP files at runtime.

01 Feb 2023 5:41am GMT

30 Jan 2023

Linuxiac

OpenSnitch App-Level Firewall May Find a Home in Debian 12

A discussion that began in 2018 about adopting OpenSnitch in Debian repositories will probably find a resolution in Debian 12.

30 Jan 2023 2:45pm GMT

29 Jan 2023

Linuxiac

Budgie Desktop 10.7: A Sleek and Improved User Experience

The Budgie 10.7 desktop environment is here, bringing many new features and improvements. Check out what's new!

29 Jan 2023 10:12pm GMT

How to Install VS Code on Raspberry Pi OS in 3 Easy Steps

Get started coding on your Raspberry Pi with this guide on installing Visual Studio Code in just a few easy steps.

29 Jan 2023 1:40pm GMT

27 Jan 2023

Kernel Planet

Matthew Garrett: Further adventures in Apple PKCS#11 land

After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto

error appeared in the ssh output. Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.

Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.
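
To make that registration step concrete, here is a minimal sketch against the OpenSSL 1.1-era EC_KEY_METHOD API (which LibreSSL also provides); hw_sign and use_hw_signing are hypothetical names standing in for the code that actually talks to the PKCS#11 module:

#include <openssl/ec.h>
#include <openssl/ecdsa.h>

/* Hypothetical callback: forward the digest to the PKCS#11 module /
   hardware token instead of signing with an in-memory private key. */
static ECDSA_SIG *hw_sign(const unsigned char *dgst, int dgst_len,
                          const BIGNUM *kinv, const BIGNUM *r, EC_KEY *eckey)
{
    /* ... hand dgst to the token and build an ECDSA_SIG from the result ... */
    return NULL; /* placeholder */
}

static int use_hw_signing(EC_KEY *key)
{
    int (*orig_sign)(int, const unsigned char *, int, unsigned char *,
                     unsigned int *, const BIGNUM *, const BIGNUM *, EC_KEY *);
    EC_KEY_METHOD *meth = EC_KEY_METHOD_new(EC_KEY_get_default_method());

    if (meth == NULL)
        return 0;
    /* Keep the default DER-encoding wrapper but swap in the raw signing
       step, so a later ECDSA_do_sign() call lands in hw_sign(). */
    EC_KEY_METHOD_get_sign(meth, &orig_sign, NULL, NULL);
    EC_KEY_METHOD_set_sign(meth, orig_sign, NULL, hw_sign);
    return EC_KEY_set_method(key, meth);
}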

Except it doesn't under macOS. Running under a debugger and setting a breakpoint on ECDSA_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so it seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:

nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
  /* NIST P-256 keys take an Apple-only path that never consults any
     signing callback registered via EC_KEY_set_method() */
  return ECDSA_do_sign_new(dgst,dgst_len,eckey);
}
/* every other curve type goes through the standard path */
return ECDSA_do_sign_ex(dgst,dgst_len,NULL,NULL,eckey);

What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.

27 Jan 2023 11:39pm GMT

Linuxiac

Pale Moon 32 Browser Released with Web Compatibility Features

The new Pale Moon 32 web browser release fully supports the ECMAScript 2016-2020 JavaScript specifications.

27 Jan 2023 2:20pm GMT

26 Jan 2023

Linuxiac

Ubuntu Pro Subscription Is Here: What Does This Mean for Users?

Launched as a beta in October 2022, Ubuntu Pro subscription is now generally available to anyone and free to use on up to five computers.

26 Jan 2023 10:11pm GMT

OpenVPN 2.6.0 Released with Remote Entries Support

The new OpenVPN 2.6.0 release comes with OpenSSL 3.0 support and support for an unlimited number of connection entries and remote entries.

26 Jan 2023 1:41pm GMT

Kernel Planet

Paul E. Mc Kenney: What Does It Mean To Be An RCU Implementation?

Under Construction

A correspondent closed out 2022 by sending me an off-list email asking whether or not a pair of Rust crates (rcu_clean and left_right) were really implementations of read-copy update (RCU), with an LWN commenter throwing in crossbeam's epoch crate for good measure. At first glance, this is a pair of simple yes/no questions that one should be able to answer off the cuff.

What Is An RCU?

Except that there is quite a variety of RCU implementations in the wild. Even if we remain within the cozy confines of the Linux kernel, we have: (1) The original "vanilla" RCU, (2) Sleepable RCU (SRCU), (3) Tasks RCU, (4) Tasks Rude RCU, and (5) Tasks Trace RCU. These differ in more than just performance characteristics; in fact, it is not in general possible to mechanically convert (say) SRCU to RCU. The key attributes of RCU implementations are the marking of read-side code regions and data accesses on the one hand and some means of waiting on all pre-existing readers on the other. For more detail, see the 2019 LWN article, and for more background, see the Linux Foundation RCU presentations here and here.
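
To make those two attributes concrete, here is a minimal kernel-style sketch of the classic pattern (illustrative only; the config structure and both functions are invented for the example):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct config {
    int value;
};

static struct config __rcu *cur_config;

int read_value(void)
{
    struct config *c;
    int v;

    rcu_read_lock();                 /* mark the read-side region */
    c = rcu_dereference(cur_config); /* fetch the protected pointer */
    v = c ? c->value : -1;
    rcu_read_unlock();
    return v;
}

void update_value(int v)
{
    struct config *newc = kmalloc(sizeof(*newc), GFP_KERNEL);
    struct config *oldc;

    if (!newc)
        return;
    newc->value = v;
    oldc = rcu_dereference_protected(cur_config, 1); /* single updater assumed */
    rcu_assign_pointer(cur_config, newc);            /* publish the new version */
    synchronize_rcu(); /* wait for all pre-existing readers */
    kfree(oldc);       /* old version now unreachable, safe to free */
}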

The next sections provide an overview of the Linux-kernel RCU implementations' functional properties, with performance and scalability characteristics left as an exercise for the interested reader.

Vanilla RCU

Vanilla RCU has quite a variety of bells and whistles:

Sleepable RCU (SRCU)

SRCU has a similar variety of bells and whistles, but some important differences. The most important difference is that SRCU supports multiple domains, each represented by an srcu_struct structure. A reader in one domain does not block a grace period in another domain. In contrast, RCU is global in nature, with exactly one domain. On the other hand, the price SRCU pays for this flexibility is reduced amortization of grace-period overhead.

Tasks RCU

Tasks RCU was designed specially to handle the trampolines used in Linux-kernel tracing.

Tasks Rude RCU

By design, Tasks RCU does not wait for idle tasks. Something about them never doing any voluntary context switches on CPUs that remain idle for long periods of time. So trampolines that might be involved in tracing of code within the idle loop need something else, and that something is Tasks Rude RCU.

Tasks Trace RCU

Both Tasks RCU and Tasks Rude RCU disallow sleeping while executing in a given trampoline. Some BPF programs need to sleep, hence Tasks Trace RCU.

DYNIX/ptx rclock

The various Linux examples are taken from a code base in which RCU has been under active development for more than 20 years, which might yield an overly stringent set of criteria. In contrast, the 1990s DYNIX/ptx implementation of RCU (called "rclock" for "read-copy lock") was only under active development for about five years. The implementation was correspondingly minimal, as can be seen from this February 2001 patch (hat tip to Greg Lehey):

Perhaps this can form the basis of an RCU classification system, though some translation will no doubt be required to bridge from C to Rust. There is ownership, if nothing else!

RCU Classification and Rust RCU Crates

Except that the first RCU crate, rcu_clean, throws a monkey wrench into the works. It does not have any grace-period primitives, but instead provides a clean() function that takes a reference to an RCU-protected data item. The user invokes this at some point in the code where it is known that there are no readers, either within this thread or anywhere else. In true Rust fashion, in some cases, the compiler is able to prove the presence or absence of readers and issue a diagnostic when needed. The documentation notes that the addition of grace periods (also known as "epochs") would allow greater accuracy.

This sort of thing is not unprecedented. The userspace RCU library has long had an rcu_quiescent_state() function that can be invoked from a given thread when that particular thread is in a quiescent state, and thus cannot have references to any RCU-protected object. However, rcu_clean takes this a step further by having no RCU grace-period mechanism at all.
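
For illustration, here is roughly what that looks like with the userspace RCU library's QSBR flavour (a sketch; the read-side body is elided and the loop bound is arbitrary):

#include <urcu-qsbr.h> /* userspace RCU, QSBR flavour */

static void *reader_thread(void *arg)
{
    rcu_register_thread(); /* every thread using RCU must register itself */

    for (int i = 0; i < 1000000; i++) {
        rcu_read_lock(); /* a no-op in the QSBR flavour, but keeps code portable */
        /* ... read RCU-protected data ... */
        rcu_read_unlock();

        /* Announce that this thread currently holds no RCU references,
           allowing grace periods to complete. */
        rcu_quiescent_state();
    }

    rcu_unregister_thread();
    return NULL;
}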

Nevertheless, rcu_clean could be used to implement the add-only list RCU use case, so it is difficult to argue that it is not an RCU implementation. But it is clearly a very primitive implementation. That said, primitive implementations do have their place, for example:

In addition, an RCU implementation even more primitive than rcu_clean would omit the clean() function, instead leaking memory that had been removed from an RCU-protected structure.

The left_right crate definitely uses RCU in the guise of epochs, and it can be used for at least some of the things that RCU can be used for. It does have a single-writer restriction, though as the documentation says, you could use a Mutex to serialize at least some multi-writer use cases. In addition, it has long been known that RCU use cases involving only a single writer thread permit wait-free updaters as well as wait-free readers.

One might argue that the fact that the left_right crate uses RCU means that it cannot possibly be itself an implementation of RCU. Except that in the Linux kernel, RCU Tasks uses vanilla RCU, RCU Tasks Trace uses SRCU, and previous versions of SRCU used vanilla RCU. So let's give the left_right crate the benefit of the doubt, at least for the time being, but with the understanding that it might eventually instead be classified as an RCU use case rather than an RCU implementation.

The crossbeam epoch crate again uses RCU in the guise of epochs. It has explicit read-side markers in RAII guard form using the pin function and its Atomic pointers. Grace periods are computed automatically, and the defer method provides an asynchronous grace-period-wait function. As with DYNIX/ptx, the crossbeam epoch crate lacks any other means of waiting for grace periods, and it also lacks a callback-wait API. However, to its credit, and unlike DYNIX/ptx, this crate does provide safe means for handling pointers to RCU-protected data.

Here is a prototype classification system, again, leaving performance and scalability aside:

  1. Are there explicit RCU read-side markers? Of the Linux-kernel RCU implementations, RCU Tasks and RCU Tasks Rude lack such markers. Given the Rust borrow checker, it is hard to imagine an implementation without such markers, but feel free to prove me wrong.
  2. Are grace periods computed automatically? (If not, as in rcu_clean, none of the remaining questions apply.)
  3. Are there synchronous grace-period-wait APIs? All of the Linux-kernel implementations have them, and left_right appears to as well.
  4. Are there asynchronous grace-period-wait APIs? If so, are there callback-wait APIs? All of the Linux-kernel implementations have both, but left_right does not appear to. Providing them seems doable, but might result in more than two copies of recently-updated data structures. Crossbeam's epoch crate provides an asynchronous grace-period-wait function in the form of defer, but lacks a callback-wait API.
  5. Are there polled grace-period-wait APIs? The Linux-kernel RCU and SRCU implementations do.
  6. Are there multiple grace-period domains? The Linux-kernel SRCU implementation does.

But does this classification scheme work for your favorite RCU implementation? What about your favorite RCU use case?

26 Jan 2023 1:26am GMT

25 Jan 2023

Linuxiac

How to Install VMware Workstation Player on Ubuntu 22.04

This guide walks you step-by-step through installing VMware Workstation Player virtualization software on Ubuntu 22.04 LTS.

25 Jan 2023 8:10pm GMT

24 Jan 2023

Linuxiac

Tails 5.9 Fixes Numerous Bugs and Enhances Security Measures

Tails 5.9 mainly focuses on bug fixes from the previous release and comes with updated versions of the Tor software.

24 Jan 2023 8:54pm GMT

23 Jan 2023

Kernel Planet

Matthew Garrett: Build security with the assumption it will be used against your friends

Working in information security means building controls, developing technologies that ensure that sensitive material can only be accessed by people that you trust. It also means categorising people into "trustworthy" and "untrustworthy", and trying to come up with a reasonable way to apply that such that people can do their jobs without all your secrets being available to just anyone in the company who wants to sell them to a competitor. It means ensuring that accounts who you consider to be threats shouldn't be able to do any damage, because if someone compromises an internal account you need to be able to shut them down quickly.

And like pretty much any security control, this can be used for both good and bad. The technologies you develop to monitor users to identify compromised accounts can also be used to compromise legitimate users who management don't like. The infrastructure you build to push updates to users can also be used to push browser extensions that interfere with labour organisation efforts. In many cases there's no technical barrier between something you've developed to flag compromised accounts and the same technology being used to flag users who are unhappy with certain aspects of management.

If you're asked to build technology that lets you make this sort of decision, think about whether that's what you want to be doing. Think about who can compel you to use it in ways other than how it was intended. Consider whether that's something you want on your conscience. And then think about whether you can meet those requirements in a different way. If they can simply compel one junior engineer to alter configuration, that's very different to an implementation that requires sign-offs from multiple senior developers. Make sure that all such policy changes have to be clearly documented, including not just who signed off on it but who asked them to. Build infrastructure that creates a record of who decided to fuck over your coworkers, rather than just blaming whoever committed the config update. The blame trail should never terminate in the person who was told to do something or get fired - the blame trail should clearly indicate who ordered them to do that.

But most importantly: build security features as if they'll be used against you.

23 Jan 2023 10:44am GMT

22 Jan 2023

Kernel Planet

Kernel Podcast: S2E1 – 2023/01/21

Prologue

This is the pilot episode for what will become season 2 of the Linux Kernel Podcast. Back in 2008-2009 I recorded a daily "kernel podcast" that summarized the happenings of the Linux Kernel Mailing List (LKML). Eventually, daily became a little too much, and the podcast went weekly, followed by…not. This time around, I'm not committing to any specific cadence - let's call it "periodic" (every few weeks). In each episode, I will aim to broadly summarize the latest happenings in the "plumbing" of the Linux kernel, and occasionally related bits of userspace "plumbing" (glibc, systemd, etc.), as well as impactful toolchain changes that enable new features or rebaseline requirements. I welcome your feedback. Please let me know what you think about the format, as well as what you would like to see covered in future episodes. I'm going to play with some ideas over time. These may include "deep diving" into topics of interest to a broader audience. Keep in mind that this podcast is not intended to editorialize, but only to report on what is happening. Both this author, and others, have their own personal opinions, but this podcast aims to focus only on the facts, regardless of who is involved, or their motives.

On with the show.

For the week ending January 21st 2023, I'm Jon Masters and this is the Linux Kernel Podcast.

Summary

The latest stable kernel is Linux 6.1.7, released by Greg K-H on January 18th 2023.

The latest mainline (development) kernel is 6.2-rc4, released on January 15th 2023.

Long Term Stable 6.1?

The "stable" kernel series is maintained by Greg K-H (Kroah-Hartman), who posts hundreds of patches with fixes to each Linus kernel. This is where the ".7" comes in on top of Linux 6.1. Such stable patches are maintained between kernel releases, so when 6.2 is released, it will become the next "stable" kernel. Once every year or so, Greg will choose a kernel to be the next "Long Term Stable" (LTS) kernel that will receive even more patches, potentially for many years at a time. Back in October, Kaiwan N Billimoria (author of a book titled "Linux Kernel Programming"), seeking a baseline for the next edition, asked if 6.1 would become the next LTS kernel. A great amount of discussion has followed, with Greg responding to a recent ping by saying, "You tell me please. How has your testing gone for 6.1 so far? Does it work properly for you? Are you and/or your company willing to test out the -rc releases and provide feedback if it works or not for your systems?" and so on. This motivated various others to pile on with comments about their level of testing, though I haven't seen an official 6.1 LTS as of yet.

Linux 6.2 progress

Linus noted in his 6.2-rc4 announcement mail that this came "with pretty much everybody back from winter holidays, and so things should be back to normal. And you can see that in the size, this is pretty much bang in the middle of a regular rc size for this time in the merge window." The "merge window" is the period of time during which disruptive changes are allowed to be merged (typically the first two weeks of a kernel cycle prior to the first "RC") so Linus means to refer to a "cycle" and not "merge window" in his announcement.

Speaking of Linux 6.2, it counts among new features additional support for Rust. Linux 6.1 had added initial Rust patches capable of supporting a "hello world" kernel module (but not much more). 6.2 adds support for accessing certain kernel data structures (such as "task_struct", the per-task/process structure) and handles converting C-style structure "objects" with collections of (possibly null pointers) into the "memory safe" structures understood by Rust. As usual, Linux Weekly News (LWN) has a great article going into much more detail.

Ongoing Development

Richard Guy Briggs posted the 6th version of a patch series titled "fanotify: Allow user space to pass back additional audit info", which "defines a new flag (FAN_INFO) and new extensions that define additional information which are appended after the response structure returned from user space on a permission event". This allows audit logging to much more usefully capture why a policy allowed (or disallowed) certain access. The idea is to "enable the creation of tools that can suggest changes to the policy similar to how audit2allow can help refine labeled security".

Maximilian Luz posted a patch series titled "firmware: Add support for Qualcomm UEFI Secure Application" that allows regular UEFI applications to access EFI variables via proxy calls to the "UEFI Secure Application" (uefisecapp) running in Qualcomm's "secure world" implementation of Arm TrustZone. He has tested this on a variety of tablets, including a Surface Pro X. The application interface was reverse engineered from the Windows QcTrEE8180.sys driver.

Kees Cook requested a stable kernel backport of support for "oops_limit", a new kernel feature that seeks to limit the number of "oopses" allowed before a kernel will "panic". An "oops" is what happens when the kernel attempts to dereference a null or otherwise invalid pointer. Normal application software will crash (with a "segmentation fault") when this happens. Inside the kernel, the access is caught (provided it happened while in process context), and the associated (but perhaps unrelated) userspace task (process) is killed in the process of generating an "oops" with a backtrace. The kernel may at that moment leak critical resources associated with the process, such as file handles, memory areas, or locks. These aren't cleaned up. Consequently, it is possible that repeated oopses can be generated by an attacker and used for privilege escalation. The "oops_limit" patches mitigate this by limiting the number of such oopses allowed before the kernel will give up and "panic" (properly crash, and reboot, depending on config).
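
Conceptually, the mechanism amounts to something like the following sketch (illustrative pseudo-kernel code rather than the actual patch; the default limit shown is an assumption):

static atomic_t oops_count = ATOMIC_INIT(0);
static unsigned int oops_limit = 10000; /* assumed default, sysctl-tunable */

/* Called at the end of oops handling: after too many oopses, stop
   limping along (and leaking resources) and panic instead. */
static void check_oops_limit(void)
{
    int count = atomic_inc_return(&oops_count);

    if (oops_limit && count >= oops_limit)
        panic("Oopsed too often (kernel.oops_limit is %u)", oops_limit);
}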

Vegard Nossum posted version 3 of a patch series titled "kmod: harden user namespaces with new kernel.ns_modules_allowed syscall", which seeks to "reduce the attack surface and block exploits by ensuring that user namespaces cannot trigger module (auto-)loading".

Arseniy Lesin reposted an RFC (Request For Comments) of a "SIGOOM Proposal" that would enable the kernel to send a signal whenever a task (process) was in danger of being killed by the "OOM" (Out Of Memory) killer due to consuming too much anonymous (regular) memory. Willy Tarreau and Ted Ts'o noted that we were effectively out of space for new signals, so rather than declaring a new "SIGOOM", it would be better to allow a process to select which of the existing signals should be used when it registered to receive such notifications. Arseniy said they would follow up with patches taking this approach.

Architectures

On the architecture front, Mark Brown posted the 4th version of a patch series enabling support for Arm's SME (Scalable Matrix Extension) version 2 and 2.1. Huang Ying posted patches enabling "migrate_pages()" (which moves memory between NUMA nodes - memory chips specific to e.g. a certain socket in a server) to support batching of the new(er) memory "folios", rather than doing them one at a time. Batching allows associated TLB invalidation (tearing down the MMU's understanding of active virtual to physical addresses) to be batched, which is important on Intel systems using IPIs (Inter-Processor-Interrupts), which are reduced by 99.1% during the associated testing, increasing pages migrated per second on a 2P server by 291.7%.

Xin Li posted version 6 of a patch series titled "x86: Enable LKGS instruction". The "LKGS instruction is introduced with Intel FRED (flexible return and event delivery) specification. As LKGS is independent of FRED, we enable it as a standalone feature". LKGS (which is an abbreviation of "load into IA32_KERNEL_GS_BASE") "behaves like the MOV to GS instruction except that it loads the base address into the IA32_KERNEL_GS_BASE MSR instead of the GS segment's descriptor cache." This means that an Operating System can perform the necessary work to context switch a user-level thread by updating IA32_KERNEL_GS_BASE and avoiding an explicit set of balanced calls to SWAPGS. This is part of the broader "FRED" architecture defined by Intel in the Flexible Return and Event Delivery (FRED) Specification.

David E. Box posted version 2 of a patch series titled "Extend Intel On Demand (SDSi) support, noting that "Intel Software Defined Silicon (SDSi) is now known as Intel On Demand". These patches enable support for the Intel feature intended to allow users to load signed payloads into their CPUs to turn on certain features after purchasing a system. This might include (for example) certain accelerators present in future chips that could be enabled as needed, similar to how certain automobiles now include subscription-locked heated seats and other features.

Meanwhile, Anup Patel posted patches titled "RISC-V KVM virtualize AIA CSRs" that enable support for the new AIA (Advanced Interrupt Architecture), which replaces the legacy "PLIC", and Sia Jee Heng posted patches that enable "RISC-V Hibernation Support".

Final words

A number of conferences are returning in 2023, including the Linux Storage, Filesystem, Memory Management, and BPF (LSF/MM/BPF) Summit, which will be held from May 8 to May 10 at the Vancouver Convention Center. Josef Bacik noted that the CFP was now open.

Don't forget to give me your feedback on this pilot episode! jcm@jonmasters.org.

22 Jan 2023 4:16am GMT

19 Jan 2023

Kernel Planet

Dave Airlie (blogspot): vulkan video decoding: anv status update

After hacking the Intel media-driver and ffmpeg I managed to work out how the anv hardware mostly works now for h264 decoding.

I've pushed a branch [1] and an MR [2] to mesa. The basics of h264 decoding are working great on gen9 and compatible hardware. I've tested it on my one Lenovo WhiskeyLake laptop.

I have ported the code to hasvk as well, and once we get moving on this I'll polish that up and check we can h264 decode on IVB/HSW devices.

The one feature I know is missing is status reporting; radv can't support that, from what I can work out, due to firmware, but anv should be able to, so I might dig into that a bit.

[1] https://gitlab.freedesktop.org/airlied/mesa/-/tree/anv-vulkan-video-decode

[2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/20782

19 Jan 2023 3:53am GMT

18 Jan 2023

Kernel Planet

Matthew Garrett: PKCS#11, hardware keystores, and Apple frustrations

There's a bunch of ways you can store cryptographic keys. The most obvious is to just stick them on disk, but that has the downside that anyone with access to the system could just steal them and do whatever they wanted with them. At the far end of the scale you have Hardware Security Modules (HSMs), hardware devices that are specially designed to self destruct if you try to take them apart and extract the keys, and which will generate an audit trail of every key operation. In between you have things like smartcards, TPMs, Yubikeys, and other platform secure enclaves - devices that don't allow arbitrary access to keys, but which don't offer the same level of assurance as an actual HSM (and are, as a result, orders of magnitude cheaper).

The problem with all of these hardware approaches is that they have entirely different communication mechanisms. The industry realised this wasn't ideal, and in 1994 RSA released version 1 of the PKCS#11 specification. This defines a C interface with a single entry point - C_GetFunctionList. Applications call this and are given a structure containing function pointers, with each entry corresponding to a PKCS#11 function. The application can then simply call the appropriate function pointer to trigger the desired functionality, such as "Tell me how many keys you have" and "Sign this, please". This is both an example of C not just being a programming language and also of you having to shove a bunch of vendor-supplied code into your security critical tooling, but what could possibly go wrong.
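
A minimal consumer of that single entry point looks something like the sketch below (header name, module path, and error handling all vary between PKCS#11 SDKs; this just shows the function-pointer dance):

#include <dlfcn.h>
#include <stdio.h>
#include <pkcs11.h> /* CK_FUNCTION_LIST and friends; header name varies by SDK */

int main(int argc, char **argv)
{
    CK_RV (*get_list)(CK_FUNCTION_LIST_PTR_PTR);
    CK_FUNCTION_LIST_PTR p11;
    CK_ULONG nslots = 0;

    if (argc < 2)
        return 1;
    void *mod = dlopen(argv[1], RTLD_NOW); /* the vendor-supplied module */
    if (mod == NULL)
        return 1;

    /* C_GetFunctionList is the one symbol the spec requires. */
    get_list = (CK_RV (*)(CK_FUNCTION_LIST_PTR_PTR))
                   dlsym(mod, "C_GetFunctionList");
    if (get_list == NULL || get_list(&p11) != CKR_OK)
        return 1;

    /* Every other operation goes through the returned function pointers. */
    p11->C_Initialize(NULL);
    p11->C_GetSlotList(CK_TRUE, NULL, &nslots); /* "how many tokens do you have?" */
    printf("%lu slots with a token present\n", (unsigned long)nslots);
    p11->C_Finalize(NULL);
    dlclose(mod);
    return 0;
}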

(Linux distros work around this problem by using p11-kit, which is a daemon that speaks d-bus and loads PKCS#11 modules for you. You can either speak to it directly over d-bus, or for apps that only speak PKCS#11 you can load a module that just transports the PKCS#11 commands over d-bus. This moves the weird vendor C code out of process, and also means you can deal with these modules without having to speak the C ABI, so everyone wins)

One of my work tasks at the moment is helping secure SSH keys, ensuring that they're only issued to appropriate machines and can't be stolen afterwards. For Windows and Linux machines we can stick them in the TPM, but Macs don't have a TPM as such. Instead, there's the Secure Enclave - part of the T2 security chip on x86 Macs, and directly integrated into the M-series SoCs. It doesn't have anywhere near as many features as a TPM, let alone an HSM, but it can generate NIST curve elliptic curve keys and sign things with them and that's good enough. Things are made more complicated by Apple only allowing keys to be used by the app that generated them, so it's hard for applications to generate keys on behalf of each other. This can be mitigated by using CryptoTokenKit, an interface that allows apps to present tokens to the systemwide keychain. Although this is intended for allowing a generic interface for access to such tokens (kind of like PKCS#11), an app can generate its own keys in the Secure Enclave and then expose them to other apps via the keychain through CryptoTokenKit.

Of course, applications then need to know how to communicate with the keychain. Browsers mostly do so, and Apple's version of SSH can to an extent. Unfortunately, that extent is "Retrieve passwords to unlock on-disk keys", which doesn't help in our case. PKCS#11 comes to the rescue here! Apple ship a module called ssh-keychain.dylib, a PKCS#11 module that's intended to allow SSH to use keys that are present in the system keychain. Unfortunately it's not super well maintained - it got broken when Big Sur moved all the system libraries into a cache, but got fixed up a few releases later. Unfortunately every time I tested it with our CryptoTokenKit provider (and also when I retried with SecureEnclaveToken to make sure it wasn't just our code being broken), ssh would tell me "provider /usr/lib/ssh-keychain.dylib returned no slots" which is not especially helpful. Finally I realised that it was actually generating more debug output, but it was being sent to the system debug logs rather than the ssh debug output. Well, when I say "more debug output", I mean "Certificate []: algorithm is not supported, ignoring it", which still doesn't tell me all that much. So I stuck it in Ghidra and searched for that string, and the line above it was

iVar2 = __auth_stubs::_objc_msgSend(uVar7,"isEqual:",*(undefined8*)__got::_kSecAttrKeyTypeRSA);

with it immediately failing if the key isn't RSA. Which it isn't, since the Secure Enclave doesn't support RSA. Apple's PKCS#11 module appears incapable of making use of keys generated on Apple's hardware.

There's a couple of ways of dealing with this. The first, which is taken by projects like Secretive, is to implement the SSH agent protocol and have SSH delegate key management to that agent, which can then speak to the keychain. But if you want this to work in all cases you need to implement all the functionality in the existing ssh-agent, and that seems like a bunch of work. The second is to implement a PKCS#11 module, which sounds like less work but probably more mental anguish. I'll figure that out tomorrow.

18 Jan 2023 5:26am GMT

17 Jan 2023

Kernel Planet

Dave Airlie (blogspot): vulkan video decoding: av1 (yes av1) status update

Needless to say h264/5 weren't my real goals in life for video decoding. Lynne and myself decided to see what we could do to drive AV1 decode forward by creating our own extensions called VK_MESA_video_decode_av1. This is a radv only extension so far, and may expose some peculiarities of AMD hardware/firmware.

Lynne's blog entry[1] has all the gory details, so go read that first. (really read it first).

Now that you've read and understood all that, I'll just rant here a bit. Figuring out the DPB management and hw frame ref and curr_pic_idx fields was a bit of a nightmare. I spent a few days hacking up a lot of wrong things before landing on the thing we agreed was the least wrong, which was having the ffmpeg code allocate a frame index in the same fashion as the vaapi radeon implementation did. I had another hacky solution that involved overloading the slotIndex value to mean something that wasn't the DPB slot index, but it wasn't really any better. I think there may be something about the hw I don't understand, so hopefully we can achieve clarity later.

[1] https://lynne.ee/vk_mesa_video_decode_av1.html

17 Jan 2023 7:54am GMT

15 Jan 2023

Kernel Planet

Matthew Garrett: Blogging and microblogging

Long-term Linux users may remember that Alan Cox used to write an online diary. This was before the concept of a "Weblog" had really become a thing, and there certainly weren't any expectations around what one was used for - while now blogging tends to imply a reasonably long-form piece on a specific topic, Alan was just sitting there noting small life concerns or particular technical details in interesting problems he'd solved that day. For me, that was fascinating. I was trying to figure out how to get into kernel development, and was trying to read as much LKML as I could to figure out how kernel developers did stuff. But when you see discussion on LKML, you're frequently missing the early stages. If an LKML patch is a picture of an owl, I wanted to know how to draw the owl, and most of the conversations about starting in kernel development were very "Draw two circles. Now draw the rest of the owl". Alan's musings gave me insight into the thought processes involved in getting from "Here's the bug" to "Here's the patch" in ways that really wouldn't have worked in a more long-form medium.

For the past decade or so, as I moved away from just doing kernel development and focused more on security work instead, Twitter's filled a similar role for me. I've seen people just dumping their thought process as they work through a problem, helping me come up with effective models for solving similar problems. I've learned that the smartest people in the field will spend hours (if not days) working on an issue before realising that they misread something back at the beginning and that's helped me feel like I'm not unusually bad at any of this. It's helped me learn more about my peers, about my field, and about myself.

Twitter's now under new ownership that appears to think all the worst bits of Twitter were actually the good bits, so I've mostly bailed to the Fediverse instead. There's no intrinsic length limit on posts there - Mastodon defaults to 500 characters per post, but that's configurable per instance. But even at 500 characters, it means there's more room to provide thoughtful context than there is on Twitter, and what I've seen so far is more detailed conversation and higher levels of meaningful engagement. Which is great! Except it also seems to discourage some of the posting style that I found so valuable on Twitter - if your timeline is full of nuanced discourse, it feels kind of rude to just scream "THIS FUCKING PIECE OF SHIT IGNORES THE HIGH ADDRESS BIT ON EVERY OTHER WRITE" even though that's exactly the sort of content I'm there for.

And, yeah, not everything has to be for me. But I worry that as Twitter's relevance fades for the people I'm most interested in, we're replacing it with something that's not equivalent - something that doesn't encourage just dropping 50 characters or so of your current thought process into a space where it can be seen by thousands of people. And I think that's a shame.

15 Jan 2023 10:40pm GMT

10 Jan 2023

Kernel Planet

Matthew Garrett: Integrating Linux with Okta Device Trust

I've written about bearer tokens and how much pain they cause me before, but sadly wishing for a better world doesn't make it happen so I'm making do with what's available. Okta has a feature called Device Trust which allows to you configure access control policies that prevent people obtaining tokens unless they're using a trusted device. This doesn't actually bind the tokens to the hardware in any way, so if a device is compromised or if a user is untrustworthy this doesn't prevent the token ending up on an unmonitored system with no security policies. But it's an incremental improvement, other than the fact that for desktop it's only supported on Windows and MacOS, which really doesn't line up well with my interests.

Obviously there's nothing fundamentally magic about these platforms, so it seemed fairly likely that it would be possible to make this work elsewhere. I spent a while staring at the implementation using Charles Proxy and the Chrome developer tools network tab and had worked out a lot, and then Okta published a paper describing a lot of what I'd just laboriously figured out. But it did also help clear up some points of confusion and clarified some design choices. I'm not going to give a full description of the details (with luck there'll be code shared for that before too long), but here's an outline of how all of this works. Also, to be clear, I'm only going to talk about the desktop support here - mobile is a bunch of related but distinct things that I haven't looked at in detail yet.

Okta's Device Trust (as officially supported) relies on Okta Verify, a local agent. When initially installed, Verify authenticates as the user, obtains a token with a scope that allows it to manage devices, and then registers the user's computer as an additional MFA factor. This involves it generating a JWT that embeds a number of custom claims about the device and its state, including things like the serial number. This JWT is signed with a locally generated (and hardware-backed, using a TPM or Secure Enclave) key, which allows Okta to determine that any future updates from a device claiming the same identity are genuinely from the same device (you could construct an update with a spoofed serial number, but you can't copy the key out of a TPM so you can't sign it appropriately). This is sufficient to get a device registered with Okta, at which point it can be used with Fastpass, Okta's hardware-backed MFA mechanism.

As outlined in the aforementioned deep dive paper, Fastpass is implemented via multiple mechanisms. I'm going to focus on the loopback one, since it's the one that has the strongest security properties. In this mode, Verify listens on one of a list of 10 or so ports on localhost. When you hit the Okta signin widget, choosing Fastpass triggers the widget into hitting each of these ports in turn until it finds one that speaks Fastpass and then submits a challenge to it (along with the URL that's making the request). Verify then constructs a response that includes the challenge and signs it with the hardware-backed key, along with information about whether this was done automatically or whether it included forcing the user to prove their presence. Verify then submits this back to Okta, and if that checks out Okta completes the authentication.
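
The widget's discovery step is, in effect, a tiny loopback port scan. A sketch of the idea in C (the port numbers here are invented; the real list is an implementation detail of Verify, and the actual exchange is an HTTP challenge/response on top of this):

#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical candidate ports; the real agent listens on one of a
   fixed list of ten or so localhost ports. */
static const int candidate_ports[] = { 48059, 48060, 48061, 48062 };

int find_fastpass_listener(void)
{
    for (size_t i = 0;
         i < sizeof(candidate_ports) / sizeof(candidate_ports[0]); i++) {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(candidate_ports[i]);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            /* Something is listening: submit the Fastpass challenge here
               and check that the responder actually speaks the protocol. */
            close(fd);
            return candidate_ports[i];
        }
        close(fd);
    }
    return -1; /* no Fastpass agent found */
}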

Doing this via loopback from the browser has a bunch of nice properties, primarily around the browser providing information about which site triggered the request. This means the Verify agent can make a decision about whether to submit something there (ie, if a fake login widget requests your creds, the agent will ignore it), and also allows the issued token to be cross-checked against the site that requested it (eg, if g1thub.com requests a token that's valid for github.com, that's a red flag). It's not quite at the same level as a hardware WebAuthn token, but it has many of the anti-phishing properties.

But none of this actually validates the device identity! The entire registration process is up to the client, and clients are in a position to lie. Someone could simply reimplement Verify to lie about, say, a device serial number when registering, and there'd be no proof to the contrary. Thankfully there's another level to this to provide stronger assurances. Okta allows you to provide a CA root[1]. When Okta issues a Fastpass challenge to a device the challenge includes a list of the trusted CAs. If a client has a certificate that chains back to that, it can embed an additional JWT in the auth JWT, this one containing the certificate and signed with the certificate's private key. This binds the CA-issued identity to the Fastpass validation, and causes the device to start appearing as "Managed" in the Okta device management UI. At that point you can configure policy to restrict various apps to managed devices, ensuring that users are only able to get tokens if they're using a device you've previously issued a certificate to.

I've managed to get Linux tooling working with this, though there's still a few drawbacks. The main issue is that the API only allows you to register devices that declare themselves as Windows or MacOS, followed by the login system sniffing browser user agent and only offering Fastpass if you're on one of the officially supported platforms. This can be worked around with an extension that spoofs user agent specifically on the login page, but that's still going to result in devices being logged as a non-Linux OS which makes interpreting the logs more difficult. There's also no ability to choose which bits of device state you log: there's a couple of existing integrations, and otherwise a fixed set of parameters that are reported. It'd be lovely to be able to log arbitrary material and make policy decisions based on that.

This also doesn't help with ChromeOS. There's no real way to automatically launch something that's bound to localhost (you could probably make this work using Crostini but there's no way to launch a Crostini app at login), and access to hardware-backed keys is kind of a complicated topic in ChromeOS for privacy reasons. I haven't tried this yet, but I think using an enterprise force-installed extension and the chrome.enterprise.platformKeys API to obtain a device identity cert and then intercepting requests to the appropriate port range on localhost ought to be enough to do that? But I've literally never written any Javascript so I don't know. Okta supports falling back from the loopback protocol to calling a custom URI scheme, but once you allow that you're also losing a bunch of the phishing protection, so I'd prefer not to take that approach.

Like I said, none of this prevents exfiltration of bearer tokens once they've been issued, and there's still a lot of ecosystem work to do there. But ensuring that tokens can't be issued to unmanaged machines in the first place is still a step forwards, and with luck we'll be able to make use of this on Linux systems without relying on proprietary client-side tooling.

(Time taken to code this implementation: about two days, and under 1000 lines of new code. Time taken to figure out what the fuck to write: rather a lot longer)

[1] There's also support for having Okta issue certificates, but then you're kind of back to the "How do I know this is my device" situation

10 Jan 2023 5:48am GMT

08 Jan 2023

Kernel Planet

James Bottomley: Using SIP to Replace Mobile and Land Lines

If you read more than a few articles in my blog you've probably figured out that I'm pretty much a public cloud Luddite: I run my own cloud (including my own email server) and don't really have much of my data in any public cloud. I still have public cloud logins: everyone wants to share documents with Google nowadays, but Google regards people who don't use its services "properly" with extreme prejudice and I get my account flagged with a security alert quite often when I try to log in.

However, this isn't about my public cloud phobia, it's about the evolution of a single one of my services: a cloud based PBX. It will probably come as no surprise that the PBX I run is Asterisk on Linux but it may be a surprise that I've been running it since the early days (since 1999 to be exact). This is the story of why.

I should also add that the motivation for this article is that I'm unable to get a discord account: discord apparently has a verification system that requires a phone number and explicitly excludes any VOIP system, which is all I have nowadays. This got me to thinking that my choices must be pretty unusual if they're so pejoratively excluded by a company whose mission is to "Create Space for Everyone to find Belonging". I'm sure the suspicion that this is because Discord the company also offers VoIP services and doesn't like the competition is unworthy.

Early Days

I've pretty much worked remotely in the US all my career. In the 90s this meant having three phone lines (these were actually physical lines into the house): one for the family, one for work and one for the modem. When DSL finally became a thing and we were running a business, the modem was replaced by a fax machine. The minor annoyance was knowing which line was occupied, but if line 1 is the house and line 2 the office, it's not hard. The big change was unbundling. Initially this meant that call costs to the UK through the line provider skyrocketed, and US out-of-state rates followed. The way around this was to use unbundled providers via dial-around (a prefix number), but finding the cheapest was hard and the rates changed almost monthly. I needed a system that could add the current dial-around prefix for the relevant provider automatically. The solution: asterisk running on a server in the basement with two digium FX cards for the POTS lines (fax facility now being handled by asterisk) and Aastra 9113i SIP phones wired over the house ethernet with PoE injectors. Some fun jiggery pokery with the asterisk busy lamp feature allowed the lights on the SIP phones to indicate busy lines, and extensions.conf could be programmed to keep the correct dial-around prefix. For a bonus, asterisk can be programmed to do call screening, so now if the phone system doesn't recognize your number you get told we don't accept solicitation calls and to hang up now, otherwise press 0 to ring the house phone … and we've had peaceful dinner times ever after. It was also somewhat useful to have each phone in the house on its own PBX extension so people could call from the living room to my office without having to yell.

Enter SIP Trunking

While dial-arounds worked successfully for a few years, they always ended in problems (usually signalled by a massive phone bill) and a new dial-around was needed. However, by 2007 several companies were offering SIP trunking over the internet. The one I chose (Localphone, a UK-based company) was actually a successful ring-back provider before moving into SIP. They offered a pay-as-you-go service with phone termination in whatever country you were calling. The UK and US rates were really good, so suddenly the phone bills went down, and as a bonus they gave me a free UK incoming number (called a DID - Direct Inward Dialing) which family and friends in the UK could call us on at local UK rates. Pretty much every call apart from local ones was now being routed over the internet, although most incoming calls, apart from those from the UK, were still over the POTS lines.

The beginning of Mobile (For Me)

I was never really a big consumer of mobile phones, but that all changed in 2009 when Google presented all kernel developers with a Nexus One. Of course, they didn't give us SIM cards to go with it, so my initial experiments were all over wifi. I soon had CyanogenMod installed and found a SIP client called Sipdroid. This allowed me to install my Nexus One as a SIP extension on the house network. SIP calls over 2G data were not very usable (the bandwidth was too low), but implementing multiple codecs and Speex support got it to at least work (and it's actually what made me an Android developer … scratching my own itch again). The bandwidth problems on 2G evaporated on 3G and SIP became really usable (although I didn't have a mobile "plan", I did use pay-as-you-go SIMs while travelling). It struck me even then that all you really needed the mobile network for was data, and all calls could simply travel to a SIP provider. When LTE came along it seemed to confirm this view, because IP became the main communication layer.

I suppose I should add that I used the Nexus One long beyond its design life, even updating its protocol stack so it kept working. I did this partly because it annoyed certain people to see me with an old phone (I have a set of friends who were very amused by this and kept me supplied with a stock of Nexus One phones in case my old one broke) but mostly because of inertia and liking small phones.

SIP Becomes My Only Phone Service

In 2012, thanks to a work assignment, we relocated from the US to London. Since these moves take a while, I relocated the in-house PBX machine to a dedicated server in Los Angeles (my nascent private cloud), ditched the POTS connections and used the UK incoming number as our primary line, which could be delivered to us in temporary accommodation as well as in our final residence in London. This did have the somewhat inefficient result that when you called from the downstairs living room to the upstairs office, the call was routed over an 8,000 mile round trip from London to Los Angeles and back, but thanks to internet latency improvements, you couldn't really tell. The other problem was that the area code I'd chosen back in 2007 was in Whitby, some 200 miles north of London, but fortunately this didn't seem to be much of an issue except for London pizza delivery places, who steadfastly refused to believe we lived locally.

When the time came in 2013 to move back to Seattle in the USA, the adjustment was simply made by purchasing a 206 area code DID, plugging it into the asterisk system and continuing with a fully VoIP system based in Los Angeles. Although I got my incoming UK number for free as an early service consumer, renting a DID now costs around $1 per month depending on your provider.

SIP and the Home Office

I've worked remotely all my career (even when in London). However, I've usually worked for a company with a physical office setup, and that means a phone system. Most corporate PBXs use SIP under the covers or offer a SIP connector. So, by dint of finding the PBX administrator, I've usually managed to get a SIP extension that will simply plug into my asterisk PBX. Using correct dial plan routing (and a prefix for outbound calling), the office number usually routes to my mobile and desk phone, meaning I can make and receive calls from my office number wherever in the world I happen to be. For those who want to try this at home, the trick is to find the phone system administrator; if you just ask the IT department, chances are you'll simply get a blanket "no" because they don't understand it might be easy to do and definitely don't want to find out.
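As a rough sketch of that routing (again with invented peer and context names, not any real config), the extensions.conf side might look like:

[from-office]
; a call arriving on the corporate SIP extension rings desk phone and mobile
exten => s,1,Dial(SIP/desk&SIP/mobile,25)

[internal]
; dial 9 + number to route out via the office PBX, presenting the office number
exten => _9.,1,Dial(SIP/corporate-pbx/${EXTEN:1})

The actual registration to the corporate PBX lives in sip.conf; the point is just that the office extension becomes one more trunk the dialplan can route calls to and from.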

Evolution to Fully SIP (Data Only) Mobile

Although I said above that I maintained a range of in-country mobile SIMs, this became less true as the difficulty of running in-country SIMs increased (most started to insist you add cash or use them fairly regularly). When COVID hit in 2020 and I had no ability to travel, my list of in-country SIMs was reduced to one from 3 UK, largely because they allowed you to keep your number provided you maintained a balance (and they had a nice internet roaming agreement which meant you paid UK data rates in a nice range of countries). The big problem with giving up a mobile number was losing text messaging (SMS) when travelling. For years I've been running an XMPP server, but the subset of my friends with XMPP accounts has always been under 10%, so it wasn't very practical (actually, this is somewhat untrue because I wrote an XMPP to Google Chat bridge, but the interface became very impedance-mismatched as Google moved to rich media).

The major events that got me to move away from XMPP and the Nexus One were the shutdown of the 3G network in the US and the viability of the Matrix federated chat service (the Matrix Android client relied on too many modern APIs ever to be backported to the version of Android that ran on the Nexus One). Of the available LTE phones, I chose the Pixel 3 as the smallest and most open one with the best price/performance (and rapidly became acquainted with the fact that only some of them can actually be rooted) and LineageOS 17.1 (Android 10). The integration of SIP with the Dialer is great (I can now use SIP on the car's bluetooth, yay!) but I rapidly ran into severe bugs in the Google SIP implementation (which hasn't been updated for years). I managed to find and fix all the bugs (or at least those that affected me most; repositories here, all beginning with android_ and having the jejb-10 branch) but that does now mean I'm stuck on Android 10, since Google ripped SIP out in Android 12.

For messaging I adopted Matrix (apart from the Plumbers Matrix problem, I haven't really written about it, since Matrix on Debian testing just works out of the box) and set up bridges to Signal, Google Chat, Slack and WhatsApp (the WhatsApp one requires you to be running WhatsApp on your phone, but I run mine on an Android VM in my cloud), all using the 3 UK SIM number where they require a mobile number confirmation. The final thing I did was to get a universal roaming data SIM and put it in my phone, meaning I now rely on Matrix for messaging and SIP for voice when I travel, because the data SIM has no working mobile number at all (either for voice or SMS). In many ways, this is no hardship: I never really had a permanent SMS number when travelling because of the use of in-country SIMs, so no-one has a number for me they rely on for SMS.

Conclusion and Problems

Although I implied above that I can't receive SMS, that's not quite true: one of my VoIP numbers does accept inbound SMS and can send outbound. The problem is that the messages don't come over the SIP MESSAGE protocol but instead go to a web page in the provider backend, making them inconvenient to use and meaning I have to know a message is coming (although I do use it for things like Delta boarding passes, which only send the location of the web page to receive pkpasses over SMS). However, this isn't usually a problem because most people I know have moved on from SMS to rich messaging over one of the protocols I have (and if someone came along with a new protocol, well, I can install a bridge for that).

In terms of SIP over an IP substrate giving rise to unbundled services, I could claim to be half right, since most modern phone-like services have a SIP signalling core. However, the unbundling never really came about: the silo provider just moved from landline to mobile (or a mobile resale service like Google Fi). Indeed, today, if you give anyone your US phone number they invariably assume it is a mobile (and then wonder why you don't reply to their SMS messages). This mobile-assumption problem can be worked around by emphasizing "it's a landline" every time you give out your VoIP number, but people don't always retain the information.

So what about the future? I definitely still like the way my phone system works … having a single number for the house which any household member can answer from anywhere, and side numbers for travelling, really suits me, and I have the technical skills to maintain it indefinitely (provided the SIP trunking providers still exist). But I can see the day coming when the Discord intolerance of non-siloed numbers spreads and most silos require non-VoIP phone numbers with the same prejudice, locking out people who don't comply in much the same way as is already happening with email; hopefully that day for VoIP is somewhat further off.

08 Jan 2023 7:25pm GMT

07 Jan 2023

feedKernel Planet

Matthew Garrett: Changing firmware config that doesn't want to be changed

Update: There's actually a more detailed writeup of this here that I somehow missed. Original entry follows:

Today I had to deal with a system that had an irritating restriction - a firmware configuration option I really wanted to be able to change appeared as a greyed-out entry in the configuration menu. Some emails revealed that this was a deliberate choice on the part of the system vendor, so that seemed to be that. Thankfully, in this case there was a way around it.

One of the things UEFI introduced was a mechanism to generically describe firmware configuration options, called Visual Forms Representation (or VFR). At the most straightforward level, this lets you define a set of forms containing questions, with each question associated with a value in a variable. Questions can be made dependent upon the answers to other questions, so you can have options that appear or disappear based on how other questions were answered. An example in this language might be something like:
CheckBox Prompt: "Console Redirection", Help: "Console Redirection Enable or Disable.", QuestionFlags: 0x10, QuestionId: 53, VarStoreId: 1, VarStoreOffset: 0x39, Flags: 0x0
In which question 53 asks whether console redirection should be enabled or disabled. Other questions can then rely on the answer to question 53 to influence whether or not they're relevant (eg, if console redirection is disabled, there's no point in asking which port it should be redirected to). As a checkbox, if it's set then the value will be set to 1, and 0 otherwise. But where's that stored? Earlier we have another declaration:
VarStore GUID: EC87D643-EBA4-4BB5-A1E5-3F3E36B20DA9, VarStoreId: 1, Size: 0xF4, Name: "Setup"
A UEFI variable called "Setup" with GUID EC87D643-EBA4-4BB5-A1E5-3F3E36B20DA9 is declared as VarStoreId 1 (matching the declaration in the question) and is 0xf4 bytes long. The question indicates that its value lives at offset 0x39 within that variable. Rewriting Setup-EC87D643-EBA4-4BB5-A1E5-3F3E36B20DA9 with a modified value at offset 0x39 will allow direct manipulation of the config option.
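If the variable happens to be visible at runtime (as discussed below, the interesting ones often aren't, and this one had to be changed from inside Boot Services), a minimal sketch of that byte-poke on Linux via efivarfs might look like this - the GUID and offset are just the examples above, it needs root, and the bricking caveats at the end of this post apply in full:

#!/usr/bin/env python3
# Sketch: flip one byte of a UEFI variable through Linux efivarfs.
# Only works for variables carrying the runtime-access attribute;
# Boot Services-only variables must be changed before ExitBootServices.
import subprocess

var = "/sys/firmware/efi/efivars/Setup-ec87d643-eba4-4bb5-a1e5-3f3e36b20da9"
offset = 0x39  # VarStoreOffset from the question
newval = 0x01  # checkbox enabled

with open(var, "rb") as f:
    data = bytearray(f.read())

# efivarfs prefixes the variable data with a 4-byte attributes word
data[4 + offset] = newval

# efivarfs files are immutable by default, so clear the flag first
subprocess.run(["chattr", "-i", var], check=True)
with open(var, "wb") as f:
    f.write(bytes(data))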

But how do we get this data in the first place? VFR isn't built into the firmware directly - instead it's compiled into something called Internal Forms Representation, or IFR. UEFI firmware images are typically in a standardised format, and you can use UEFITool to extract individual components from the firmware. If you search for "Setup" in UEFITool there's a good chance you'll be able to find the component that implements the setup UI. Running IFRExtractor-RS against it will then pull out any IFR data it finds and decompile it into something resembling the original VFR. And now you have the list of variables and offsets and the configuration associated with them, even if your firmware has chosen to hide those options from you.

Given that a bunch of these config values may be security relevant, this seems a little concerning - what stops an attacker who has access to the OS from simply modifying these variables directly? UEFI avoids this by having two separate stages of boot, one where the full firmware ("Boot Services") is available, and one where only a subset ("Runtime Services") is available. The transition is triggered by the OS calling ExitBootServices, indicating the handoff from the firmware owning the hardware to the OS owning the hardware. This is also considered a security boundary - before ExitBootServices everything running has been subject to any secure boot restrictions, and afterwards applications can do whatever they want. UEFI variables can be flagged as being visible in both Boot and Runtime Services, or can be flagged as Boot Services only. As long as all the security critical variables are Boot Services only, an attacker should never be able to run untrusted code that could alter them.

In my case, the firmware option I wanted to alter had been enclosed in "GrayOutIf True" blocks. But the questions were still defined and the code that acted on those options was still present, so simply modifying the variables while still inside Boot Services gave me what I wanted. Note that this isn't a given! The presence of configuration options in the IFR data doesn't mean that anything will actually read and make use of a variable - a vendor may have flagged options as unavailable and then removed the code, but never removed the config data. Please also note that the reason an option was hidden may be that it doesn't actually work, and altering any of these variables risks bricking your hardware in a way that's extremely difficult to recover from. And there's no requirement that vendors use IFR to describe their configuration, so you may not get any help here anyway.

In summary: if you do this you may break your computer. If you don't break your computer, it might not work anyway. I'm not going to help you try to break your computer. And I didn't come up with any of this, I just didn't find it all written down in one place while I was researching it.


07 Jan 2023 9:53am GMT

Matthew Garrett: Asking ChatGPT to write my security-sensitive code for me

I have a patchset that makes it safe to use hibernation under Linux even in a secure boot world, but it relies on preventing userland from using one of the TPM PCRs[1] that would otherwise be available to it. TPM 2 devices support emulating PCRs in NVRAM, which would obviously be beneficial in not taking away a scarce resource, but the docs suggest there's no way to tie key generation to an NVRAM value, which makes this more complicated.

So, why not ask a computer to do this for me? I asked ChatGPT to write me some code that would bind a TPM key to an NVRAM value. After a couple of false starts it told me: "To bind an encryption key to the value of a TPM NV index, you can use the TPM2_NV_DefineSpace command to define an NV index with the TPMA_NV_BIND attribute." Woo! That sounds perfect. Only one problem - I can't actually find the TPMA_NV_BIND attribute defined anywhere in the spec. Well, ask the AI again.

Here is the definition of TPMA_NV_BIND from the TPM 2.0 specification:
TPMA_NV_BIND (BIT 17) If SET, the contents of the NV Index are bound to the platform hierarchy.


Ok, that sounds convincing. Where is it in the spec?

The definition of TPMA_NV_BIND is on page 126 of the TPM 2.0 specification, in section 9.7.1 "TPMA_NV".

Huh. Nope. Page 126 says nothing about this. Section 9.7.1 says nothing about this. The TPMA_NV bits are actually described on page 168 of volume 2 of the spec, in section 13.4. Bit 17 is TPMA_NV_OWNERREAD. TPMA_NV_BIND does not appear in the spec anywhere. I told the AI that.

I apologize for the error in my previous response. You are correct that the TPMA_NV_BIND attribute does not appear in the TPM 2.0 specification.

To bind an encryption key to the value of a TPM NV index, you can use the TPM2_NV_DefineSpace command with the TPMA_NV_BIND attribute.


I'm not afraid of being unemployed in the near future.

[1] "Platform Configuration Registers" - a modern TPM has 24 of these. When something security sensitive happens, you generate a hash of that event and pass it to the TPM. The TPM appends that to an existing PCR value and then hashes that concatenated value and sets the PCR to that. This means the PCR value depends not only on the values provided, but also the order they're provided in. Various TPM operations can be made conditional on the PCR values meeting specific criteria.


07 Jan 2023 9:20am GMT

06 Jan 2023

feedKernel Planet

Linux Plumbers Conference: LPC 2022 Attendee Survey Summary

We had 206 responses to the Linux Plumbers survey in 2022 which, given the total of 401 in-person and 320 virtual participants, provides high confidence in the feedback. Overall, about 89% of those registered showed up, either in person or virtually. As this was the first time we've tried to do this type of hybrid event, the feedback has been essential as we start planning for something similar in 2023. One piece of input we'll definitely be incorporating next year is to have separate surveys for in-person and virtual attendees! So a heartfelt "thank you" to everyone who participated in this survey and waded through the questions that weren't relevant to them to share their experience!

Overall: 91.8% of respondents were positive about the event, with 6.3% neutral and 1.9% dissatisfied. 80.1% indicated that the discussions they participated in helped resolve problems. The BOF track was popular and we're looking to include it again in 2023. Because this was our first in-person event since the pandemic started, we ran it as a hybrid event with reduced in-person registration compared to prior years, as we were unsure how many people would be willing to travel and what our venue could hold. The conference sold out of regular tickets very quickly after registration opened, though, so we set up a waiting list. With the changing travel conditions and cancellations, we were able to work through the daunting waiting list and offer spots to everyone on it by the conference date. Venue capacity is something we're looking at closely for next year, and we will outline the plan when the CFP opens early this year.

Based on feedback from prior years, we videotaped all of the sessions, and the videos are now posted. There are 195 videos from the conference! The committee has also linked them from the detailed schedule: click the video link in the presentation materials section of any given talk or discussion. 72% of respondents plan to watch them to clarify points, and another 10% plan to watch them to catch up on sessions they were not able to attend.

Venue: In general, 45.6% of respondents considered the venue size a good match, but a significant portion (47%) would have preferred it to be bigger. The room size was considered effective for participation by 78.6% of respondents.

Content: In terms of track feedback, the Linux Plumbers Refereed track and the Kernel Summit track were rated very relevant by almost all respondents who attended them. The BOFs track was positively received and will continue. The hallway track continues to be regarded as the most relevant and appreciated. We will continue to evaluate options for making private meeting and hack rooms available for groups who need to meet onsite.

Communication: The emails from the committee continue to be positively received. We were able to incorporate some of the suggestions from prior surveys, and are continuing to look for options to make hybrid-event communication between in-person and virtual attendees work better.

Events: Our evening events are feeling the pressure from the number of attendees, especially combined with the other factors from the pandemic. The first night's event had more issues than the closing event, and we appreciate the constructive suggestions in the write-in comments. The survey was still positive about the events overall, so we'll see what we can do to make this part of the "hallway track" more effective for everyone next year.

There were lots of great suggestions in response to the "what one thing would you like to see changed" question, and the program committee has met to discuss them. Once a venue is secured, we'll review them again to see what is possible to implement this coming year.

Thank you again to the participants for their input and help in improving the Linux Plumbers Conference. The conference is planned to be in North America in the October/November timeframe for 2023. As soon as we secure a venue, dates and location information will be posted in a blog by the committee chair, Christian Brauner.

06 Jan 2023 5:15pm GMT

29 Dec 2022

feedKernel Planet

Dave Airlie (blogspot): vulkan video encoding: radv update

After the video decode stuff was fairly nailed down, Lynne from ffmpeg nerdsniped^Wtalked me into looking at h264 encoding.

The AMD VCN encoder engine has a very different interface from the decode engine and required a lot of code porting from the radeon vaapi driver. Prior to Xmas I burned a few days on typing that all in, and yesterday I finished typing and moved on to debugging the pile of trash I'd just typed in.

Lynne, meanwhile, had written the initial ffmpeg-side implementation, and today we threw the two at each other and filed off a lot of sharp edges. We were rewarded with valid encoded frames.

The code at this point is only doing I-frame encoding; we will work on P/B frames when we get a chance.

There are also a bunch of hacks and workarounds for API/hw mismatches that I need to consult the Vulkan spec and AMD teams about, but we have a good starting point to move forward from. I'll also be offline for a few days on holidays, so I'm not sure it will get much further until mid-January.

My branch is [1]. Lynne's ffmpeg branch is [2].

[1] https://gitlab.freedesktop.org/airlied/mesa/-/commits/radv-vulkan-video-enc-wip

[2] https://github.com/cyanreg/FFmpeg/tree/vulkan_decode

29 Dec 2022 7:22am GMT