02 Nov 2024


Russell Coker: More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I'm still very happy with it; the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover the best resolution to have on a laptop if price isn't an issue, but it's at least 1440p for a 14″ display, which is 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display for 323DPI. Apple apparently uses the term "Retina Display" to mean something in the range of 250DPI to 300DPI, so my current laptop is below "Retina" while the most expensive new Thinkpads are above it.
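
For reference, those DPI figures check out if you divide the diagonal pixel count of a 2560*1440 or 3840*2400 panel by the 14″ diagonal:

sqrt(2560^2 + 1440^2) / 14 = 2937 / 14 ≈ 210 DPI
sqrt(3840^2 + 2400^2) / 14 = 4528 / 14 ≈ 323 DPI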

I did some tests on external displays and found that this Thinkpad, along with a Dell Latitude of the same form factor and about the same age, can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit u/Carlioso1234 pointed out this specs page which says it supports a maximum of 3 displays including the built-in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays in total, which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made an 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32″ 5K display; currently they all seem to be 27″ and I've found that even at 4K resolution 32″ is better than 27″.

With the typical configuration of Linux and the BIOS the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use, but as the finger-touch functionality is broken I couldn't confirm it for finger input. On r/thinkpad u/p9k told me how to fix this problem [3]. I had to set the BIOS to Win 10 Sleep aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service :

# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3

[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"

[Install]
WantedBy=multi-user.target
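
After creating the unit it needs to be enabled and started; assuming the file name above, something like the following should do it:

systemctl daemon-reload
systemctl enable --now thinkpad-wakeup-config.service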

Now it works fine, for stylus at least. I still get kernel error messages like the following which don't seem to cause problems:

wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.

When it wasn't working I got the above but also kernel error messages like:

wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events

This change affected the way suspend etc. operates. Now when I connect the laptop to power it will leave suspend mode. I've configured KDE to suspend when the lid is closed and there's no monitor connected.

02 Nov 2024 8:05am GMT

Russell Coker: Moving Between Devices

I previously wrote about the possibility of transferring work between devices as an alternative to "convergence" (using a phone or tablet as a desktop) [1]. This idea has been implemented in some commercial products already.

MrWhosTheBoss made a good YouTube video reviewing recent Huawei products [2]. At 2:50 in that video he shows how you can link a phone and tablet, control one from the other, drag and drop running apps and files between phone and tablet, mirror the screen between devices, etc. He describes playing a video on one device and having it appear on the other; I hope that it actually launches a new instance of the player app, as the Google Chromecast failed in the market due to remote display being laggy. At 7:30 in that video he starts talking about the features that are available when you have multiple Huawei devices, starting with the ability to move a Bluetooth pairing for earphones to a different device.

At 16:25 he shows what Huawei is doing to get apps going, including allowing apk files to be downloaded and creating what they call "Quick Apps", which are instances of a web browser configured to use just one web site and make it look like a discrete app. We need something like this for FOSS phone distributions - does anyone know of a browser that's good for it?

Another thing that we need is an easy way of transferring open web pages between systems. Chrome allows sending pages between systems, but it's proprietary, limited to Chrome only, and also takes an unreasonable amount of time. KDEConnect allows sharing clipboard contents, which can be used to send URLs that can then be pasted into a browser, but the process of copying the URL, sending it via KDEConnect, and pasting it into the other device is unreasonably slow. The design of Chrome with a "Send to your devices" menu option on the tab bar is OK. But ideally we need a "Send to device" for all tabs of a window as well, we need it to run on free software, and we need it to support using your own server rather than someone else's server (AKA "the cloud"). Some of the KDEConnect functionality but using a server rather than a direct connection over the same Wifi network (or LAN if bridged to Wifi) would be good.
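
In the meantime KDEConnect can already push a URL straight to a paired device from the command line, which is slightly less clunky than the clipboard route; a rough sketch, with a hypothetical device name and URL:

kdeconnect-cli --list-available
kdeconnect-cli -n "my-phone" --share "https://example.com/some-article"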

What else do we need?

02 Nov 2024 8:03am GMT

Russell Coker: What is a Workstation?

I recently had someone describe a Mac Mini as a "workstation", which I strongly disagree with. The Wikipedia page for Workstation [1] says that it's a type of computer designed for scientific or technical use, for a single user, and would commonly run a multi-user OS.

The Mac Mini runs a multi-user OS and is designed for a single user. The issue is whether it is for "scientific or technical use". A Mac Mini is a nice little graphical system which could be used for CAD and other engineering work. But I believe that the low capabilities of the system and lack of expansion options make it less of a workstation.

The latest versions of the Mac Mini (to be officially launched next week) have up to 64G of RAM and up to 8T of storage. That is quite decent compute power for a small device. For comparison, the HP ML110 Gen9 workstation I'm currently using was released in 2021 and has 256G of RAM and 4 * 3.5″ SAS bays, so I could easily put in a few 4TB NVMe devices and some hard drives larger than 10TB. The HP Z640 workstation I have was released in 2014 and has 128G of RAM, 4*2.5″ SATA drive bays, and 2*3.5″ SATA drive bays. Previously I had a Dell PowerEdge T320 which was released in 2012 and had 96G of RAM and 8*3.5″ SAS bays.

In CPU and GPU power the recent Mac Minis will compare well to my latest workstations, but they compare poorly to workstations from as much as 12 years ago for RAM and storage. Which is more important depends on the task: if you have to do calculations on 80G of data with lots of scans through the entire data set, then a system with 64G of RAM will perform very poorly while a system with 96G and a CPU less than half as fast will perform better. A Dell PowerEdge T320 from 2012 fully loaded with 192G of RAM will outperform a modern Mac Mini on many such tasks, and the T420 supported up to 384G.

Another issue is generic expansion options. I expect a workstation to have a number of PCIe slots free for GPUs and other devices. The T320 I used to use had a PCIe power cable for a power-hungry GPU and I think all the T320 and T420 models with high power PSUs supported that.

I think that a usable definition of a "workstation" is a system having a feature set that is typical of servers (ECC RAM, lots of storage for RAID, maybe hot-swap storage devices, maybe redundant PSUs, and lots of expansion options) while also being suitable for running on a desktop or under a desk. The Mac Mini is nice for running on a desk but that's the only workstation criterion it meets. I think that ECC RAM should be a mandatory criterion and any system without it isn't a workstation. That excludes most Apple hardware. The Mac Mini is more of a thin client than a workstation.

My main workstation with ECC RAM could run 3 VMs that each have more RAM than the largest Mac Mini that will be sold next week.

If 32G of non-ECC RAM is considered enough for a "workstation" then you could get an Android phone that counts as a workstation - and it will probably cost less than a Mac Mini.

02 Nov 2024 5:03am GMT

01 Nov 2024


Colin Watson: Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Ansible

I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn't make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone.

The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process:

This should now get back into testing tomorrow.

OpenSSH

Martin-Éric Racine reported that ssh-audit didn't list the ext-info-s feature as being available in Debian's OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page.
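
For anyone wanting to check what their own server advertises, ssh-audit can simply be pointed at a host; a minimal invocation (host name is a placeholder):

ssh-audit server.example.org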

I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn't regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct.

On upstream's advice, I cherry-picked some key exchange fixes needed for big-endian architectures.

Python team

I packaged python-evalidate, needed for a new upstream version of buildbot.

The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface.

A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python's "dead batteries" PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian's python-wadllib source package to allow its tests to pass. I'll come back and fix this properly once we sort out the multipart vs. python-multipart packaging.

tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left.

I tracked down an nltk regression that caused build failures in many other packages.

I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooij, but it needed a little extra work).

I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto).

I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

I fixed broken symlinks in python-treq.

I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream).

I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions.

I tried to fix a regression in python-scruffy, but I need testing feedback.

I requested removal of python-testing.mysqld.

01 Nov 2024 12:19pm GMT

Russ Allbery: Review: Overdue and Returns

Review: Overdue and Returns, by Mark Lawrence

Publisher: Mark Lawrence
Copyright: June 2023
Copyright: February 2024
ASIN: B0C9N51M6Y
ASIN: B0CTYNQGBX
Format: Kindle
Pages: 99

Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review.

These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain.

There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter.

I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical.

I would not waste your time with these even if you are enjoying the novels.

"Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn.

Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5)

"Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6)

"About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5)

Rating: 5 out of 10

01 Nov 2024 4:11am GMT

Paul Wise: FLOSS Activities October 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

All work was done on a volunteer basis.

01 Nov 2024 12:57am GMT

Taavi Väänänen: Custom domains on the Wikimedia Cloud VPS web proxy

The shared web proxy used on Wikimedia Cloud VPS now has technical support for using arbitrary domains (and not just wmcloud.org subdomains) in proxy names. I think this is a good example of how software slowly evolves over time as new requirements emerge, with each new addition building on top of the previous ones.

According to the edit history on Wikitech, the web proxy service has its origins in 2012, although the current idea where you create a proxy and map it to a specific instance and port was only introduced a year later. (Before that, it just directly mapped the subdomain to the VPS instance with the same name.)

There were some smaller changes over the following years, like the migration to acme-chief for TLS certificate management, but the overall logic stayed very similar until 2020 when the wmcloud.org domain was introduced. That was implemented by adding a config option listing all possible domains, so future domain additions would be as simple as adding the new domain to that list in the configuration.

Then the changes start becoming more frequent:

01 Nov 2024 12:00am GMT

31 Oct 2024


Gunnar Wolf: Do you have a minute..?

Do you have a minute...?

…to talk about the so-called "Intellectual Property"?

31 Oct 2024 10:07pm GMT

30 Oct 2024

feedPlanet Debian

Russell Coker: Links October 2024

David Brin wrote an interesting article about AI ecosystems and how humans might work with machines on creative projects [1]. Also he's right about "influencers" being like fungi.

Cory Doctorow wrote an interesting post about DRM, coalitions, and cheating [2]. It seems that people like me who want "trusted computing" to secure their own computers don't fit well in any of the coalitions.

The CHERI capability system for using extra hardware to validate jump addresses is an interesting advance in computer science [3]. The lecture is from the seL4 Summit; this sort of advance in security goes well with a formally proven microkernel. I hope that this becomes a checkbox when ordering a custom RISC-V design.

Bunnie wrote an insightful blog post about how the Mossad might have gone about implementing the exploding pager attack [4]. I guess we will see a lot more of this in future; it seems easy to do.

Interesting blog post about Control Flow Integrity in the V8 engine of Chrome [5].

Interesting blog post about the new mseal() syscall which can be used by CFI among other things [6].

This is the Linux kernel documentation about the Control-flow Enforcement Technology (CET) Shadow Stack [7]. Unfortunately not enabled in Debian/Unstable yet.

ARM added support for Branch Target Identification in version 8.5 of the architecture [8].

The CEO of Automattic has taken his dispute with WPEngine to an epic level; this video catalogues it, and I wonder what is wrong with him [9].

NuShell is an interesting development in shell technology which runs on Linux and Windows [10].

Interesting article about making a computer game without coding using ML [11]. I doubt that it would be a good game, but maybe educational for kids.

Krebs has an insightful article about location tracking by phones which is surprisingly accurate [12]. He has provided information on how to opt out of some of it on Android, but we need legislative action!

Interesting YouTube video about how to make a 20kW microwave oven and what it can do [13]. Don't do this at home, or anywhere else!

The Void editor is an interesting project, a fork of VSCode that supports DIRECT connections to LLM systems where you don't have their server acting as a middle-man and potentially snooping [14].

30 Oct 2024 10:04am GMT

Dirk Eddelbuettel: gcbd 0.2.7 on CRAN: More Mere Maintenance

Another pure maintenance release 0.2.7 of the gcbd package is now on CRAN. The gcbd package proposes a benchmarking framework for LAPACK and BLAS operations (as the library can be exchanged in a plug-and-play sense on suitable OSs) and records results in a local database. Its original motivation was to also compare to GPU-based operations. However, as it is both challenging to keep CUDA working, and as packages on CRAN providing the basic functionality appear to come and go, testing the GPU feature can be challenging. The main point of gcbd is now to actually demonstrate that 'yes indeed' we can just swap BLAS/LAPACK libraries without any change to R, or R packages. The 'configure / rebuild R for xyz' often seen with 'xyz' being Goto or MKL is simply plain wrong: you really can just swap them (on proper operating systems, and R configs - see the package vignette for more). But no matter how often we aim to correct this record, it invariably raises its head another time.
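
On Debian and its derivatives that swap is normally just an update-alternatives call; a minimal sketch, assuming an alternative BLAS such as OpenBLAS is installed (the alternative names shown are the amd64 ones and may differ on other architectures):

update-alternatives --list libblas.so.3-x86_64-linux-gnu
sudo update-alternatives --config libblas.so.3-x86_64-linux-gnu
sudo update-alternatives --config liblapack.so.3-x86_64-linux-gnu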

This release accommodates a CRAN change request as we were referencing the (now only suggested) package gputools. As hinted in the previous paragraph, it was once on CRAN but is not right now so we adjusted our reference.

CRANberries also provides a diffstat report for the latest release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 Oct 2024 1:10am GMT

28 Oct 2024


Sven Hoexter: GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

Updating from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. when kustomize is involved, with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy
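
For clusters not managed with kustomize, the same cleanup can presumably be done directly with kubectl; resource and namespace names below are placeholders:

kubectl get networkpolicy --all-namespaces
kubectl delete networkpolicy dummy -n my-namespace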

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic; sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

28 Oct 2024 4:43pm GMT

Thomas Lange: 30.000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It had reached 10.000 jobs in March 2021 and 20.000 jobs in June 2023. A nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3% cloud image
11% live ISO
86% install ISO

Distribution

2% bullseye
8% trixie
12% ubuntu 24.04
78% bookworm

Misc

Execution Times

The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The number of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. Times given are calculated from the jobs of the past two weeks.

Job type             Avg     Max
install no desktop   1 min   2 min
install GNOME        2 min   5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type             Avg     Max
live no desktop      4 min   6 min
live GNOME           8 min   11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story of your FAI usage to share please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

28 Oct 2024 9:32am GMT

27 Oct 2024


Enrico Zini: Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step by step explanation for what is going on in code that I'm about to push to my team.

Let's take this relatively straightforward Python code. It has a function printing an int, and a decorator that makes its argument optional, taking it from a global default if missing:

from unittest import mock

default = 42


def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works nicely as expected:

$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None

It lacks functools.wraps and typing, though. Let's add them.

Adding functools.wraps

After adding a simple @functools.wraps, mock unexpectedly stops working:

# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'

This is the new code, with explanations and a fix:

# Introduce functools
import functools
from unittest import mock

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    # Fix:
    # del wrapped.__wrapped__

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.getsignature, which follows __wrapped__ set
    # by functools.wraps, which points to a wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()

Adding typing

For simplicity, from now on let's change Fiddle.print to match its wrapped signature:

      # Give up with making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)

Typing with ParamSpec

# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock

default = 42

P = ParamSpec("P")


def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)

mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:

test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"

Typing with Callable

We can use explicit Callable argument lists:

# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock

default = 42

A = TypeVar("A")


# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:

test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]

typing's documentation says:

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

Let's do that!

Typing with Protocol, take 1

# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

New mypy complaints:

test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]

What happens with class methods is that the function object has a __get__ method that generates bound versions of itself. Our Printer protocol does not define it, so mypy is now unable to type the bound method correctly.

Typing with Protocol, take 2

So... we add the function descriptor methods to our Protocol!

A lot of this is taken from this discussion.

# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)

# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040


class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""

    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""


class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""

    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)

    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...

    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...

    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""

    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself;
    # the Printer protocol now defines __get__ as well, so mypy can type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works! It's typed! And mypy is happy!

27 Oct 2024 3:46pm GMT

26 Oct 2024


Steve McIntyre: Mini-Debconf in Cambridge, October 10-13 2024

Group photo

Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!

Cakes

Hacking together

minicamp

For the first two days, we had a "mini-debcamp" with a disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.

Sessions and talks

Secure Boot talk

Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)

Video team awesomeness

Video team in action

Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.

A great time for all

Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!

Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!

26 Oct 2024 8:54pm GMT

Russell Coker: The CUPS Vulnerability

The Announcement

Late last month there was an announcement of a "severity 9.9 vulnerability" allowing remote code execution that affects "all GNU/Linux systems (plus others)" [1]. For something to affect all Linux systems it would have to be either a kernel issue or an sshd issue. The announcement included complaints about the lack of response from vendors and "And YES: I LOVE hyping the sh1t out of this stuff because apparently sensationalism is the only language that forces these people to fix".

He seems to have a different experience of reporting bugs to me; I have had plenty of success getting bugs fixed without hyping them. I just report the bug, wait a while, and it gets fixed. I have reported potential security bugs without even bothering to try to prove that they were exploitable (any situation where you can make a program crash is potentially exploitable); I just report it and it gets fixed. I was very dubious about his ability to determine how serious a bug is and to accurately report it, so this wasn't a situation where I was waiting for it to be disclosed to discover if it affected me. I was quite confident that my systems wouldn't be at any risk.

Analysis

Not All Linux Systems Run CUPS

When it was published my opinion was proven to be correct: it turned out to be a series of CUPS bugs [2]. To describe that as "all GNU/Linux systems (plus others)" seems like a vast overstatement, maybe a good thing to say if you want to be a TikTok influencer but not if you want to be known for computer security work.

For the Debian distribution the cups-browsed package (which seems to be the main exploitable one) is recommended by cups-daemon; as I have my Debian systems configured to not install recommended packages by default, that means it wasn't installed on any of my systems. Also the vast majority of my systems don't do printing and therefore don't have any part of CUPS installed.
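
A quick way to check whether a Debian system is even in scope is to see whether cups-browsed is installed and whether anything is listening on the CUPS browse port; a rough check:

# is the package installed at all?
dpkg -l cups-browsed
# is anything listening on port 631?
ss -tulpn | grep ':631'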

CUPS vs NAT

The next issue is that in Australia most home ISPs don't have IPv6 enabled and CUPS doesn't do the things needed to allow receiving connections from the outside world via NAT with IPv4. If inbound port 631 is blocked on both TCP and UDP, as is the default on Australian home Internet, or if there is a correctly configured firewall in place, then the network is safe from attack. There is a feature called UPnP port forwarding [3] to allow server programs to ask a router to send inbound connections to them; this is apparently usually turned off by default in router configuration. If it is enabled then there are Debian packages of software to manage this; the miniupnpc package has the client (which can request NAT changes on the router) [4]. That package is not installed on any of my systems and for my home network I don't use a router that runs UPnP.
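
For anyone unsure what their router does, the upnpc client from the miniupnpc package can be used to see whether UPnP answers at all and to list any existing port redirections:

upnpc -l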

The only program I knowingly run that uses UPnP is Warzone2100, and as I don't play network games that doesn't happen. Also as an aside, in version 4.4.2-1 of warzone2100 in Debian and Ubuntu I made it use Bubblewrap to run the game in a container. So a Remote Code Execution bug in Warzone 2100 won't be an immediate win for an attacker (exploits via X11 or Wayland are another issue).

MAC Systems

Debian has had AppArmor enabled by default since Buster was released in 2019 [5]. There are claims that AppArmor will stop this exploit from doing anything bad.

To check SE Linux access I first used the "semanage fcontext" command to check the context of the binary; cupsd_exec_t means that the daemon runs as cupsd_t. Then I checked what file access is granted with the sesearch program: mostly just access to temporary files, cupsd config files, the faillog, the Kerberos cache files (not used on the Kerberos client systems I run), Samba run files (might be a possibility of exploiting something there), and the security_t type used for interfacing with kernel security infrastructure. I then checked the access to the security class and found that it is permitted to check contexts and access-vectors - not access that can be harmful.

The next test was to use sesearch to discover what capabilities are granted, which unfortunately include the sys_admin capability - a capability that allows many sysadmin tasks that could be harmful (I just checked the Fedora source and Fedora 42 has the same access). Whether the sys_admin capability can be used to do bad things with the limited access cupsd_t has to device nodes etc is not clear. But this access is undesirable.

So the SE Linux policy in Debian and Fedora will stop cupsd_t from writing SETUID programs that can be used by random users for root access and stop it from writing to /etc/shadow etc. But the sys_admin capability might allow it to do hostile things and I have already uploaded a changed policy to Debian/Unstable to remove that. The sys_rawio capability also looked concerning but it's apparently needed to probe for USB printers and as the domain has no access to block devices it is otherwise harmless. Below are the commands I used to discover what the policy allows and the output from them.

# semanage fcontext -l|grep bin/cups-browsed
/usr/bin/cups-browsed                              regular file       system_u:object_r:cupsd_exec_t:s0 
# sesearch -A -s cupsd_t -c file -p write
allow cupsd_t cupsd_interface_t:file { append create execute execute_no_trans getattr ioctl link lock map open read rename setattr unlink write };
allow cupsd_t cupsd_lock_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_log_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_runtime_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_rw_etc_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_tmp_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t faillog_t:file { append getattr ioctl lock open read write };
allow cupsd_t init_tmpfs_t:file { append getattr ioctl lock read write };
allow cupsd_t krb5_host_rcache_t:file { append create getattr ioctl link lock open read rename setattr unlink write }; [ allow_kerberos ]:True
allow cupsd_t print_spool_t:file { append create getattr ioctl link lock open read relabelfrom relabelto rename setattr unlink write };
allow cupsd_t samba_var_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write }; [ allow_kerberos ]:True
allow cupsd_t usbfs_t:file { append getattr ioctl lock open read write };
# sesearch -A -s cupsd_t -c security
allow cupsd_t security_t:security check_context; [ allow_kerberos ]:True
allow cupsd_t security_t:security { check_context compute_av };
# sesearch -A -s cupsd_t -c capability
allow cupsd_t cupsd_t:capability net_bind_service; [ allow_ypbind ]:True
allow cupsd_t cupsd_t:capability { audit_write chown dac_override dac_read_search fowner fsetid ipc_lock kill net_bind_service setgid setuid sys_admin sys_rawio sys_resource sys_tty_config };
# sesearch -A -s cupsd_t -c capability2
allow cupsd_t cupsd_t:capability2 { block_suspend wake_alarm };
# sesearch -A -s cupsd_t -c blk_file

Conclusion

This is an example of how not to handle security issues. Some degree of promotion is acceptable but this is very excessive and will result in people not taking security announcements seriously in future. I wonder if this is even a good career move by the researcher in question: will enough people believe that they actually did something good here that it outweighs the number of people who think it's misleading at best?

26 Oct 2024 6:51am GMT

25 Oct 2024


Jonathan Dowland: Behringer Model-D (synths I didn't buy)

Whilst researching what synth to buy, I learned of the Behringer [1] Model-D [2]: a 2018 clone of the 1970 Moog Minimoog, in a desktop form factor.

Behringer Model-D

In common with the original Minimoog, it's a monophonic analogue synth, featuring three audible oscillators [3], Moog's famous 12-ladder filter and a basic envelope generator. The Model-D has lost the keyboard from the original and added some patch points for the different stages, enabling some slight re-routing of the audio components.

1970 Moog Minimoog

Since I was focussing on more fundamental, back-to-basics instruments, this was very appealing to me. I'm very curious to find out what's so compelling about the famous Moog sound. The relative lack of features feels like an advantage: less to master. The additional patch points make it a little more flexible and offer a potential gateway into the world of modular synthesis. The Model-D is also very affordable: about £200. I'll never own a real Moog.

For this to work, I would need to supplement it with some other equipment. I'd need a keyboard (or press the Micron into service as a controller); I would want some way of recording and overdubbing (same as with any synth). There are no post-mix effects on the Model-D, such as delay, reverb or chorus, so I may also want something to add those.

What stopped me was partly the realisation that there was little chance that a perennial beginner, such as I, could eke anything novel out of a synthesiser design that's 54 years old. Perhaps that shouldn't matter, but it gave me pause. Whilst the Model-D has patch points, I don't have anything to connect to them, and I'm firmly wanting to avoid the Modular Synthesis money pit. The lack of effects and polyphony could make it hard to live-sculpt a tone.

I started characterizing the Model-D as the "heart" choice, but it seemed wise to instead go for a "head" choice.

Maybe another day!


  1. There's a whole other blog post of material I could write about Behringer and their clones of classic synths, some long out of production, and others, not so much. But, I decided to skip on that for now.
  2. Taken from the fact that the Minimoog was a productised version of Moog's fourth internal prototype, the model D.
  3. Two oscillators are more common in modern synths.

25 Oct 2024 3:56pm GMT