13 Jun 2025

feedPlanet Debian

Reproducible Builds (diffoscope): diffoscope 298 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 298. This version includes the following changes:

[ Chris Lamb ]
* Handle RPM's HEADERSIGNATURES and HEADERIMMUTABLE specially to avoid
  unnecessarily large diffs. Based almost entirely on code by Daniel Duan.
  (Closes: reproducible-builds/diffoscope#410)
* Update copyright years.

You can find out more by visiting the project homepage.

13 Jun 2025 12:00am GMT

12 Jun 2025

feedPlanet Debian

Dirk Eddelbuettel: #50: Introducing ‘almm: Activate-Linux (based) Market Monitor’

Welcome to post 50 in the R4 series.

Today we reconnect to a previous post, namely #36 on pub/sub for live market monitoring with R and Redis. It introduced both Redis as well as the (then fairly recent) extensions to RcppRedis to support the publish-subscribe ("pub/sub") model of Redis. In short, it manages both subscribing clients as well as producers for live, fast and lightweight data transmission. Using pub/sub is generally more efficient than the (conceptually simpler) 'poll-sleep' loops, as polling creates CPU and network load. Subscriptions are lighter weight as they get notified; they are also a little (but not much!) more involved as they require a callback function.

We should mention that Redis has a recent fork in Valkey, which arose when the former committed one of these not-uncommon-among-db-companies license suicides (which, happy to say, they reversed more recently), so that we now have both the original as well as this leading fork (among others). Both work, the latter is now included in several Linux distros, and the C library hiredis used to connect to either is still licensed permissively as well.

All this came about because Yahoo! Finance recently had another 'hiccup' in which they changed something, leading to some data clients having hiccups of their own. This includes the GNOME applet Stocks Extension I had been running. There is a lively discussion on its issue #120 suggesting, for example, a curl wrapper (which then makes each access a new system call).

Separating data acquisition and presentation becomes an attractive alternative, especially given how the standard Python and R accessors to the Yahoo! Finance service continued to work (and how per post #36 I already run data acquisition). Moreover, and somewhat independently, it occurred to me that the cute (and both funny in its pun, and very pretty in its display) ActivateLinux program might offer an easy-enough way to display updates on the desktop.

There were two aspects to address. First, the subscription side needed to be covered in either plain C or C++. That, it turns out, is very straightforward: there is existing documentation and there are prior examples (e.g. at StackOverflow), as well as the ability to have an LLM generate a quick stanza, as I did with Claude. A modified variant is now in the example repo 'redis-pubsub-examples' in file subscriber.c. It is deliberately minimal and the directory does not even have a Makefile: just compile and link against both libevent (for the event loop controlling this) and libhiredis (for the Redis or Valkey connection). This should work on any standard Linux (or macOS) machine with those two (very standard) libraries installed.
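For illustration, a minimal subscriber along these lines might look as follows. This is a sketch rather than the actual subscriber.c from that repo, and the host, port and channel name ("prices") are assumptions made for the example.

/*
 * Minimal Redis/Valkey pub/sub subscriber sketch using the hiredis async API
 * together with libevent.
 * Build (with libhiredis-dev and libevent-dev installed):
 *   cc -o subscriber subscriber.c -lhiredis -levent
 */
#include <stdio.h>
#include <string.h>
#include <event2/event.h>
#include <hiredis/hiredis.h>
#include <hiredis/async.h>
#include <hiredis/adapters/libevent.h>

/* Invoked for the subscribe confirmation and for every published message. */
static void on_message(redisAsyncContext *c, void *r, void *privdata) {
    (void)c; (void)privdata;
    redisReply *reply = r;
    /* Published messages arrive as the array ["message", <channel>, <payload>]. */
    if (reply && reply->type == REDIS_REPLY_ARRAY && reply->elements == 3 &&
        strcmp(reply->element[0]->str, "message") == 0)
        printf("%s: %s\n", reply->element[1]->str, reply->element[2]->str);
}

int main(void) {
    struct event_base *base = event_base_new();            /* libevent event loop */
    redisAsyncContext *c = redisAsyncConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connect error: %s\n", c ? c->errstr : "allocation failure");
        return 1;
    }
    redisLibeventAttach(c, base);                           /* hook hiredis into the loop */
    redisAsyncCommand(c, on_message, NULL, "SUBSCRIBE prices");
    event_base_dispatch(base);                              /* run until interrupted */
    return 0;
}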

The second aspect was trickier. While we can get Claude to modify the program to also display under x11, it still uses a single controlling event loop. It took a little bit of probing on my end to understand how to modify (the x11 use of) ActivateLinux, but as always it was reasonably straightforward in the end: instead of one single while loop awaiting events, we now first check for pending events and deal with them if present, but otherwise do not idle and wait; we continue … in another loop that also checks on the Redis or Valkey "pub/sub" events. So two thumbs up to vibe coding, which clearly turned me into an x11-savvy programmer too…

The result is in a new (and currently fairly bare-bones) repo almm. It includes all files needed to build the application, borrowed with love from ActivateLinux (which is GPL-licensed, as is of course our minimal extension) and adds the minimal modifications we made, namely linking with libhiredis and some minimal changes to x11/x11.c. (Supporting wayland as well is on the TODO list, and I also need to release a new RcppRedis version to CRAN as one currently needs the GitHub version.)

We also made a simple mp4 video with a sound overlay which describes the components briefly:

Comments and questions welcome. I will probably add a little bit of command-line support to almm. Selecting the symbol subscribed to is currently done in the most minimal way via the environment variable SYMBOL (NB: not SYM as the video using the default value shows). I also worked out how to show the display on only one of my multiple monitors, so I may add an explicit screen id selector too. A little bit of discussion (including minimal Docker use around r2u) is also in issue #121 where I first floated the idea of having StocksExtension listen to Redis (or Valkey). Other suggestions are most welcome, please use issue tickets at the almm repository.
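For reference, selecting a symbol then looks like this (assuming the built binary is called almm; the ticker is just an example):

SYMBOL=SPY ./almm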

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

12 Jun 2025 4:42pm GMT

11 Jun 2025

feedPlanet Debian

Gunnar Wolf: Understanding Misunderstandings - Evaluating LLMs on Networking Questions

This post is a review for Computing Reviews of Understanding Misunderstandings - Evaluating LLMs on Networking Questions, an article published in the Association for Computing Machinery (ACM) SIGCOMM Computer Communication Review.

Large Language Models have awed the world, emerging as the fastest-growing application of all time: ChatGPT reached 100 million active users in January 2023, just two months after its launch. After an initial cycle, they have gradually been mostly accepted and incorporated into various workflows, and their basic mechanics are no longer beyond the understanding of people with moderate computer literacy. Now, given the technology is better understood, we face the question of how convenient LLM chatbots are for different occupations. This article embarks on the question of how useful LLMs can be for networking applications.

This article systematizes querying three popular LLMs (GPT-3.5, GPT-4 and Claude 3) with questions taken from several network management online courses and certifications, and presents a taxonomy of six axes along which the incorrect responses were classified: Accuracy (correctness of the answers provided by LLMs), Detectability (how easily errors in the LLM output can be identified), Cause (for each incorrect answer, the underlying causes behind the error), Explainability (the quality of explanations with which the LLMs support their answers), Effects (impact of wrong answers on the users) and Stability (whether a minor change, such as the change of the order of prompts, yields vastly different answers for a single query).

The authors also measure four strategies towards improving answers: Self-correction (giving the LLM back the original question and received answer, as well as the expected correct answer, as part of the prompt), One-shot prompting (adding to the prompt, "when answering user questions, follow this example" followed by a similar correct answer), Majority voting (using the answer that most models agree upon) and Fine tuning (further training on a specific dataset to adapt the LLM to the particular task or domain). The authors observed that, while some of those strategies were marginally useful, they sometimes resulted in degraded performance.

The authors queried the commercially available instances of Claude and GPT, reaching quite high overall results (89.4% for Claude 3, 88.7% for GPT-4 and 76.0% for GPT-3.5), with scores over 90% for basic subjects, but faring notably worse in topics that require understanding and converting between different numeric notations, such as working with IP addresses, even when trivial (e.g. presenting the subnet mask for a given network address expressed in the typical IPv4 dotted-quad representation).

As a last item in the article, the authors mentioned they also compared performance with three popular open source models (Llama3.1, Gemma2 and Mistral with their default settings). They mention that, although those models are almost 20 times smaller than the GPT-3.5 commercial model used, they reached comparable performance levels. Sadly, the article does not delve deeper into these models, which can be deployed locally and adapted to specific scenarios.

The article is easy to read and does not require deep mathematical or AI-related knowledge. It presents a clear comparison along the described axes for the 503 multiple-choice questions presented. This article can be used as a guide for structuring similar studies over different fields.

11 Jun 2025 9:58pm GMT

Sven Hoexter: HaProxy: Two Ways of Activating PROXY Protocol

If you ever face the need to activate the PROXY Protocol in HaProxy (e.g. if you're as unlucky as I am, and you have to use a Google Cloud TCP proxy load balancer), be aware that there are two ways to do that. Both are part of the frontend configuration.

accept-proxy

This one is the big hammer and forces the usage of the PROXY protocol on all connections. Sample:

      frontend vogons
          bind *:2342 accept-proxy ssl crt /etc/haproxy/certs/vogons/tls.crt

tcp-request connection expect-proxy

If you have to, e.g. during a migration phase, receive traffic both directly (without the PROXY protocol header) and from a proxy (with the header), there is also a more flexible option based on a tcp-request connection action. Sample:

      frontend vogons
          bind *:2342 ssl crt /etc/haproxy/certs/vogons/tls.crt
          tcp-request connection expect-proxy layer4 if { src 35.191.0.0/16 130.211.0.0/22 }

Source addresses here are those of GCP global TCP proxy frontends. Replace with whatever suits your case. Since this is happening just after establishing a TCP connection, there is barely anything else available to match on besides the source address.

HaProxy Documentation

11 Jun 2025 3:54pm GMT

Iustin Pop: This blog finally goes git-annex!

A long, long time ago…

I have a few pictures on this blog, mostly in earlier years, because even with small pictures the git repository soon grew to 80MiB. This is not much in absolute terms, but the actual Markdown/Haskell/CSS/HTML total size is tiny compared to the pictures, PDFs and fonts. I realised I needed a better solution, probably about ten years ago, and that I should investigate git-annex. Then time passed, and I heard about git-lfs, so I thought that's the way forward.

Now, I recently got interested again into doing something about this repository, and started researching.

Detour: git-lfs

I was sure that git-lfs, being supported by large providers, would be the modern solution. But to my surprise, git-lfs is very server-centric, which in hindsight makes sense, but for a home setup it's not very good. Maybe I misunderstood, but git-lfs is more a protocol/method for a forge to store files, rather than an end-user solution. But then you need to back up those files separately (together with the rest of the forge), or implement another way of safeguarding them.

Further details, such as the fact that it keeps two copies of the files (one in the actual checked-out tree, one in internal storage), mean it's not a good solution. Well, for my blog yes, but not in general. Then posts on Reddit about horror stories (people being locked out of GitHub due to quota, as an example), or this Stack Overflow post about git-lfs constraining how one uses git, convinced me that's not what I want. To each their own, but not for me: I might want to push this blog's repo to GitHub, but I definitely wouldn't want in that case to pay for GitHub storage for my blog images (which are copies, not originals). And yes, even in 2025, those quotas are real (GitHub limits), and I agree with GitHub: storage and large bandwidth can't be free.

Back to the future: git-annex

So back to git-annex. I thought it was going to be a simple thing, but oh boy, was I wrong. It took me half a week of continuous (well, in free time) reading and discussions with LLMs to understand a bit how it works. I think, honestly, it's a bit too complex, which is why the workflows page lists seven (!) levels of workflow complexity, from fully-managed to fully-manual. IMHO, respect to the author for the awesome tool, but if you need a web app to help you manage git, it hints that the tool is too complex.

I made the mistake of running git annex sync once, only to realise it actually starts pushing to my upstream repo and creating new branches and whatnot, so after enough reading, I settled on workflow 6/7, since I don't want another tool to manage my git history. Maybe I'm an outlier here, but everything "automatic" is a bit too much for me.

Once you do manage to understand how git-annex works (on the surface, at least), it is a pretty cool thing. It uses a git-annex git branch to store metainformation, and that is relatively clean. If you do run git annex sync, it creates some extra branches, which I don't like, but meh.

Trick question: what is a remote?

One of the most confusing things about git-annex was understanding its "remote" concept. I thought a "remote" is a place where you replicate your data. But no, that's a special remote. A normal remote is a git remote, but one which is expected to be git/ssh/with command line access. So if you have a git+ssh remote, git-annex will not only try to push its above-mentioned branch, but also copy the files. If such a remote is on a forge that doesn't support git-annex, then it will complain and get confused.

Of course, if you read the extensive docs, you just do git config remote.<name>.annex-ignore true, and it will understand that it should not "sync" to it.

But, aside from this case, git-annex expects that all checkouts and clones of the repository are both metadata and data. And if you do any annex commands in them, all other clones will know about them! This can be unexpected, and you find people complaining about it, but nowadays there's a solution:

git clone … dir && cd dir
git config annex.private true
git annex init "temp copy"

This is important. Any "leaf" git clone must be followed by that annex.private true config, especially on CI/CD machines. Honestly, I don't understand why by default clones should be official data stores, but it is what it is.

I settled on not making any of my checkouts "stable", but only the actual storage places. Except those are not git repositories, but just git-annex storage things. I.e., special remotes.

Is it confusing enough yet? 😄

Special remotes

The special remotes, as said, are what I expected to be the normal git annex remotes, i.e. places where the data is stored. But well, they exist, and while I'm only using a couple of simple ones, there is a large number of them. Among the interesting ones: git-lfs; a remote that allows also storing the git repository itself (git-remote-annex), although I'm a bit confused about this one; and most of the common storage providers via the rclone remote.

Plus, all of the special remotes support encryption, so this is a really neat way to store your files across a large number of things, and handle replication, number of copies, from which copy to retrieve, etc. as you wish.

And many other features

git-annex has tons of other features, so to some extent, the sky's the limit. Automatic selection of what to add to git-annex vs plain git, encryption handling, number of copies, clusters, computed files, etc. etc. etc. I still think it's cool but too complex, though!

Uses

Aside from my blog post, of course.

I've seen blog posts/comments about people using git-annex to track/store their photo collection, and I could see very well how remote encrypted repos (any of the services supported by rclone) could be an N+2 copy or so. For me, tracking photos would be a bit too tedious, but it could maybe work after more research.

A more practical thing would probably be replicating my local movie collection (all legal, to be clear) better than "just run rsync from time to time", tracking the large files in it via git-annex. That's an exercise for another day, though, once I get more mileage with it. My blog pictures are copies, so I don't care much if they get lost, but the movies are primary online copies, and I don't want to re-dump the discs. Anyway, for later.

Migrating to git-annex

Migrating here means ending in a state where all large files are in git-annex, and the plain git repo is small. Just moving the files to git-annex at the current head doesn't remove them from history, so your git repository is still large; it won't grow in the future, but it remains at its old size (and contains the large files in its history).

In my mind, a nice migration would be: run a custom command, and all the history is migrated to git-annex, so I can go back in time and still use git-annex. I naïvely expected this would be easy and already available, only to find comments on the git-annex site with unsure git-filter-branch calls and some web discussions. This is the discussion on the git-annex website, but it didn't make me confident it would do the right thing.

But that discussion is now 8 years old. Surely in 2025, with git-filter-repo, it's easier? Maybe I'm missing something, but it is not. Not from the point of view of plain git (that part is easy), but because git-annex stores its data in git itself, so doing this properly across successive steps of a repo (when replaying the commits) is, I think, not well-defined behaviour.

So I was stuck here for a few days, until I got an epiphany: As I'm going to rewrite the repository, of course I'm keeping a copy of it from before git-annex. If so, I don't need the history, back in time, to be correct in the sense of being able to retrieve the binary files too. It just needs to be correct from the point of view of the actual Markdown and Haskell files that represent the "meat" of the blog.

This simplified the problem a lot. At first, I wanted to just skip these files, but this could also drop commits (git-filter-repo, by default, drops commits if they're empty), and removing the files loses information: when they were added, what the paths were, etc. So instead I came up with a rather clever idea, if I may say so: since git-annex replaces files with symlinks already, just replace the files with symlinks in the whole history, except that these symlinks are dangling (to represent the fact that the files are missing). One could also use empty files, but empty files are more "valid" in a sense than dangling symlinks, hence why I settled on those.

Doing this with git-filter-repo is easy, in newer versions, with the new --file-info-callback. Here is the simple code I used:

# Body of a git-filter-repo --file-info-callback: filename, mode, blob_id
# and value are provided by git-filter-repo as history is rewritten.
import os
import os.path
import pathlib

SKIP_EXTENSIONS = {'jpg', 'jpeg', 'png', 'pdf', 'woff', 'woff2'}
FILE_MODES = {b"100644", b"100755"}
SYMLINK_MODE = b"120000"

fas_string = filename.decode()
path = pathlib.PurePosixPath(fas_string)
ext = path.suffix.removeprefix('.')

# Keep anything that is not one of the large binary types, and anything
# that is not a regular file (e.g. entries that are already symlinks).
if ext not in SKIP_EXTENSIONS:
  return (filename, mode, blob_id)

if mode not in FILE_MODES:
  return (filename, mode, blob_id)

print(f"Replacing '{filename}' (extension '.{ext}') in {os.getcwd()}")

# Insert a new blob containing the symlink target and rewrite the entry
# as a (dangling) symlink pointing to it.
symlink_target = '/none/binary-file-removed-from-git-history'.encode()
new_blob_id = value.insert_file_with_contents(symlink_target)
return (filename, SYMLINK_MODE, new_blob_id)

This goes and replaces the files with a symlink to nowhere, whose target should explain why it's dangling. Later renames or moving the files around then work "naturally", as the rename/mv doesn't care about file contents. Then, when the filtering is done via:

git-filter-repo --file-info-callback <(cat ~/filter-big.py ) --force

It is easy to onboard to git annex:

For me it was easy as all such files were in a few directories, so just copying those directories back, a few git-annex add commands, and done.

Of course, then adding a few rsync remotes, git annex copy --to, and the repository was ready.
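A rough sketch of that sequence, with hypothetical directory names, remote name and rsync URL (and with encryption disabled purely to keep the example short):

git annex init "blog checkout"
cp -a ~/pre-annex-backup/images ~/pre-annex-backup/files .   # restore the large files
git annex add images files                                    # move them into the annex
git commit -m "Move large files to git-annex"
git annex initremote nas type=rsync rsyncurl=nas:/srv/annex encryption=none
git annex copy --to nas                                       # replicate the annexed content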

Well, I also found a bug in my own Hakyll setup: on a fresh clone, when the large files are just dangling symlinks, the builder doesn't complain, just ignores the images. Will have to fix.

Other resources

This is a blog that I read at the beginning, and I found it very useful as an intro: https://switowski.com/blog/git-annex/. It didn't help me understand how it works under the covers, but it is well written. The author does use the 'sync' command though, which is too magic for me, but also agrees about its complexity 😅

The proof is in the pudding

And now, for the actual first image to be added that never lived in the old plain git repository. It's not full-res/full-size; it's cropped a bit at the bottom.

Earlier in the year, I went to Paris for a very brief work trip, and I walked around a bit; it was more beautiful than what I remembered from way back. So a bit of a random selection of a picture, but here it is:

Un bateau sur la Seine

Enjoy!

11 Jun 2025 2:41pm GMT

John Goerzen: I Learned We All Have Linux Seats, and I’m Not Entirely Pleased

I recently wrote about How to Use SSH with FIDO2/U2F Security Keys, which I now use on almost all of my machines.

The last one that needed this was my Raspberry Pi hooked up to my DEC vt510 terminal and IBM mechanical keyboard. Yes I do still use that setup!

To my surprise, generating a key on it failed. I very quickly saw that /dev/hidraw0 had incorrect permissions, accessible only to root.

On other machines, it looks like this:

crw-rw----+ 1 root root 243, 16 May 24 16:47 /dev/hidraw16

And, if I run getfacl on it, I see:

# file: dev/hidraw16
# owner: root
# group: root
user::rw-
user:jgoerzen:rw-
group::---
mask::rw-
other::---

Yes, something was setting an ACL on it. Thus began the saga of figuring out what was doing that.

Firing up inotifywatch, I saw it was systemd-udevd or its udev-worker. But cranking up logging on that to maximum only showed me that uaccess was somehow doing this.

I started digging. uaccess turned out to be almost entirely undocumented. People say to use it, but there's no description of what it does or how. Its purpose appears to be to grant logged-in users access to devices by dynamically adding them to the devices' ACLs. OK, that's a nice goal, but why was machine A doing this and not machine B?
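On the machine where it works, the effect is roughly as if something ran the following for the logged-in user every time the device appears (a hypothetical illustration, using the device and user from the listing above):

setfacl -m u:jgoerzen:rw /dev/hidraw16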

I dug some more. I came across a hint that uaccess may only do that for a "seat". A seat? I've not heard of that in Linux before.

Turns out there's some information (older and newer) about this out there. Sure enough, on the machine with KDE, loginctl list-sessions shows me on seat0, but on the machine where I log in from ttyUSB0, it shows an empty seat.

But how to make myself part of the seat? I tried various udev rules to add the "seat" or "master-of-seat" tags, but nothing made any difference.

I finally gave up and did the old-fashioned rule to just make it work already:

TAG=="security-device",SUBSYSTEM=="hidraw",GROUP="mygroup"

I still don't know how to teach logind to add a seat for ttyUSB0, but oh well. At least I learned something. An annoying something, but hey.

This all had a laudable goal, but when there are so many layers of indirection, poorly documented, with poor logging, it gets pretty annoying.

11 Jun 2025 2:12pm GMT

Scarlett Gately Moore: KDE Application snaps 25.04.2 released!

KDE Mascot

Release notes: https://kde.org/announcements/gear/25.04.2/

Now available in the snap store!

Along with that, I have fixed some outstanding bugs:

Ark: can now open/save files on removable media

Kasts: Once again has sound

WIP: Updating Qt6 to 6.9 and frameworks to 6.14

Enjoy everyone!

Unlike our software, life is not free. Please consider a donation, thanks!

11 Jun 2025 1:14pm GMT

Freexian Collaborators: Monthly report about Debian Long Term Support, May 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In May, 22 contributors were paid to work on Debian LTS; their reports are available:

Evolution of the situation

In May, we released 54 DLAs.

The LTS Team was particularly active in May, publishing a higher than normal number of advisories, as well as helping with a wide range of updates to packages in stable and unstable, plus some other interesting work. We are also pleased to welcome several updates from contributors outside the regular team.

This month's contributions from outside the regular team include the libapache2-mod-auth-openidc update mentioned above, prepared by Moritz Schlarb (the maintainer of the package); the update of request-tracker4, prepared by Andrew Ruthven (the maintainer of the package); and the updates of openjdk-17 and openjdk-11, also noted above, prepared by Thorsten Glaser.

Additionally, LTS Team members contributed stable updates of the following packages:

Other contributions were also made by LTS Team members to packages in unstable:

Freexian, the entity behind the management of the Debian LTS project, has been working for some time now on the development of an advanced CI platform for Debian-based distributions, called Debusine. Recently, Debusine has reached a level of feature implementation that makes it very usable. Some members of the LTS Team have been using Debusine informally, and during May LTS coordinator Santiago Ruano Rincón has made a call for the team to help with testing of Debusine, and to help evaluate its suitability for the LTS Team to eventually begin using as the primary mechanism for uploading packages into Debian. Team members who have started using Debusine are providing valuable feedback to the Debusine development team, thus helping to improve the platform for all users. Actually, a number of updates, for both bullseye and bookworm, made during the month of May were handled using Debusine, e.g. rubygems's DLA-4163-1.

By the way, if you are a Debian Developer, you can easily test Debusine following the instructions found at https://wiki.debian.org/DebusineDebianNet.

DebConf, the annual Debian Conference, is coming up in July and, as is customary each year, the week preceding the conference will feature an event called DebCamp. The DebCamp week provides an opportunity for teams and other interested groups/individuals to meet together in person in the same venue as the conference itself, with the purpose of doing focused work, often called "sprints". LTS coordinator Roberto C. Sánchez has announced that the LTS Team is planning to hold a sprint primarily focused on the Debian security tracker and the associated tooling used by the LTS Team and the Debian Security Team.

Thanks to our sponsors

Sponsors that joined recently are in bold.

11 Jun 2025 12:00am GMT

Freexian Collaborators: Debian Contributions: Updated Austin, DebConf 25 preparations continue and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-05

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Updated Austin, by Colin Watson and Helmut Grohne

Austin is a frame stack sampling profiler for Python. It allows profiling Python applications without instrumenting them, albeit losing some accuracy in the process, and is the only one of its kind presently packaged for Debian. Unfortunately, it hadn't been uploaded in a while, and hence the last Python version it worked with was 3.8. We updated it to a current version and also dealt with a number of architecture-specific problems (such as unintended sign promotion, 64-bit time_t fallout and strictness due to -Wformat-security) in cooperation with upstream. With luck, it will migrate in time for trixie.

Preparing for DebConf 25, by Stefano Rivera and Santiago Ruano Rincón

DebConf 25 is quickly approaching, and the organization work doesn't stop. In May, Stefano continued supporting the different teams. Just to give a couple of examples, Stefano made changes to the DebConf 25 website to make BoF and sprint submissions public, so interested people can already know if a BoF or sprint for a given subject is planned, allowing coordination with the proposer; and to enhance how statistics are made public to help the work of the local team.

Santiago has participated in different tasks, including the logistics of the conference, like preparing more information about the public transportation that will be available. Santiago has also taken part in activities related to fundraising and reviewing more event proposals.

Miscellaneous contributions

11 Jun 2025 12:00am GMT

08 Jun 2025

feedPlanet Debian

Thorsten Alteholz: My Debian Activities in May 2025

Debian LTS

This was the hundred-and-thirty-first month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

I also continued my work on libxmltok and suricata. This month I also had to do some support work on seger, for example to inject packages newly needed for builds.

Debian ELTS

This month was the eighty-second ELTS month. During my allocated time I uploaded or worked on:

All packages I worked on have been on the list of longstanding packages. For example, espeak-ng has been on this list for more than nine months. I now understand that there is a reason why packages are on this list. Some parts of the software have been almost completely reworked, so the patches need a "reverse" rework. For some packages this is easy, but for others this rework needs quite some time. I also continued to work on libxmltok and suricata.

Debian Printing

Unfortunately I didn't find any time to work on this topic.

Debian Astro

This month I uploaded bugfix versions of:

Debian Mobcom

This month I uploaded bugfix versions of:

misc

This month I uploaded bugfix versions of:

Thanks a lot to the Release Team who quickly handled all my unblock bugs!

FTP master

It is this time of the year when just a few packages arrive in NEW: it is Hard Freeze. So I enjoy this period and basically just take care of kernels or other important packages. As people seem to be more interested in discussions than in fixing RC bugs, my period of rest seems to continue for a while. So thanks for all these valuable discussions, and really thanks to the few people who still take care of Trixie. This month I accepted 146 and rejected 10 packages. The overall number of packages that got accepted was 147.

08 Jun 2025 5:48pm GMT

Colin Watson: Free software activity in May 2025

My Debian contributions this month were all sponsored by Freexian. Things were a bit quieter than usual, as for the most part I was sticking to things that seemed urgent for the upcoming trixie release.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

After my appeal for help last month to debug intermittent sshd crashes, Michel Casabona helped me put together an environment where I could reproduce it, which allowed me to track it down to a root cause and fix it. (I also found a misuse of strlcpy affecting at least glibc-based systems in passing, though I think that was unrelated.)

I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent socket handling.

I fixed a reproducibility bug depending on whether passwd is installed on the build system, which would have affected security updates during the lifetime of trixie.

I backported openssh 1:10.0p1-5 to bookworm-backports.

I issued bookworm and bullseye updates for CVE-2025-32728.

groff

I backported a fix for incorrect output when formatting multiple documents as PDF/PostScript at once.

debmirror

I added a simple autopkgtest.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

I fixed problems building these packages reproducibly:

I backported fixes for some security vulnerabilities to unstable (since we're in freeze now, it's not always appropriate to upgrade to new upstream versions):

I fixed various other build/test failures:

I added non-superficial autopkgtests to these packages:

I packaged python-django-hashids and python-django-pgbulk, needed for new upstream versions of python-django-pgtrigger.

I ported storm to Python 3.14.

Science team

I fixed a build failure in apertium-oci-fra.

08 Jun 2025 12:20am GMT

07 Jun 2025

feedPlanet Debian

Evgeni Golov: show your desk - 2025 edition

Back in 2020 I posted about my desk setup at home.

Recently someone in our #remotees channel at work asked about WFH setups, and given that quite a few things have changed in mine, I thought it was time to post an update.

But first, a picture! [Image: standing desk with a monitor, laptop, etc.] (Yes, it's cleaner than usual, how could you tell?!)

desk

It's still the same Flexispot E5B, no change here. After 7 years (I bought mine in 2018) it still works fine. If I had to buy a new one, I'd probably get a four-legged one for more stability (they have become quite affordable now), but there is no immediate need for that.

chair

It's still the IKEA Volmar. Again, no complaints here.

hardware

Now here we finally have some updates!

laptop

A Lenovo ThinkPad X1 Carbon Gen 12, Intel Core Ultra 7 165U, 32GB RAM, running Fedora (42 at the moment).

It's connected to a Lenovo ThinkPad Thunderbolt 4 Dock. It just works™.

workstation

It's still the P410, but mostly unused these days.

monitor

An AOC U2790PQU 27" 4K. I'm running it at 150% scaling, which works quite decently these days (no comparison to when I got it).

speakers

As the new monitor didn't want to take the old Dell soundbar, I have upgraded to a pair of Alesis M1Active 330 USB.

They sound good and were not too expensive.

I had to fix the volume control after some time though.

webcam

It's still the Logitech C920 Pro.

microphone

The built in mic of the C920 is really fine, but to do conference-grade talks (and some podcasts 😅), I decided to get something better.

I got a FIFINE K669B, with a nice arm.

It's not a Shure, for sure, but does the job well and Christian was quite satisfied with the results when we recorded the Debian and Foreman specials of Focus on Linux.

keyboard

It's still the ThinkPad Compact USB Keyboard with TrackPoint.

I had to print a few fixes and replacement parts for it, but otherwise it's doing great.

Seems Lenovo stopped making those, so I really shouldn't break it any further.

mouse

Logitech MX Master 3S. The surface of the old MX Master 2 got very sticky at some point and it had to be replaced.

other

notepad

I'm still terrible at remembering things, so I still write them down in an A5 notepad.

whiteboard

I've also added a (small) whiteboard on the wall right of the desk, mostly used for long term todo lists.

coaster

Turns out Xeon-based coasters are super stable, so it lives on!

yubikey

Yepp, still a thing. Still USB-A because... reasons.

headphones

Still the Bose QC25, by now on the third set of ear cushions, but otherwise working great and the odd 15€ cushion replacement does not justify buying anything newer (which would have the same problem after some time, I guess).

I did add a cheap (~10€) Bluetooth-to-Headphonejack dongle, so I can use them with my phone too (shakes fist at modern phones).

And I do use the headphones more in meetings, as the Alesis speakers fill the room more with sound and thus sometimes produce a bit of an echo.

charger

The Bose need AAA batteries, and so do some other gadgets in the house, so there is a technoline BC 700 charger for AA and AAA on my desk these days.

light

Yepp, I've added an IKEA Tertial and an ALDI "face" light. No, I don't use them much.

KVM switch

I've "built" a KVM switch out of a USB switch, but given I don't use the workstation that often these days, the switch is also mostly unused.

07 Jun 2025 3:17pm GMT

06 Jun 2025

feedPlanet Debian

Reproducible Builds: Reproducible Builds in May 2025

Welcome to our 5th report from the Reproducible Builds project in 2025! Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please do visit the Contribute page on our website.

In this report:

  1. Security audit of Reproducible Builds tools published
  2. When good pseudorandom numbers go bad
  3. Academic articles
  4. Distribution work
  5. diffoscope and disorderfs
  6. Website updates
  7. Reproducibility testing framework
  8. Upstream patches

Security audit of Reproducible Builds tools published

The Open Technology Fund's (OTF) security partner Security Research Labs recently conducted an audit of some specific parts of tools developed by Reproducible Builds. This form of security audit, sometimes called a "whitebox" audit, is a form of testing in which auditors have complete knowledge of the item being tested. The auditors assessed the various codebases for resilience against hacking, with key areas including differential report formats in diffoscope, common client web attacks, command injection, privilege management, hidden modifications in the build process and attack vectors that might enable denials of service.

The audit focused on three core Reproducible Builds tools: diffoscope, a Python application that unpacks archives of files and directories and transforms their binary formats into human-readable form in order to compare them; strip-nondeterminism, a Perl program that improves reproducibility by stripping out non-deterministic information such as timestamps or other elements introduced during packaging; and reprotest, a Python application that builds source code multiple times in various environments in order to test reproducibility.

OTF's announcement contains more of an overview of the audit, and the full 24-page report is available in PDF form as well.


"When good pseudorandom numbers go bad"

Danielle Navarro published an interesting and amusing article on their blog, When good pseudorandom numbers go bad. Danielle sets the stage as follows:

[Colleagues] approached me to talk about a reproducibility issue they'd been having with some R code. They'd been running simulations that rely on generating samples from a multivariate normal distribution, and despite doing the prudent thing and using set.seed() to control the state of the random number generator (RNG), the results were not computationally reproducible. The same code, executed on different machines, would produce different random numbers. The numbers weren't "just a little bit different" in the way that we've all wearily learned to expect when you try to force computers to do mathematics. They were painfully, brutally, catastrophically, irreproducible different. Somewhere, somehow, something broke.

Thanks to David Wheeler for posting about this article on our mailing list.


Academic articles

There were two scholarly articles published this month that related to reproducibility:

Daniel Hugenroth and Alastair R. Beresford of the University of Cambridge in the United Kingdom and Mario Lins and René Mayrhofer of Johannes Kepler University in Linz, Austria published an article titled Attestable builds: compiling verifiable binaries on untrusted systems using trusted execution environments. In their paper, they:

present attestable builds, a new paradigm to provide strong source-to-binary correspondence in software artifacts. We tackle the challenge of opaque build pipelines that disconnect the trust between source code, which can be understood and audited, and the final binary artifact, which is difficult to inspect. Our system uses modern trusted execution environments (TEEs) and sandboxed build containers to provide strong guarantees that a given artifact was correctly built from a specific source code snapshot. As such it complements existing approaches like reproducible builds which typically require time-intensive modifications to existing build configurations and dependencies, and require independent parties to continuously build and verify artifacts.

The authors compare "attestable builds" with reproducible builds by noting an attestable build requires "only minimal changes to an existing project, and offers nearly instantaneous verification of the correspondence between a given binary and the source code and build pipeline used to construct it", and proceed by determining that "the overhead (42 seconds start-up latency and 14% increase in build duration) is small in comparison to the overall build time."


Timo Pohl, Pavel Novák, Marc Ohm and Michael Meier have published a paper called Towards Reproducibility for Software Packages in Scripting Language Ecosystems. The authors note that past research into Reproducible Builds has focused primarily on compiled languages and their ecosystems, with a further emphasis on Linux distribution packages:

However, the popular scripting language ecosystems potentially face unique issues given the systematic difference in distributed artifacts. This Systemization of Knowledge (SoK) [paper] provides an overview of existing research, aiming to highlight future directions, as well as chances to transfer existing knowledge from compiled language ecosystems. To that end, we work out key aspects in current research, systematize identified challenges for software reproducibility, and map them between the ecosystems.

Ultimately, the authors find that the literature is "sparse", focusing on few individual problems and ecosystems, and therefore identify space for more critical research.


Distribution work

In Debian this month:


Hans-Christoph Steiner of the F-Droid catalogue of open source applications for the Android platform published a blog post on Making reproducible builds visible. Noting that "Reproducible builds are essential in order to have trustworthy software", Hans also mentions that "F-Droid has been delivering reproducible builds since 2015". However:

There is now a "Reproducibility Status" link for each app on f-droid.org, listed on every app's page. Our verification server shows ✔️️ or 💔 based on its build results, where ✔️️ means our rebuilder reproduced the same APK file and 💔 means it did not. The IzzyOnDroid repository has developed a more elaborate system of badges which displays a ✅ for each rebuilder. Additionally, there is a sketch of a five-level graph to represent some aspects about which processes were run.

Hans compares the approach with projects such as Arch Linux and Debian that "provide developer-facing tools to give feedback about reproducible builds, but do not display information about reproducible builds in the user-facing interfaces like the package management GUIs."


Arnout Engelen of the NixOS project has been working on reproducing the minimal installation ISO image. This month, Arnout has successfully reproduced the build of the minimal image for the 25.05 release without relying on the binary cache. Work on also reproducing the graphical installer image is ongoing.


In openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.


Lastly in Fedora news, Jelle van der Waa opened issues tracking reproducible issues in Haskell documentation, Qt6 recording the host kernel and R packages recording the current date. The R packages can be made reproducible with packaging changes in Fedora.


diffoscope & disorderfs

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 295, 296 and 297 to Debian:

Chris also merged an impressive changeset from Siva Mahadevan to make disorderfs more portable, especially on FreeBSD. disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues []. This was then uploaded to Debian as version 0.6.0-1.

Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 296 [][] and 297 [][], and disorderfs to version 0.6.0 [][].


Website updates

Once again, there were a number of improvements made to our website this month including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility.

However, Holger Levsen posted to our mailing list this month in order to bring a wider awareness to funding issues faced by the Oregon State University (OSU) Open Source Lab (OSL). As mentioned on OSL's public post, "recent changes in university funding makes our current funding model no longer sustainable [and that] unless we secure $250,000 in committed funds, the OSL will shut down later this year". As Holger notes in his post to our mailing list, the Reproducible Builds project relies on hardware nodes hosted there. Nevertheless, Lance Albertson of OSL posted an update to the funding situation later in the month with broadly positive news.


Separate to this, there were various changes to the Jenkins setup this month, which is used as the backend driver for both tests.reproducible-builds.org and reproduce.debian.net, including:


Outside of this, a number of smaller changes were also made by Holger Levsen:

In addition, Jochen Sprickerhof made a series of changes related to reproduce.debian.net:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 Jun 2025 9:17pm GMT

Dirk Eddelbuettel: #49: The Two Cultures of Deploying Statistical Software

Welcome to post 49 in the R4 series.

The Two Cultures is a term first used by C.P. Snow in a 1959 speech and monograph focused on the split between humanities and the sciences. Decades later, the term was (quite famously) re-used by Leo Breiman in a (somewhat prophetic) 2001 article about the split between 'data models' and 'algorithmic models'. In this note, we argue that statistical computing practice and deployment can also be described via this Two Cultures moniker.

Referring to the term linking these foundational pieces is of course headline bait. Yet when preparing for the discussion of r2u in the invited talk in Mons (video, slides), it occurred to me that there is in fact a wide gulf between two alternative approaches of using R and, specifically, deploying packages.

On the one hand we have the approach described by my friend Jeff as "you go to the Apple store, buy the nicest machine you can afford, install what you need and then never ever touch it". A computer / workstation / laptop is seen as an immutable object where every attempt at change may lead to breakage, instability, and general chaos, and is hence best avoided. If you know Jeff, you know he exaggerates. Maybe only slightly though.

Similarly, an entire sub-culture of users striving for "reproducibility" (and sometimes also "replicability") does the same. This is for example evidenced by the popularity of package renv by Rcpp collaborator and pal Kevin. The expressed hope is that by nailing down a (sub)set of packages, outcomes are constrained to be unchanged. Hope springs eternal, clearly. (Personally, if need be, I do the same with Docker containers and their respective Dockerfile.)

On the other hand, 'rolling' is a fundamentally different approach. One (well known) example is Google building "everything at @HEAD". The entire (ginormous) code base is considered a mono-repo which at any point in time is expected to be buildable as is. All changes made are pre-tested to be free of side effects on other parts. This sounds hard, and likely is more involved than the alternative of a 'whatever works' approach of independent changes and just hoping for the best.

Another example is a rolling (Linux) distribution such as Debian. Changes are first committed to a 'staging' place (Debian calls this the 'unstable' distribution) and, if no side effects are seen, propagated after a fixed number of days to the rolling distribution (called 'testing'). With this mechanism, 'testing' should always be installable too. And based on the rolling distribution, at certain times (for Debian roughly every two years) a release is made from 'testing' into 'stable' (following more elaborate testing). The released 'stable' version is then immutable (apart from fixes for seriously grave bugs and of course security updates). So this provides the connection between frequent and rolling updates, and produces an immutable fixed set: a release.

This Debian approach has been influential for many other projects, including CRAN, as can be seen in aspects of its system providing a rolling set of curated packages. Instead of a staging area for all packages, extensive tests are made for candidate packages before adding an update. This aims to ensure quality and consistency, and has worked remarkably well. We argue that it has clearly contributed to the success and renown of CRAN.

Now, when accessing CRAN from R, we fundamentally have two accessor functions. But seemingly only one is widely known and used. In what we may call 'the Jeff model', everybody is happy to deploy install.packages() for initial installations.

That sentiment is clearly expressed by this bsky post:

One of my #rstats coding rituals is that every time I load a @vincentab.bsky.social package I go check for a new version because invariably it's been updated with 18 new major features 😆

And that is why we have two cultures.

Because some of us, yours truly included, also use update.packages() at recurring (frequent !!) intervals: daily or near-daily for me. The goodness and, dare I say, gift of packages is not limited to those by my pal Vincent. CRAN updates all the time, and updates are (generally) full of (usually excellent) changes, fixes, or new features. So update frequently! Doing (many but small) updates (frequently) is less invasive than (large, infrequent) 'waterfall'-style changes!

But the fear of change, or disruption, is clearly pervasive. One can only speculate why. Is the experience of updating so painful on other operating systems? Is it maybe a lack of exposure / tutorials on best practices?

These 'Two Cultures' coexist. When I delivered the talk in Mons, I briefly asked for a show of hands among all the R users in the audience to see who in fact does use update.packages() regularly. And maybe a handful of hands went up: surprisingly few!

Now back to the context of installing packages: Clearly 'only installing' has its uses. For continuous integration checks we generally install into ephemeral temporary setups. Some debugging work may be with one-off container or virtual machine setups. But all other uses may well be under 'maintained' setups. So consider calling update.packages() once in a while. Or even weekly or daily. The rolling feature of CRAN is a real benefit, and it is there for the taking and enrichment of your statistical computing experience.

So to sum up, the real power is to use install.packages() for initial installations, and update.packages() for recurring updates to keep the installation current.

For both tasks, relying on binary installations accelerates and eases the process. And where available, using binary installation with system-dependency support as r2u does makes it easier still, following the r2u slogan of 'Fast. Easy. Reliable. Pick All Three.' Give it a try!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

06 Jun 2025 1:35am GMT

05 Jun 2025

feedPlanet Debian

Matthew Garrett: How Twitter could (somewhat) fix their encrypted DMs

As I wrote in my last post, Twitter's new encrypted DM infrastructure is pretty awful. But the amount of work required to make it somewhat better isn't large.

When Juicebox is used with HSMs, it supports encrypting the communication between the client and the backend. This is handled by generating a unique keypair for each HSM. The public key is provided to the client, while the private key remains within the HSM. Even if you can see the traffic sent to the HSM, it's encrypted using the Noise protocol and so the user's encrypted secret data can't be retrieved.

But this is only useful if you know that the public key corresponds to a private key in the HSM! Right now there's no way to know this, but it gets worse: the client doesn't have the public key built into it; it's supplied as a response to an API request made to Twitter's servers. Even if the current keys are associated with the HSMs, Twitter could swap them out with ones that aren't, terminate the encrypted connection at their endpoint, and then fake your query to the HSM and get the encrypted data that way. Worse, this could be done for specific targeted users, without any indication to the user that this has happened, making it almost impossible to detect in general.

This is at least partially fixable. Twitter could prove to a third party that their Juicebox keys were generated in an HSM, and the key material could be moved into clients. This makes attacking individual users more difficult (the backdoor code would need to be shipped in the public client), but can't easily help with the website version[1] even if a framework exists to analyse the clients and verify that the correct public keys are in use.

It's still worse than Signal. Use Signal.

[1] Since they could still just serve backdoored Javascript to specific users. This is, unfortunately, kind of an inherent problem when it comes to web-based clients - we don't have good frameworks to detect whether the site itself is malicious.


05 Jun 2025 1:18pm GMT

Matthew Garrett: Twitter's new encrypted DMs aren't better than the old ones

(Edit: Twitter could improve this significantly with very few changes - I wrote about that here. It's unclear why they'd launch without doing that, since it entirely defeats the point of using HSMs)

When Twitter[1] launched encrypted DMs a couple of years ago, it was the worst kind of end-to-end encrypted: technically e2ee, but in a way that made it relatively easy for Twitter to inject new encryption keys and get everyone's messages anyway. It was also lacking a whole bunch of features such as "sending pictures", so the entire thing was largely a waste of time. But a couple of days ago, Elon announced the arrival of "XChat", a new encrypted message platform built on Rust with (Bitcoin style) encryption, whole new architecture. Maybe this time they've got it right?

tl;dr: no. Use Signal. Twitter can probably obtain your private keys, and they admit that they can MITM you and have full access to your metadata.

The new approach is pretty similar to the old one in that it's based on pretty straightforward and well tested cryptographic primitives, but merely using good cryptography doesn't mean you end up with a good solution. This time they've pivoted away from using the underlying cryptographic primitives directly and into higher level abstractions, which is probably a good thing. They're using Libsodium's boxes for message encryption, which is, well, fine? It doesn't offer forward secrecy (if someone's private key is leaked then all existing messages can be decrypted) so it's a long way from the state of the art for a messaging client (Signal's had forward secrecy for over a decade!), but it's not inherently broken or anything. It is, however, written in C, not Rust[2].

That's about the extent of the good news. Twitter's old implementation involved clients generating keypairs and pushing the public key to Twitter. Each client (a physical device or a browser instance) had its own private key, and messages were simply encrypted to every public key associated with an account. This meant that new devices couldn't decrypt old messages, and also meant there was a maximum number of supported devices and terrible scaling issues and it was pretty bad. The new approach generates a keypair and then stores the private key using the Juicebox protocol. Other devices can then retrieve the private key.

Doesn't this mean Twitter has the private key? Well, no. There's a PIN involved, and the PIN is used to generate an encryption key. The stored copy of the private key is encrypted with that key, so if you don't know the PIN you can't decrypt the key. So we brute force the PIN, right? Juicebox actually protects against that - before the backend will hand over the encrypted key, you have to prove knowledge of the PIN to it (this is done in a clever way that doesn't directly reveal the PIN to the backend). If you ask for the key too many times while providing the wrong PIN, access is locked down.

But this is true only if the Juicebox backend is trustworthy. If the backend is controlled by someone untrustworthy[3] then they're going to be able to obtain the encrypted key material (even if it's in an HSM, they can simply watch what comes out of the HSM when the user authenticates if there's no validation of the HSM's keys). And now all they need is the PIN. Turning the PIN into an encryption key is done using the Argon2id key derivation function, using 32 iterations and a memory cost of 16MB (the Juicebox white paper says 16KB, but (a) that's laughably small and (b) the code says 16 * 1024 in an argument that takes kilobytes), which makes it computationally and moderately memory expensive to generate the encryption key used to decrypt the private key. How expensive? Well, on my (not very fast) laptop, that takes less than 0.2 seconds. How many attempts do I need to crack the PIN? Twitter's chosen to fix that to 4 digits, so a maximum of 10,000. At 0.2 seconds per attempt that's roughly 2,000 seconds, or about 33 minutes, on a single core, so you aren't going to need many machines running in parallel to bring this down to a very small amount of time, at which point private keys can, to a first approximation, be extracted at will.

Juicebox attempts to defend against this by supporting sharding your key over multiple backends, and only requiring a subset of those to recover the original. Twitter does make use of this: it uses three backends and requires data from at least two, but all the backends used are under x.com so are presumably under Twitter's direct control. Trusting the keystore without needing to trust whoever's hosting it requires a trustworthy communications mechanism between the client and the keystore. If the device you're talking to can prove that it's an HSM that implements the attempt limiting protocol and has no other mechanism to export the data, this can be made to work. Signal makes use of something along these lines using Intel SGX for contact list and settings storage and recovery, and Google and Apple also have documentation about how they handle this in ways that make it difficult for them to obtain backed up key material. Twitter has no documentation of this, and as far as I can tell does nothing to prove that the backend is in any way trustworthy. (Edit to add: The Juicebox API does support authenticated communication between the client and the HSM, but that relies on you having some way to prove that the public key you're presented with corresponds to a private key that only exists in the HSM. Twitter gives you the public key whenever you communicate with them, so even if they've implemented this properly you can't prove they haven't made up a new key and MITMed you the next time you retrieve your key)

On the plus side, Juicebox is written in Rust, so Elon's not 100% wrong. Just mostly wrong.

But ok, at least you've got viable end-to-end encryption even if someone can put in some (not all that much, really) effort to obtain your private key and render it all pointless? Actually no, since you're still relying on the Twitter server to give you the public key of the other party and there's no out of band mechanism to do that or verify the authenticity of that public key at present. Twitter can simply give you a public key where they control the private key, decrypt the message, and then reencrypt it with the intended recipient's key and pass it on. The support page makes it clear that this is a known shortcoming and that it'll be fixed at some point, but they said that about the original encrypted DM support and it never was, so that's probably dependent on whether Elon gets distracted by something else again. And the server knows who and when you're messaging even if they haven't bothered to break your private key, so there's a lot of metadata leakage.

Signal doesn't have these shortcomings. Use Signal.

[1] I'll respect their name change once Elon respects his daughter

[2] There are implementations written in Rust, but Twitter's using the C one with these JNI bindings

[3] Or someone nominally trustworthy but who's been compelled to act against your interests - even if Elon were absolutely committed to protecting all his users, his overarching goals for Twitter require him to have legal presence in multiple jurisdictions that are not necessarily above placing employees in physical danger if there's a perception that they could obtain someone's encryption keys


05 Jun 2025 11:02am GMT