16 May 2025
Planet Debian
Michael Prokop: Grml 2025.05 – codename Nudlaug
Debian hard freeze on 2025-05-15? We bring you a new Grml release on top of that! 2025.05 🚀 - codename Nudlaug.
There's plenty of new stuff, check out our official release announcement for all the details. But I'd like to highlight one feature that I particularly like: SSH service announcement with Avahi. The grml-full flavor ships Avahi, and when you enable SSH, it automatically announces the SSH service on your local network. So when, for example, booting Grml with boot option `ssh=debian`, you should be able to log in to your Grml live system with `ssh grml@grml.local` and password 'debian':
% insecssh grml@grml.local
Warning: Permanently added 'grml.local' (ED25519) to the list of known hosts.
grml@grml.local's password:
Linux grml 6.12.27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.27-1 (2025-05-06) x86_64
Grml - Linux for geeks
grml@grml ~ %
Hint: grml-zshrc provides that useful shell alias `insecssh`, which is aliased to `ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"`. Using those options, you aren't storing the SSH host key of the (temporary) Grml live system (permanently) in your UserKnownHostsFile.
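If you aren't using grml-zshrc, the alias is easy to recreate yourself; this is a sketch equivalent to the definition quoted above:
alias insecssh='ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"'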
BTW, you can run `avahi-browse -d local _ssh._tcp -resolve -t` to discover the SSH services on your local network. 🤓
Happy Grml-ing!
16 May 2025 4:42pm GMT
Reproducible Builds (diffoscope): diffoscope 296 released
The diffoscope maintainers are pleased to announce the release of diffoscope version 296. This version includes the following changes:
[ Chris Lamb ]
* Don't rely on zipdetails' --walk functionality to be available; only add
that argument after testing for a new enough version.
(Closes: reproducible-builds/diffoscope#408)
* Disable and then re-enable failing on stable-bpo.
* Update copyright years.
[ Omair Majid ]
* Add NuGet package support.
You can find out more by visiting the project homepage.
16 May 2025 12:00am GMT
15 May 2025
Planet Debian
Yves-Alexis Perez: New laptop: Lenovo Thinkpad X13 Gen 5
After more than ten years on my trusted X250, and with a lot of financial help from Debian (which I really thank, more on that later), I finally jumped on a new ThinkPad, an X13 Gen 5.
The migration path was really easy: I'm doing daily backups with borg of the whole filesystem on an encrypted USB drive, so I just had to boot a live USB key on the new laptop, plug in the USB drive, create the partitioning (encryption, LVM etc.) and then run borg extract. Since I'm using LABELs in the various fstab entries I didn't have much to change.
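For the curious, the restore boils down to something like the following sketch (device names, repository path and archive name are just illustrative placeholders):
cryptsetup open /dev/sdX1 borgbackup
mount /dev/mapper/borgbackup /mnt/backup
cd /target  # the freshly partitioned root filesystem
borg extract /mnt/backup/repo::daily-latest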
I actually had a small hiccup because my daily backup scripts used ProtectKernelModules, which, besides preventing modules from being loaded into the running kernel, also prevents access to /usr/lib/modules. So when restoring I didn't have any modules for the installed kernels. No big deal, I reinstalled the kernel package from the chroot and it worked just fine.
All in all it was pretty smooth.
I've started a page for the X13G5 similar to the X250 one, but honestly I don't think I'll have to document a lot of stuff, because everything basically works out of the box. It's not really a surprise: we've come a long way since 2015, Linux kernels are really tested on a lot of hardware these days (including laptops), and Intel laptops are the most standard stuff you can find. I guess it's still rocky for ARM64 laptops (and especially Apple hardware), but the point was less to do porting work for Debian and rather to be more efficient with the current stuff I maintain (and sometimes struggle with).
As said above, the laptop has been funded by Debian, and I really thank the DPL and the Debian France treasurer for authorizing it and being really fast with the reimbursement.
I had already posted a long time ago about hardware funding for Debian developers. It took me quite a while, but I finally managed to ask for help because I couldn't afford the hardware at this point and it was becoming problematic. This is not something which should be done lightly (Debian wouldn't have the funds), but it is definitely something which should be done if needed. Don't hesitate to ask your fellow Debian developers for advice on this.
15 May 2025 8:19pm GMT
14 May 2025
Planet Debian
Jonathan McDowell: Local Voice Assistant Step 3: A Detour into Tensorflow
To build our local voice satellite on a Debian system rather than using the ATOM Echo device we need something that can handle the wake word component; the piece that means we only send audio to the Home Assistant server for processing by whisper.cpp when we've detected someone is trying to talk to us.
openWakeWord seems to be one of the better ways to do this, and is well supported. However. It relies on TensorFlow Lite (now LiteRT) which is a complicated mess of machine learning code. tflite-runtime is available from PyPI, but that's prebuilt and we're trying to avoid that.
Despite initial impressions that building TensorFlow would be quite complicated to deal with - Bazel is an immediate warning - it turns out to be incredibly simple to build your own .deb:
$ wget -O tensorflow-v2.15.1.tar.gz https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.15.1.tar.gz
…
$ tar -axf tensorflow-v2.15.1.tar.gz
$ cd tensorflow-2.15.1/
$ BUILD_NUM_JOBS=$(nproc) BUILD_DEB=y tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
…
$ find . -name '*.deb'
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime-dbgsym_2.15.1-1_amd64.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb
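Installing the freshly built runtime package is then a single apt call (the dbgsym package is optional):
$ sudo apt install ./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb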
This is hiding an awful lot of complexity, however. In particular the number of 3rd party projects that are being downloaded in the background (and compiled, to be fair, rather than using binary artefacts).
We can build the main C++ wrapper .so directly with cmake, allowing us to investigate a bit more:
mkdir tf-build
cd tf-build/
cmake \
-DCMAKE_C_FLAGS="-I/usr/include/python3.11" \
-DCMAKE_CXX_FLAGS="-I/usr/include/python3.11" \
../tensorflow-2.15.1/tensorflow/lite/
cmake --build . -t _pywrap_tensorflow_interpreter_wrapper
…
[100%] Built target _pywrap_tensorflow_interpreter_wrapper
$ ldd _pywrap_tensorflow_interpreter_wrapper.so
linux-vdso.so.1 (0x00007ffec9588000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f22d00d0000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f22cf600000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f22d00b0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22cf81f000)
/lib64/ld-linux-x86-64.so.2 (0x00007f22d01d1000)
Looking at the output we can see that pthreadpool, FXdiv, FP16 + PSimd are all downloaded, and seem to have ways to point to a local copy. That seems positive.
However, there are even more hidden dependencies, which we can see if we look in the _deps/ subdirectory of the build tree. These don't appear to be as easy to override, and not all of them have packages already in Debian.
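A plain directory listing gives a quick inventory of what was fetched there (the exact entries vary by TensorFlow version):
$ ls tf-build/_deps/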
First, the ones that seem to be available: abseil-cpp, cpuinfo, eigen, farmhash, flatbuffers, gemmlowp, ruy + xnnpack (lots of credit to the Debian Deep Learning Team for these, and in particular Mo Zhou).
Dependencies I couldn't see existing packages for are: OouraFFT, ml_dtypes & neon2sse.
At this point I just used the package I built with the initial steps above. I live in hope someone will eventually package this properly for Debian, or that I'll find the time to try and help out, but that's not going to be today.
I wish upstream developers made it easier to use system copies of their library dependencies. I wish library developers made it easier to build and install system copies of their work. pkgconf is not new tech these days (pkg-config appears to date back to 2000), and has decent support in CMake. I get that there can be issues with incompatibilities even in minor releases, or awkwardness in doing builds of multiple connected projects, but at least give me the option to do so.
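For example, when a dependency does ship a .pc file (as flatbuffers does), resolving the flags for a system copy is a one-liner with pkgconf; a sketch, assuming the relevant -dev package is installed:
$ pkgconf --cflags --libs flatbuffers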
14 May 2025 5:39pm GMT
Sven Hoexter: Disable Firefox DRM Plugin Infobar
... or how I spent my lunch break today.
An increasing amount of news outlets (hello heise.de) start to embed bullshit which requires DRM playback. Since I keep that disabled I now get an infobar that tells me that I need to enable it for this page. Pretty useless and a pain in the back because it takes up screen space. Here's the quick way to get rid of it:
- Go to about:config and turn on toolkit.legacyUserProfileCustomizations.stylesheets.
- Go to your Firefox profile folder (e.g. ~/.mozilla/firefox/<random-value>.default/) and mkdir chrome && touch chrome/userChrome.css.
- Add the following to your userChrome.css file:
.infobar[value="drmContentDisabled"] { display: none !important; }
- Restart Firefox and read news again with full screen space.
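The file-related steps as a shell sketch (the about:config toggle still has to be done by hand, and the profile directory name will differ):
cd ~/.mozilla/firefox/<random-value>.default/
mkdir -p chrome
printf '.infobar[value="drmContentDisabled"] { display: none !important; }\n' >> chrome/userChrome.css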
14 May 2025 10:59am GMT
Jonathan Dowland: Orbital
I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album.
I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld).
As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish 30 something "compilation" (of sorts, previously written about).
-
The Naked and the Dead (1992)
From an early EP, Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording, "Deepest", featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended.
Anyway I've now written more about a song I haven't recommended than the one I did…
-
Remind (1993)
From the Brown Album. I first heard this as the encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on fan site Loopz. Remarkably, 21 years on, it's still there.
In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind)
-
You Lot (2004)
From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.
-
Beached (2000)
Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo DiCaprio, sampled from the movie.
-
Spare Parts Express (1999)
Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up; ostensibly for good, before regrouping 8 years later.
Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this, I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today; here's a rundown of Newsround themes over time).
-
Attached (1994)
This originally debuted on a Peel session before appearing on the subsequent album Snivilisation a few months later. An album closer, and a good come-down song to close this list.
14 May 2025 10:41am GMT
Evgeni Golov: running modified containers with podman
Everybody (who runs containers) knows this situation: you've been running happycontainer:stable for a while and it's been great, but now something external changed and you need to adjust the code while there is still no release with the patch.
I've encountered exactly this when our Home-Assistant stopped showing the presence of our cat correctly, but we've also been discussing this at work recently.
Now the most obvious (to me?) solution would be to build a new container, based on the original one, and perform the modifications at build time. Something like this:
FROM happycontainer:stable
RUN curl … | patch -p1
But that's not interactive, and if you don't have a patch readily available, that's not what you want. (And I'll save you the idea of RUNing sed and friends to alter files!)
You could run vim inside the container, but that requires vim to be installed there in the first place. And a reasonable configuration. And…
Well, turns out podman can mount the root fs of a running container.
[root@sai ~]# podman mount homeassistant
/var/lib/containers/storage/overlay/f3ac502d97b5681989dff
And if you're running as non-root, you'll get an error:
[container@sai ~]$ podman mount homeassistant
Error: cannot run command "podman mount" in rootless mode, must execute `podman unshare` first
Luckily the solution is in the error message: use podman unshare.
[container@sai ~]$ podman unshare
[root@sai ~]# podman mount homeassistant
/home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged
So in both cases (root and rootless) we get a path, which is the mounted root fs, and we can edit things in there as we like.
[root@sai ~]# vi /home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged/usr/src/homeassistant/homeassistant/components/surepetcare/binary_sensor.py
Once done, the container can be unmounted again, and the namespace left:
[root@sai ~]# podman umount homeassistant
homeassistant
[root@sai ~]# exit
[container@sai ~]$
At this point we have modified the code inside the container, but the running process is still using the old code. If we restart the container now to restart the process, our changes will be lost.
Instead, we can commit the changes as a new layer and tag the result.
[container@sai ~]$ podman commit homeassistant docker.io/homeassistant/home-assistant:stable
And now, when we restart the container, it will use the new code with our changes 🎉
[container@sai ~]$ systemctl --user restart homeassistant
Is this the best workflow you can get? Probably not. Does it work? Hell yeah!
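For reference, the whole rootless workflow condensed into one session (a sketch; the edited path is the one from this example):
[container@sai ~]$ podman unshare
[root@sai ~]# mnt=$(podman mount homeassistant)
[root@sai ~]# vi "$mnt/usr/src/homeassistant/homeassistant/components/surepetcare/binary_sensor.py"
[root@sai ~]# podman umount homeassistant
[root@sai ~]# exit
[container@sai ~]$ podman commit homeassistant docker.io/homeassistant/home-assistant:stable
[container@sai ~]$ systemctl --user restart homeassistant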
14 May 2025 8:54am GMT
13 May 2025
Planet Debian
Ben Hutchings: Report for Debian BSP near Leuven in April 2025
On 26th and 27th April we held a Debian bug-squashing party near Leuven, Belgium. Several longstanding and new Debian contributors gathered to work through some of the highest priority bugs affecting the upcoming release of Debian 13 "trixie".
We were hosted by the Familia community centre in Tildonk. As this venue currently does not have an Internet connection, we brought a mobile hotspot and a local Debian mirror.
In attendance were:
- Debian Developers: Ben Hutchings, Nattie Mayer-Hutchings, Kurt Roeckx, and Geert Stappers
- New contributors: Yüce Kürüm, Louis Renuart, Arnout Vandecappelle
The new contributors were variously using Arch, Fedora, and Ubuntu, and the DDs spent some time setting them up with Debian development environments.
The bugs we worked on included:
- #994510: libunwind8 abuses setcontext() causing SIGSEGV on i386 with glibc >= 2.32: Kurt added a patch, though it seems that a different change is needed.
- #1016936: dwz: Unknown debugging section .debug_addr causes some builds to fail: Ben reduced the severity.
- #1063664: gcc-13-cross: file conflicts between gnat-13-<triplet> and gnat-{9,10}-<triplet>: Kurt looked at this and questioned the explanation for reopening this bug.
- #1060960: libslf4j-java: FTBFS: make: *** [debian/rules:4: build] Error 25: Ben identified that there have been 2 separate test regressions, and added a patch for the first of them.
- #1064003: src:cross-toolchain-base: unsatisfied build dependency in testing: linux-source-6.5 (>= 6.5.8): Ben closed this as already fixed.
- #1072167: grub-pc-dbg: newly-added symbol file "…" does not provide any symbols: Ben reproduced this and added a patch.
- #1076350: Segfault with shared libuv on x86 (CVE-2025-47153): Kurt asked the maintainer for clarification of the status.
- #1078608: apt update silently leaves old index data: Ben reproduced this (again), and identified that it may be triggered by PackageKit.
- #1089192: golang-golang-x-net: FTBFS: FAIL: TestNumICANNRules: Yüce looked at this and wrote a patch. However, the maintainers took a different approach to fix this.
- #1089432: shim: Supporting rootless builds by default: Kurt asked the maintainer for a status update.
- #1091668: debian-installer: Explicitly declare requirement for root: Louis and Kurt looked at this. Kurt asked the maintainer for clarification of the status.
- #1084066: amdgcn-tools: Please upgrade build-dep to llvm/clang 18 or 19 and #1092643: llvm-toolchain-19: readd gcn targets: Kurt and Arnout looked at these. Arnout added information about the build failure, the relevant upstream changes, and an upstream bug report.
- #1095376: refpolicy: FTBFS: make[2]: *** [Rules.modular:230: validate] Segmentation fault: Arnout looked at this and was not able to reproduce it, but noted that it may actually be caused by a bug in swig.
- #1093160: rsync: failed verification - update discarded: Ben reduced the severity.
- #1099013: emacs-gtk: hangs compositor when used under Wayland: Louis did a lot of work to reproduce this, but the bug was not reproducible.
- #1104169: wish: adduser _radvd on new installs and #1104229: wish: add systemd unit: Geert worked on and committed fixes for these bugs.
- #1100699: screen: hardcopy and screen-exchange are insecure by default: Yüce and Ben looked at this. Ben started a discussion of what changes would be appropriate to fix it.
- #1100981: libmlir-19 fails to coinstall: Arnout and Kurt looked at this, and pointed out an existing MR that should fix it and some other packages that are similarly affected.
13 May 2025 8:19pm GMT
Ravi Dwivedi: KDE India Conference 2025
Last month, I attended the KDE India conference in Gandhinagar, Gujarat from the 4th to the 6th of April. I made up my mind to attend when Sahil told me about his plans to attend and give a talk.
A day after my talk submission, the organizer Bhushan contacted me on Matrix and informed me that my talk had been accepted. I was also informed that KDE will cover my travel and accommodation expenses. So, I planned to attend the conference at this point. I am a longtime KDE user, so why not ;)
I arrived in Ahmedabad, the twin city of Gandhinagar, a day before the conference. The first thing that struck me as soon as I came out of the Ahmedabad airport was the heat. I felt as if I was being cooked, exactly how Bhushan put it earlier in the group chat. I took a taxi to get to my hotel, which was close to the conference venue.
Later that afternoon, I met Bhushan and Joseph. Joseph lived in Germany. Bhushan was taking him to get a SIM card, so I tagged along and got to roam around. Joseph was unsure about where to go after the conference, so I asked him what he wanted out of his trip and had conversations along that line.
Later, Vishal convinced him to go to Lucknow. Since he was adamant about taking the train, I booked a Tatkal train ticket for him to Lucknow. He was curious about how Tatkal booking works and watched me in amusement while I was booking the ticket.
The 4th of April marked the first day of the conference, with around 25 attendees. Bhushan started the day with an overview of KDE conferences in India, followed by Vishal, who discussed FOSS United's activities. After lunch, Joseph gave an overview of his campaign to help people switch from Windows to GNU/Linux for environmental and security reasons. He continued his session in detail the next day.

Conference hall
A key takeaway for me from Joseph's session was the idea pointed out by Adwaith: marketing GNU/Linux as a cheap alternative may not attract as much attention as marketing it as a status symbol. He gave the example of how the Tata Nano didn't do well in the Indian market due to being perceived as a poor person's car.
My talk was scheduled for the evening of the first day. I hadn't prepared any slides because I wanted to make my session interactive. During my talk, I did an activity with the attendees to demonstrate the federated nature of XMPP messaging, of which Prav is a part. After the talk, I got a lot of questions, signalling engagement. The audience was cooperative (just like Prav ;)), contrary to my expectations (I thought they would be tired and sleepy).
On the third day, I did a demo on editing OpenStreetMap (referred to as "OSM" in short) using the iD editor. It involved adding points to OSM based on the students' suggestions. Since my computer didn't have an HDMI port, I used Subin's computer, and he logged into his OSM account for my session. Therefore, any mistakes I made will be under Subin's name. :)
On the third day, I attended Aaruni's talk about backing up a GNU/Linux system. This was the talk that resonated with me the most. He suggested formatting the system with the btrfs file system during the installation, which helps in taking snapshots of the system and provides an easy way to roll back to a previous version if, for example, a file is accidentally deleted. I have tried many backup techniques, including this one, but I never tried backing up on the internal disk. I'll certainly give this a try.
A conference is not only about the talks; that's why we had a Prav table as well ;) Just kidding. What I really mean is that a conference is more about interactions than talks. Since the conference was a three-day affair, attendees got plenty of time to bond and share ideas.

Prav stall at the conference

Conference group photo
After the conference, Bhushan took us to Adalaj Stepwell, an attraction near Gandhinagar. Upon entering the complex, we saw a park where there were many langurs. Going further, there were stairs that led down to a well. I guess this is why it is called a stepwell.

Adalaj Stepwell
Later that day, we had Gujarati Thali for dinner. It was an all-you-can-eat buffet and was reasonably priced at 300 rupees per plate. Aamras (mango juice) was the highlight for me. This was the only time we had Gujarati food during this visit. After dinner, Aaruni dropped Sahil and me off at the airport. The hospitality was superb - for instance, in addition to Aaruni dropping us off, Bhushan also picked up some of the attendees from the airport.
Finally, I would like to thank KDE for sponsoring my travel and accommodation costs.
Let's wrap up this post here and meet you in the next one.
Thanks to contrapunctus and Joseph for proofreading.
13 May 2025 5:58pm GMT
Sergio Talens-Oliag: Running dind with sysbox
When I configured forgejo-actions I used a docker-compose.yaml file to execute the runner and a dind container configured to run using privileged mode to be able to build images with it; as mentioned on my post about my setup, the use of the privileged mode is not a big issue for my use case, but it reduces the overall security of the installation.
On a work chat the other day someone mentioned that the GitLab documentation about using kaniko says it is no longer maintained (see the kaniko issue #3348), so we should look into alternatives for kubernetes clusters.
I never liked kaniko too much, but it works without privileged mode and does not need a daemon, which is a good reason to use it; but if it is deprecated it makes sense to look into alternatives, and today I looked into some of them to use with my forgejo-actions setup.
I was going to try buildah and podman, but it seems that they need adjustments on the systems running them:
- When I tried to use buildah inside a docker container in Ubuntu I found the problems described on the buildah issue #1901, so I moved on.
- Reading the podman documentation I saw that I need to export the fuse device to run it inside a container and, as I found another option, I also skipped it.
As my runner was already configured to use dind I decided to look into sysbox as a way of removing the privileged flag to make things more secure while keeping the same functionality.
Installing the sysbox package
As I use Debian and Ubuntu systems I used the .deb packages distributed from the sysbox release page to install it (in my case I used the one from the 0.6.7 version).
On the machine running forgejo (a Debian 12 server) I downloaded the package, stopped the running containers (that is needed to install the package, and the only ones running were the ones started by the docker-compose.yaml file) and installed the sysbox-ce_0.6.7.linux_amd64.deb package using dpkg.
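As a sketch, the install steps on the server look like this (assuming the docker compose plugin; use docker-compose if that is what you have):
❯ docker compose down
❯ sudo dpkg -i sysbox-ce_0.6.7.linux_amd64.deb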
Updating the docker-compose.yaml file
To run the dind container without setting the privileged mode we set sysbox-runc as the runtime on the dind container definition and set the privileged flag to false (it is the same as removing the key, as it defaults to false):
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -2,7 +2,9 @@ services:
dind:
image: docker:dind
container_name: 'dind'
- privileged: 'true'
+ # use sysbox-runc instead of using privileged mode
+ runtime: 'sysbox-runc'
+ privileged: 'false'
command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
restart: 'unless-stopped'
volumes:
Testing the changes
After applying the changes to the docker-compose.yaml file we start the containers and, to test things, we re-run previously executed jobs to see if things work as before.
In my case I re-executed the build-image-from-tag workflow #18 from the oci project and everything worked as expected.
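We can also inspect the running container to confirm the new settings took effect; a sketch, where we would expect to see the sysbox runtime and no privileged flag:
❯ docker inspect --format '{{.HostConfig.Runtime}} privileged={{.HostConfig.Privileged}}' dind
sysbox-runc privileged=false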
Conclusion
For my current use case (docker + dind) it seems that sysbox is a good solution, but I'm not sure if I'll be installing it on kubernetes anytime soon unless I find a valid reason to do it (last time we talked about it my co-workers said that they are evaluating buildah and podman for kubernetes and probably we will use them to replace kaniko in our gitlab-ci pipelines, and for those tools the use of sysbox seems like overkill).
13 May 2025 5:45pm GMT
12 May 2025
Planet Debian
Reproducible Builds: Reproducible Builds in April 2025
Welcome to our fourth report from the Reproducible Builds project in 2025. These monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. Lastly, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Table of contents:
- reproduce.debian.net
- Fifty Years of Open Source Software Supply Chain Security
- 4th CHAINS Software Supply Chain Workshop
- Mailing list updates
- Canonicalization for Unreproducible Builds in Java
- OSS Rebuild adds new TUI features
- Distribution roundup
- diffoscope & strip-nondeterminism
- Website updates
- Reproducibility testing framework
- Upstream patches
reproduce.debian.net
The last few months have seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.
This month, however, we are pleased to announce that reproduce.debian.net now tests all the Debian trixie architectures except s390x and mips64el.
The ppc64el architecture was added through the generous support of Oregon State University Open Source Laboratory (OSUOSL), and we can support the armel architecture thanks to CodeThink.
Fifty Years of Open Source Software Supply Chain Security
Russ Cox has published a must-read article in ACM Queue on Fifty Years of Open Source Software Supply Chain Security. Subtitled, "For decades, software reuse was only a lofty goal. Now it's very real.", Russ' article goes on to outline the history and original goals of software supply-chain security in the US military in the early 1970s, all the way to the XZ Utils backdoor of 2024. Through that lens, Russ explores the problem and how it has changed, and hasn't changed, over time.
He concludes as follows:
We are all struggling with a massive shift that has happened in the past 10 or 20 years in the software industry. For decades, software reuse was only a lofty goal. Now it's very real. Modern programming environments such as Go, Node and Rust have made it trivial to reuse work by others, but our instincts about responsible behaviors have not yet adapted to this new reality.
We all have more work to do.
4th CHAINS Software Supply Chain Workshop
Convened as part of the CHAINS research project at the KTH Royal Institute of Technology in Stockholm, Sweden, the 4th CHAINS Software Supply Chain Workshop occurred during April. During the workshop, there were a number of relevant sessions, including:
- Signature, Attestations and Reproducible Builds
- Does Functional Package Management Enable Reproducible Builds at Scale?
- Causes and Mitigations of Unreproducible Builds in Java [paper]
- Fixing Breaking Dependency Updates Using LLMs
- The caveats of vulnerability analysis
- maven-lockfile (Lockfiles for Java and Maven)
- observer (Generating SBOMs for C/C++)
- dirty-waters (Transparency checks for software supply chains)
- A supply chain competition. Martin Schwaighofer, the winner, created a recap video (20m43s).
- Finally, 8 posters on dependency introspection, diverse double compilation, dependency management, VEX and SBOM.
The full listing of the agenda is available on the workshop's website.
Mailing list updates
On our mailing list this month:
- Luca DiMaio of Chainguard posted to the list reporting that they had successfully implemented reproducible filesystem images with both ext4 and an EFI system partition. They go on to list the various methods, and the thread generated at least fifteen replies.
- David Wheeler announced that the OpenSSF is building a "glossary" of sorts in order that they "consistently use the same meaning for the same term" and, moreover, that they have drafted a definition for 'reproducible build'. The thread generated a significant number of replies on the definition, leading to a potential update to the Reproducible Builds' own definition.
- Lastly, kpcyrd posted to the list with a timely reminder and update on their repro-env tool. As first reported in our July 2023 report, kpcyrd mentions that: "My initial interest in reproducible builds was 'how do I distribute pre-compiled binaries on GitHub without people raising security concerns about them'. I've cycled back to this original problem about 5 years later and built a tool that is meant to address this. […]"
Canonicalization for Unreproducible Builds in Java
Aman Sharma, Benoit Baudry and Martin Monperrus have published a new scholarly study related to reproducible builds within Java. Titled Canonicalization for Unreproducible Builds in Java, the article's abstract is as follows:
[…] Achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central and we develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm respectively. Finally, we present Chains-Rebuild, a tool that raises reproducibility success from 9.48% to 26.89% on 12,283 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.
A full PDF of their article is available from arXiv.
OSS Rebuild adds new TUI features
OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io and npm registries) and publish signed attestations and build definitions for public use.
OSS Rebuild ships a text-based user interface (TUI) for viewing, launching, and debugging rebuilds. While previously requiring ownership of a full instance of OSS Rebuild's hosted infrastructure, the TUI now supports a fully local mode of build execution and artifact storage. Thanks to Giacomo Benedetti for his usage feedback and work to extend the local-only development toolkit.
Another feature added to the TUI was an experimental chatbot integration that provides interactive feedback on rebuild failure root causes and suggests fixes.
Distribution roundup
In Debian this month:
- Roland Clobus posted another status report on reproducible ISO images on our mailing list this month, with the summary that "all live images build reproducibly from the online Debian archive".
- Debian developer Simon Josefsson published another two reproducibility-related blog posts this month, the first on the topic of Verified Reproducible Tarballs. Simon sardonically challenges the reader as follows: "Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days?" After that, they also published a blog post on Building Debian in a GitLab Pipeline using their multi-stage rebuild approach.
- Roland also posted to our mailing list to highlight that "there is now another tool in Debian that generates reproducible output, equivs". This is a tool to create trivial Debian packages that might Depend on other packages. As Roland writes, "building the [equivs] package has been reproducible for a while, [but] now the output of the [tool] has become reproducible as well".
- Lastly, 9 reviews of Debian packages were added, 10 were updated and 10 were removed this month, adding to our extensive knowledge about identified issues.
The IzzyOnDroid Android APK repository made more progress in April. Thanks to funding by NLnet and Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in "less than 5 minutes". This currently supports Debian-based systems, but support for RPM-based systems is incoming.
- The rbuilder_setup tool can now set up the entire framework within less than five minutes. The process is configurable, too, so everything from "just the basics to verify builds" up to a fully-fledged RB environment is possible.
- This tool works on Debian, RedHat and Arch Linux, as well as their derivatives. The project has received successful reports from Debian, Ubuntu, Fedora and some Arch Linux derivatives so far.
- Documentation on how to work with reproducible builds (making apps reproducible, debugging unreproducible packages, etc.) is available in the project's wiki page.
- Future work is also in the pipeline, including documentation, guidelines and helpers for debugging.
NixOS defined an Outreachy project for improving build reproducibility. In the application phase, NixOS saw some strong candidates providing contributions, both on the NixOS side and upstream: guider-le-ecit analyzed a libpinyin
issue. Tessy James fixed an issue in arandr
and helped analyze one in libvlc
that led to a proposed upstream fix. Finally, 3pleX fixed an issue which was accepted in upstream kitty
, one in upstream maturin
, one in upstream python-sip
and one in the Nix packaging of python-libbytesize
. Sadly, the funding for this internship fell through, so NixOS were forced to abandon their search.
Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.
diffoscope & strip-nondeterminism
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading a number of versions to Debian:
- Use the --walk argument over the potentially dangerous alternative --scan when calling out to zipdetails(1). […]
- Correct a longstanding issue where many >-based version tests used in conditional fixtures were broken. This was used to ensure that specific tests were only run when the version on the system was newer than a particular number. Thanks to Colin Watson for the report (Debian bug #1102658) […]
- Address a long-hidden issue in the test_versions testsuite as well, where we weren't actually testing the greater-than comparisons mentioned above, as it was masked by the tests for equality. […]
- Update copyright years. […]
In strip-nondeterminism, however, Holger Levsen updated the Continuous Integration (CI) configuration in order to use the standard Debian pipelines via debian/salsa-ci.yml instead of using .gitlab-ci.yml. […]
Website updates
Once again, there were a number of improvements made to our website this month including:
- Aman Sharma added OSS-Rebuild's stabilize tool to the Tools page. […][…]
- Chris Lamb added a configure.ac (GNU Autotools) example for using SOURCE_DATE_EPOCH. […]. Chris also updated the SOURCE_DATE_EPOCH snippet and moved the archive metadata to a more suitable location. […]
- Denis Carikli added GNU Boot to our ever-evolving Projects page.
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In April, a number of changes were made by Holger Levsen, including:
- reproduce.debian.net-related:
  - Add armel.reproduce.debian.net to support the armel architecture. […][…]
  - Add a new ARM node, codethink05. […][…]
  - Add ppc64el.reproduce.debian.net to support testing of the ppc64el architecture. […][…][…]
  - Improve the reproduce.debian.net front page. […][…]
  - Make various changes to the ppc64el nodes. […][…][…][…]
  - Make various changes to the arm64 and armhf nodes. […][…][…][…]
  - Various changes related to the rebuilderd-worker entry point. […][…][…]
  - Create and deploy a pkgsync script. […][…][…][…][…][…][…][…]
  - Fix the monitoring of the riscv64 architecture. […][…]
  - Make a number of changes related to starting the rebuilderd service. […][…][…][…]
- Backup-related:
- Misc:
In addition:
- Jochen Sprickerhof fixed the riscv64 host names […] and requested access to all the rebuilderd nodes […].
- Mattia Rizzolo updated the self-serve rebuild scheduling tool, replacing the deprecated "SSO"-style authentication with OpenIDC, which authenticates against salsa.debian.org. […][…][…]
- Roland Clobus updated the configuration for the osuosl3 node to designate 4 workers for bigger builds. […]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
- Bernhard M. Wiedemann
- Chris Hofstaedtler:
  - #1104512 filed against command-not-found.
  - #1104517 filed against command-not-found.
  - #1104535 filed against cc65.
- Chris Lamb:
  - #1102659 filed against vcsh.
  - #1103797 filed against schism.
  - #1103798 filed against magic-wormhole-mailbox-server.
  - #1103800 filed against openvpn3-client.
- James Addison:
- Jochen Sprickerhof:
  - #1103288 filed against courier.
  - #1103563 filed against cross-toolchain-base.
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
- IRC: #reproducible-builds on irc.oftc.net.
- Mastodon: @reproducible_builds@fosstodon.org
- Mailing list: rb-general@lists.reproducible-builds.org
12 May 2025 7:00pm GMT
Sergio Talens-Oliag: Playing with vCluster
After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I'm more interested in the kluctl approach right now).
While reading an entry on the project blog about Cluster API somehow I ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on demand clusters for debugging or run CI/CD tests before deploying things on common clusters or even to have multiple debugging virtual clusters on a local machine with only one of them running at any given time.
In this post I will deploy a vcluster using the k3d_argocd kubernetes cluster (the one we created on the posts about argocd) as the host and will show how to:
- use its ingress (in our case traefik) to access the API of the virtual one (this removes the need to use the vcluster connect command to access it with kubectl),
- publish the ingress objects deployed on the virtual cluster on the host ingress, and
- use the sealed-secrets of the host cluster to manage the virtual cluster secrets.
Creating the virtual cluster
Installing the vcluster application
To create the virtual clusters we need the vcluster command; we can install it with arkade:
❯ arkade get vcluster
The vcluster.yaml file
To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):
controlPlane:
proxy:
# Extra hostnames to sign the vCluster proxy certificate for
extraSANs:
- my-vcluster-api.lo.mixinet.net
exportKubeConfig:
context: my-vcluster_k3d-argocd
server: https://my-vcluster-api.lo.mixinet.net:8443
secret:
name: my-vcluster-kubeconfig
sync:
toHost:
ingresses:
enabled: true
serviceAccounts:
enabled: true
fromHost:
ingressClasses:
enabled: true
nodes:
enabled: true
clearImageStatus: true
secrets:
enabled: true
mappings:
byName:
# sync all Secrets from the 'my-vcluster-default' namespace to the
# virtual "default" namespace.
"my-vcluster-default/*": "default/*"
# We could add other namespace mappings if needed, i.e.:
# "my-vcluster-kube-system/*": "kube-system/*"
On the controlPlane section we've added the proxy.extraSANs entry to add an extra host name to make sure it is added to the cluster certificates if we use it from an ingress.
The exportKubeConfig section creates a kubeconfig secret on the virtual cluster namespace using the provided host name; the secret can be used by GitOps tools, or we can dump it to a file to connect from our machine.
On the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual to the host cluster:
- We copy the ingress definitions to use the ingress server that runs on the host to make them work from the outside world.
- The service account synchronization is not really needed, but we enable it because if we test this configuration with EKS it would be useful if we use IAM roles for the service accounts.
In the opposite direction (from the host to the virtual cluster) we synchronize:
- The IngressClass objects, to be able to use the host ingress server(s).
- The Nodes (we are not using the info right now, but it could be interesting if we want to have the real information of the nodes running pods of the virtual cluster).
- The Secrets from the my-vcluster-default host namespace to the default one of the virtual cluster; that synchronization allows us to deploy SealedSecrets on the host that generate secrets which are copied automatically to the virtual one. Initially we only copy secrets for one namespace, but if the virtual cluster needs others we can add namespaces on the host and their mappings to the virtual one in the vcluster.yaml file.
Creating the virtual cluster
To create the virtual cluster we run the following command:
vcluster create my-vcluster --namespace my-vcluster --upgrade --connect=false \
--values vcluster.yaml
It creates the virtual cluster on the my-vcluster namespace using the vcluster.yaml file shown before, without connecting to the cluster from our local machine (if we don't pass that option the command adds an entry on our kubeconfig and launches a proxy to connect to the virtual cluster that we don't plan to use).
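Once created we can check from the host that the control plane is up; a sketch using the same CLI:
❯ vcluster list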
Adding an ingress TCP route to connect to the vcluster api
As explained before, we need to create an IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: my-vcluster-api
namespace: my-vcluster
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
services:
- name: my-vcluster
port: 443
tls:
passthrough: true
Once we apply those changes the cluster API will be available on the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self-signed certificate (we have enabled TLS passthrough) that includes the hostname we use (we adjusted it on the vcluster.yaml file, as explained before).
Getting the kubeconfig for the vcluster
Once the vcluster is running we will have its kubeconfig available on the my-vcluster-kubeconfig secret on its namespace on the host cluster.
To dump it to the ~/.kube/my-vcluster-config file we can do the following:
❯ kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
--template="{{.data.config}}" | base64 -d > ~/.kube/my-vcluster-config
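Since the dumped file contains cluster credentials it is worth tightening its permissions:
❯ chmod 600 ~/.kube/my-vcluster-config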
Once available we can define the vkubectl alias to adjust the KUBECONFIG variable to access it:
alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"
Or we can merge the configuration with the one on the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and only has the PATH to a single file the merge can be done running the following:
KUBECONFIG="$KUBECONFIG:~/.kube/my-vcluster-config" kubectl config view \
--flatten >"$KUBECONFIG.new"
mv "$KUBECONFIG.new" "$KUBECONFIG"
For the rest of this post we will use the vkubectl alias when connecting to the virtual cluster, i.e. to check that it works we can run the cluster-info subcommand:
❯ vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Installing the dummyhttpd application
To test the virtual cluster we are going to install the dummyhttp application using the following kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
- name: dummyhttp-configmap
literals:
- CM_VAR="Vcluster Test Value"
behavior: create
options:
disableNameSuffixHash: true
patches:
# Change the ingress host name
- target:
kind: Ingress
name: dummyhttp
patch: |-
- op: replace
path: /spec/rules/0/host
value: vcluster-dummyhttp.lo.mixinet.net
# Add reloader annotations -- it will only work if we install reloader on the
# virtual cluster, as the one on the host cluster doesn't see the vcluster
# deployment objects
- target:
kind: Deployment
name: dummyhttp
patch: |-
- op: add
path: /metadata/annotations
value:
reloader.stakater.com/auto: "true"
reloader.stakater.com/rollout-strategy: "restart"
It is quite similar to the one we used on the Argo CD examples but uses a different DNS entry; to deploy it we run kustomize and vkubectl:
❯ kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created
We can check that everything worked using curl:
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}
The objects available on the vcluster now are:
❯ vkubectl get all,configmap,ingress
NAME READY STATUS RESTARTS AGE
pod/dummyhttp-55569589bc-9zl7t 1/1 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dummyhttp ClusterIP 10.43.51.39 <none> 80/TCP 24s
service/kubernetes ClusterIP 10.43.153.12 <none> 443/TCP 14m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dummyhttp 1/1 1 1 24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/dummyhttp-55569589bc 1 1 1 24s
NAME DATA AGE
configmap/dummyhttp-configmap 1 24s
configmap/kube-root-ca.crt 1 14m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 24s
While we have the following ones on the my-vcluster namespace of the host cluster:
❯ kubectl get all,configmap,ingress -n my-vcluster
NAME READY STATUS RESTARTS AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster 1/1 Running 0 18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster 1/1 Running 0 45s
pod/my-vcluster-0 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dummyhttp-x-default-x-my-vcluster ClusterIP 10.43.51.39 <none> 80/TCP 45s
service/kube-dns-x-kube-system-x-my-vcluster ClusterIP 10.43.91.198 <none> 53/UDP,53/TCP,9153/TCP 18m
service/my-vcluster ClusterIP 10.43.153.12 <none> 443/TCP,10250/TCP 19m
service/my-vcluster-headless ClusterIP None <none> 443/TCP 19m
service/my-vcluster-node-k3d-argocd-agent-1 ClusterIP 10.43.189.188 <none> 10250/TCP 18m
NAME READY AGE
statefulset.apps/my-vcluster 1/1 19m
NAME DATA AGE
configmap/coredns-x-kube-system-x-my-vcluster 2 18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster 1 45s
configmap/kube-root-ca.crt 1 19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster 1 11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster 1 18m
configmap/vc-coredns-my-vcluster 1 19m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 45s
As shown, we have copies of the Service, Pod, Configmap and Ingress objects, but there is no copy of the Deployment or ReplicaSet.
Creating a sealed secret for dummyhttpd
To use the host's sealed secrets controller with the virtual cluster we will create the my-vcluster-default namespace and add there the sealed secrets we want to have available as secrets on the default namespace of the virtual cluster:
❯ kubectl create namespace my-vcluster-default
❯ echo -n "Vcluster Boo" | kubectl create secret generic "dummyhttp-secret" \
--namespace "my-vcluster-default" --dry-run=client \
--from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
After running the previous commands we have the following objects available on the host cluster:
❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME STATUS SYNCED AGE
sealedsecret.bitnami.com/dummyhttp-secret True 34s
NAME TYPE DATA AGE
secret/dummyhttp-secret Opaque 1 34s
And we can see that the secret is also available on the virtual cluster with the content we expected:
❯ vkubectl get secrets
NAME TYPE DATA AGE
dummyhttp-secret Opaque 1 34s
❯ vkubectl get secret/dummyhttp-secret --template="{{.data.SECRET_VAR}}" \
| base64 -d
Vcluster Boo
But the output of the curl command has not changed because, although we have the reloader controller deployed on the host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}
Installing the reloader application
To make reloader work on the virtual cluster we just need to install it as we did on the host, using the following kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
kind: Deployment
name: reloader-reloader
patch: |-
- op: add
path: /spec/template/spec/containers/0/args
value:
- '--reload-on-create=true'
- '--reload-on-delete=true'
- '--reload-strategy=annotations'
We deploy it with kustomize and vkubectl:
❯ kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created
As the controller was not available when the secret was created, the pods linked to the Deployment are not updated, but we can force things by removing the secret on the host system; after we do that the secret is re-created from the sealed version and copied to the virtual cluster, where the reloader controller updates the pod and the curl command shows the new output:
❯ kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
❯ sleep 2
❯ vkubectl get pods
NAME READY STATUS RESTARTS AGE
dummyhttp-78bf5fb885-fmsvs 1/1 Terminating 0 6m33s
dummyhttp-c68684bbf-nx8f9 1/1 Running 0 6s
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}
If we change the secret on the host system things get updated pretty quickly now:
❯ echo -n "New secret" | kubectl create secret generic "dummyhttp-secret" \
--namespace "my-vcluster-default" --dry-run=client \
--from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"New secret"}
Pause and restore the vcluster
The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:
❯ kubectl get pods,statefulsets -n my-vcluster
NAME READY STATUS RESTARTS AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster 1/1 Running 0 127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster 1/1 Running 0 4m39s
pod/my-vcluster-0 1/1 Running 0 128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster 1/1 Running 0 60m
NAME READY AGE
statefulset.apps/my-vcluster 1/1 128m
Pausing the vcluster
If we don't need to use the virtual cluster we can pause it, and after a small amount of time all Pods are gone because the statefulSet is scaled down to 0 (note that other resources like volumes are not removed, but all the objects that have to be scheduled and consume CPU cycles are not running, which can translate into a lot of savings when running on clusters from cloud platforms or, in a local cluster like the one we are using, frees resources like CPU and memory that now can be used for other things):
❯ vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
❯ kubectl get pods,statefulsets -n my-vcluster
NAME READY AGE
statefulset.apps/my-vcluster 0/0 130m
Now the curl command fails:
❯ curl -s https://vcluster-dummyhttp.localhost.mixinet.net:8443
404 page not found
Although the ingress is still available (it returns a 404 because there is no pod behind the service):
❯ kubectl get ingress -n my-vcluster
NAME CLASS HOSTS ADDRESS PORTS AGE
dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 120m
In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related to the TLS certificate, because the 404 page uses the wildcard certificate instead of the self-signed one:
❯ vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
❯ curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
❯ curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1 | grep subject
* subject: CN=lo.mixinet.net
* subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"
Resuming the vcluster
When we want to use the virtual cluster again we just need to use the `resume` command:
❯ vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster
Once all the pods are running the virtual cluster goes back to its previous state, although, of course, all of the pods have been restarted.
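If we want to wait for that programmatically instead of polling by hand, something like the following should work (a hedged sketch; the 120s timeout is an arbitrary choice):
# Block until every pod in the vcluster's default namespace is Ready again
❯ vkubectl wait --for=condition=Ready pods --all --timeout=120s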
Cleaning up
The virtual cluster can be removed using the `delete` command:
❯ vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted
That removes everything we used in this post except the sealed secrets and secrets that we put in the `my-vcluster-default` namespace, because that namespace was created by us. If we delete the namespace, all the secrets and sealed secrets in it are also removed:
❯ kubectl delete namespace my-vcluster-default
namespace "my-vcluster-default" deleted
Conclusions
I believe that the use of virtual clusters can be a good option for two use cases that I've encountered in real projects in the past:
- the need for short-lived clusters for developers or teams,
- the execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual clusters that are created on demand, or paused and resumed when needed).
For both cases things can be set up using the Apache-licensed product, although maybe evaluating the vCluster Platform offering could be interesting.
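As a rough illustration of the second use case, here is a minimal shell sketch of a CI job (the job id variable and the test script are placeholders; it assumes the vcluster CLI is available on the runner):
#!/bin/sh
# Hedged sketch of a CI job: create a throwaway virtual cluster, run the
# integration tests against it, and delete it afterwards.
set -eu
VCLUSTER="ci-${CI_JOB_ID:-local}"            # placeholder job identifier
vcluster create "$VCLUSTER" --connect=false  # create without switching context
# Run the (placeholder) test script with the kubeconfig pointing at the vcluster
vcluster connect "$VCLUSTER" -- ./run-integration-tests.sh
vcluster delete "$VCLUSTER"                  # remove the cluster and its namespace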
In any case, when not everything runs inside Kubernetes we will also have to work out how to manage external services (i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters, we need a way of creating, deleting, or pausing and resuming those services).
12 May 2025 11:00am GMT
Taavi Väänänen: lua entry thread aborted: runtime error: bad request
The Wikimedia Cloud VPS shared web proxy has an interesting architecture: the management API writes an entry for each proxy to a Redis database, and the web server in use (Nginx with Lua support from `ngx_http_lua_module`) looks up the backend server URL from Redis for each request. This is maybe not how I would design this today, but the basic design dates back to 2013 and has served us well ever since.
However, with a recent operating system upgrade to Debian 12 (we run Nginx from the packages in Debian's repositories), we started seeing mysterious errors that looked like this:
2025/04/30 07:24:25 [error] 82656#82656: *5612 lua entry thread aborted: runtime error: /etc/nginx/lua/domainproxy.lua:32: bad request
stack traceback:
coroutine 0:
[C]: in function 'set_keepalive'
/etc/nginx/lua/domainproxy.lua:32: in function 'redis_shutdown'
/etc/nginx/lua/domainproxy.lua:48: in main chunk, client: [redacted], server: *.wmcloud.org, request: "GET [redacted] HTTP/2.0", host: "codesearch.wmcloud.org", referrer: "https://codesearch.wmcloud.org/search/"
The code in question seems straightforward enough:
function redis_shutdown()
-- Use a connection pool of 256 connections with a 32s idle timeout
-- This also closes the current redis connection.
red:set_keepalive(1000 * 32, 256) -- line 32
end
When searching for this error online, you'll end up finding advice like "the `resty.redis` object instance cannot be stored in a Lua variable at the Lua module level". However, our code already stores it as a `local` variable:
local redis = require 'nginx.redis'
local red = redis:new()
red:set_timeout(1000)
red:connect('127.0.0.1', 6379)
Turns out the issue was with the function definition: functions can also be defined as `local`. Without that, something somewhere in some situations seems to reference the variables from other requests, instead of using the Redis connection for the current request. (Don't ask me what changed between Debian 11 and 12 making this only break now.) So we needed to change our function definition to this instead:
local function redis_shutdown()
-- Use a connection pool of 256 connections with a 32s idle timeout
-- This also closes the current redis connection.
red:set_keepalive(1000 * 32, 256)
end
I spent almost an entire workday looking for this, ultimately making a two-line patch to fix the issue. Hopefully by publishing this post I can save that time for everyone else who stumbles upon the same problem after me.
12 May 2025 12:00am GMT
Freexian Collaborators: Debian Contributions: DebConf 25 preparations, PyPA tools updates, Removing libcrypt-dev from build-essential and more! (by Anupa Ann Joseph)
Debian Contributions: 2025-04
Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
DebConf 25 Preparations, by Stefano Rivera and Santiago Ruano Rincón
DebConf 25 preparations continue. In April, the bursary team reviewed and ranked bursary applications. Santiago Ruano Rincón examined the current state of the conference's finances, to see if we could allocate any more money to bursaries. Stefano Rivera supported the bursary team's work with infrastructure and advice, and added some metrics to assist Santiago's budget review. Santiago was also involved in different parts of the organization, including Content team matters (such as reviewing the first proposals and preparing public information about the new Academic Track) and coordinating different aspects of the Day trip activities and the Conference Dinner.
PyPA tools updates, by Stefano Rivera
Around the beginning of the freeze (in retrospect, definitely too late) Stefano looked at updating `setuptools` in the archive to 78.1.0. This brings support for more comprehensive license expressions (PEP-639), which people are expected to adopt soon upstream. While the reverse-autopkgtests all passed, it came with some unexpected complications and turned into a mini-transition. The new `setuptools` broke shebangs for scripts (pypa/setuptools#4952). It also required a bump of `wheel` to 0.46, and `wheel` 0.46 now has a dependency outside the standard library (it de-vendored `packaging`). This meant it was no longer suitable to distribute a standalone `wheel.whl` file to seed into new virtualenvs, as `virtualenv` does by default. The good news here is that `setuptools` doesn't need `wheel` any more: it has included its own implementation of the `bdist_wheel` command since 70.1. But the world hadn't adapted to take advantage of this yet. Stefano scrambled to get all of these issues resolved upstream and in Debian:
- `pip`: Don't check for wheel when invoked with `--no-use-pep517` (pypa/pip#13330); automatically do `--no-use-pep517` builds without wheel (pypa/pip#13358, rejected).
- `virtualenv`: Don't include wheel (pypa/virtualenv#2868) except on Python 3.8 (pypa/virtualenv#2876), as `pip` dropped Python 3.8 support in the same release that included #13330.
- `python3.13`: Update bundled setuptools in test.wheeldata (python/cpython#132415).
- `python-cffi`: No need to install wheel any more (python-cffi/cffi#165).
We're now at the point where `python3-wheel-whl` is no longer needed in Debian unstable, and it should migrate to trixie.
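To illustrate the wheel-less build path mentioned above, a hedged local check (the project path is a placeholder; it assumes a setuptools-based project and a clean virtualenv):
# Build a wheel using setuptools' own bdist_wheel implementation,
# i.e. without the third-party "wheel" package installed.
python3 -m venv /tmp/venv
/tmp/venv/bin/pip install 'setuptools>=70.1'
cd /path/to/some/setuptools-project   # placeholder
/tmp/venv/bin/python setup.py bdist_wheel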
Removing `libcrypt-dev` from `build-essential`, by Helmut Grohne
The `crypt` function was originally part of `glibc`, but it was split out into `libxcrypt`. As a result, `libc6-dev` now depends on `libcrypt-dev`. This poses a dependency cycle during architecture cross bootstrap. As the number of packages actually using `crypt` is relatively small, Helmut proposed removing the dependency. He analyzed an archive rebuild kindly performed by Santiago Vila (not affiliated with Freexian) and estimated the necessary changes. It looks like we may complete this with modifications to fewer than 300 source packages in the `forky` cycle. Half of the bugs have been filed at this time. They are tracked with `libcrypt-*` usertags.
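As a hedged illustration of the kind of per-package check involved, a maintainer could verify whether a built binary actually links against libcrypt (the binary path is a placeholder):
# List dynamic dependencies and look for libcrypt; if nothing shows up,
# the package may not need libcrypt-dev at build time after all.
objdump -p /usr/bin/some-binary | grep NEEDED | grep libcrypt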
Miscellaneous contributions
- Carles uploaded a new version of simplemonitor.
- Carles improved the documentation of salsa-ci-team/pipeline regarding piuparts arguments.
- Carles closed an FTBFS on gcc-15 on qnetload.
- Carles worked on Catalan translations using po-debconf-manager: reviewed 57 translations and created their merge requests in salsa, and created 59 bug reports for packages whose merge requests hadn't been merged in more than 30 days. He also followed up on merge requests and comments in bug reports, and managed some translations manually for packages that are not in Salsa.
- Lucas did some work on the DebConf Content and Bursary teams.
- Lucas fixed multiple CVEs and bugs involving the upgrade from bookworm to trixie in ruby3.3.
- Lucas fixed a CVE in valkey in unstable.
- Stefano updated beautifulsoup4, python-authlib, python-html2text, python-packaging, python-pip, python-soupsieve, and unidecode.
- Stefano packaged python-dependency-groups, a new vendored library in python-pip.
- During an afternoon Bug Squashing Party in Montevideo, Santiago uploaded a couple of packages fixing RC bugs #1057226 and #1102487. The latter was a sponsored upload.
- Thorsten uploaded new upstream versions of brlaser, ptouch-driver and sane-airscan to get the latest upstream bug fixes into Trixie.
- Raphaël filed an upstream bug on zim for a graphical glitch that he has been experiencing.
- Colin Watson upgraded openssh to 10.0p1 (also known as 10.0p2), and debugged various follow-up bugs. This included adding riscv64 support to vmdb2 in passing, and enabling native wtmpdb support so that `wtmpdb last` now reports the correct tty for SSH connections.
- Colin fixed dput-ng's -override option, which had never previously worked.
- Colin fixed a security bug in debmirror.
- Colin did his usual routine work on the Python team: 21 packages upgraded to new upstream versions, 8 CVEs fixed, and about 25 release-critical bugs fixed.
- Helmut filed patches for 21 cross build failures.
- Helmut uploaded a new version of debvm featuring a new tool `debefivm-create` to generate EFI-bootable disk images compatible with other tools such as `libvirt` or `VirtualBox`. Much of the work was prototyped in earlier months. This generalizes `mmdebstrap-autopkgtest-build-qemu`.
- Helmut continued reporting undeclared file conflicts and suggested package removals from `unstable`.
- Helmut proposed build profiles for libftdi1 and gnupg2, to deal with recently added dependencies in the architecture cross bootstrap package set.
- Helmut managed the /usr-move transition. He worked on ensuring that `systemd` would comply with Debian's policy. Dumat continues to locate problems here and there, occasionally yielding discussion. He sent a patch for an upgrade problem in zutils.
- Anupa worked with the Debian publicity team to publish Micronews and Bits posts.
- Anupa worked with the DebConf 25 content team to review talk and event proposals for DebConf 25.
12 May 2025 12:00am GMT
11 May 2025
Planet Debian
Sergio Durigan Junior: Debian Bug Squashing Party Brazil 2025
With the trixie release approaching, I had the idea back in April to organize a bug squashing party with the Debian Brasil community. I believe the outcome was very positive, and we were able to tackle and fix quite a number of release-critical bugs. This is a brief report of what we did.
A remote BSP
It's not the first time I've organized a BSP: back in 2019, I helped throw a similar party in Toronto. The difference this time is that, because Brazil is a big country and (perhaps most importantly) because I'm not currently living there, the BSP had to be done online.
I'm a fan of social interactions (especially with the Brazilian community), and in my experience we usually can achieve much more when we get together in a physical place, but hey, you gotta do what you gotta do…
Most (if not all) of the folks interested in participating had busy weekdays, so it was decided that we would meet during the weekends and try to work on a few bugs over Jitsi. Nothing stopped people from working on bugs during the week as well, of course.
A tag to rule them all
We used the `bsp-2025-04-brazil` usertag to mark the bugs we touched. You can see the full list of bugs here, although the current list (as of 2025-05-11) is smaller than the one we had by the end of April. I don't know what happened; maybe it's some glitch with the BTS, or maybe someone removed the usertag by mistake.
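For reference, usertagged bugs can also be queried from the command line with devscripts' `bts` tool; a hedged sketch (the user address is a placeholder, since the post doesn't say which address the usertag was filed under):
# List bugs carrying the BSP usertag for a given usertag owner
bts select users:debian-brasil@example.org tag:bsp-2025-04-brazil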
Stats
In total, we had:
- 7 participants
- 37 bugs handled, of which 35 were fixed
The BSP officially started on 04 April 2025, and ended on 30 April 2025. I was able to attend meetings during two weekends; other people participated more sporadically.
Outcome
As I said above, the Debian Brasil community is great and very engaged in the project. Speaking more specifically about the Debian Brasil Devel group, I can say that we have contributors with strong technical skills, and I really love that we have this inclusive, extremely technical culture where debugging and understanding things is core to pretty much all our discussions.
We already meet weekly on Thursdays to talk shop and help newcomers, so having a remote BSP with this group seemed like a logical thing to do. I'm really glad to see our results and even happier to hear positive feedback from the community during the last MiniDebConf in Maceió.
There's some interest in organizing another BSP, this time face-to-face and during the next DebConf. I'm all for it, as I love fixing bugs and having a great time with friends. If you're interested in attending, let me know.
Thanks, and until next time.
11 May 2025 10:00pm GMT
Bits from Debian: Bits from the DPL
Dear Debian community,
This is bits from the DPL for April.
End of 10
I am sure I was speaking in the interest of the whole project when joining the "End of 10" campaign. Here is what I wrote to the initiators:
Hi Joseph and all drivers of the "End of 10" campaign,
On behalf of the entire Debian project, I would like to say that we proudly join your great campaign. We stand with you in promoting Free Software, defending users' freedoms, and protecting our planet by avoiding unnecessary hardware waste. Thank you for leading this important initiative.
Andreas Tille, Debian Project Leader
I have some goals I would like to share with you for my second term.
Ftpmaster delegation
This splits up into tasks that can be done before and after Trixie release.
Before Trixie:
1. Reducing Barriers to DFSG Compliance Checks
Back in 2002, Debian established a way to distribute cryptographic software in the main archive, whereas such software had previously been restricted to the non-US archive. One result of this arrangement which influences our workflow is that all packages uploaded to the NEW queue must remain on the server that hosts the queue. This requirement means that members of the ftpmaster team must log in to that specific machine, where they are limited to a restricted set of tools for reviewing uploaded code.
This setup may act as a barrier to participation--particularly for contributors who might otherwise assist with reviewing packages for DFSG compliance. I believe it is time to reassess this limitation and work toward removing such hurdles.
In October last year, we had some initial contact with SPI's legal counsel, who noted that US regulations around cryptography have been relaxed somewhat in recent years (as of 2021). This suggests it may now be possible to revisit and potentially revise the conditions under which we manage cryptographic software in the NEW queue.
I plan to investigate this further. If you have expertise in software or export control law and are interested in helping with this topic, please get in touch with me.
The ultimate goal is to make it easier for more people to contribute to ensuring that code in the NEW queue complies with the DFSG.
2. Discussing Alternatives
My chances to reach out to other distributions remained limited. However, regarding the processing of new software, I learned that OpenSUSE uses a Git-based workflow that requires five "LGTM" approvals from a group of trusted developers. As far as I know, Fedora follows a similar approach.
Inspired by this, a recent community initiative--the Gateway to NEW project--enables peer review of new packages for DFSG compliance before they enter the NEW queue. This effort allows anyone to contribute by reviewing packages and flagging potential issues in advance via Git. I particularly appreciate that the DFSG review is coupled with CI, allowing for both license and technical evaluation.
While this process currently results in some duplication of work--since final reviews are still performed by the ftpmaster team--it offers a valuable opportunity to catch issues early and improve the overall quality of uploads. If the community sees long-term value in this approach, it could serve as a basis for evolving our workflows. Integrating it more closely into DAK could streamline the process, and we've recently seen that merge requests reflecting community suggestions can be accepted promptly.
For now, I would like to gather opinions about how such initiatives could best complement the current NEW processing, and whether greater consensus on trusted peer review could help reduce the burden on the team doing DFSG compliance checks. Submitting packages for review and automated testing before uploading can improve quality and encourage broader participation in safeguarding Debian's Free Software principles.
My explicit thanks go out to the Gateway to NEW team for their valuable and forward-looking contribution to Debian.
3. Documenting Critical Workflows
Past ftpmaster trainees have told me that understanding the full set of ftpmaster workflows can be quite difficult. While there is some useful documentation--thanks in particular to Sean Whitton for his work on documenting NEW processing rules--many other important tasks carried out by the ftpmaster team remain undocumented or only partially so.
Comprehensive and accessible documentation would greatly benefit current and future team members, especially those onboarding or assisting in specific workflows. It would also help ensure continuity and transparency in how critical parts of the archive are managed.
If such documentation already exists and I have simply overlooked it, I would be happy to be corrected. Otherwise, I believe this is an area where we need to improve significantly. Volunteers with a talent for writing technical documentation are warmly invited to contact me--I'd be happy to help establish connections with ftpmaster team members who are willing to share their knowledge so that it can be written down and preserved.
Once Trixie is released (hopefully before DebConf):
4. Split of the Ftpmaster Team into DFSG and Archive Teams
As discussed during the "Meet the ftpteam" BoF at DebConf24, I would like to propose a structural refinement of the current Ftpmaster team by introducing two different delegated teams:
- DFSG Team
- Archive Team (responsible for DAK maintenance and process tooling, including releases)
(Alternative name suggestions are, of course, welcome.) The primary task of the DFSG team would be the processing of the NEW queue and ensuring that packages comply with the DFSG. The Archive team would focus on maintaining DAK and handling the technical aspects of archive management.
I am aware that, in the recent past, the ftpmaster team has decided not to actively seek new members. While I respect the autonomy of each team, the resulting lack of a recruitment pipeline has led to some friction and concern within the wider community, including myself. As Debian Project Leader, it is my responsibility to ensure the long-term sustainability and resilience of our project, which includes fostering an environment where new contributors can join and existing teams remain effective and well-supported. Therefore, even if the current team does not prioritize recruitment, I will actively seek and encourage new contributors for both teams, with the aim of supporting openness and collaboration.
This proposal is not intended as criticism of the current team's dedication or achievements--on the contrary, I am grateful for the hard work and commitment shown, often under challenging circumstances. My intention is to help address the structural issues that have made onboarding and specialization difficult and to ensure that both teams are well-supported for the future.
I also believe that both teams should regularly inform the Debian community about the policies and procedures they apply. I welcome any suggestions for a more detailed description of the tasks involved, as well as feedback on how best to implement this change in a way that supports collaboration and transparency.
My intention with this proposal is to foster a more open and effective working environment, and I am committed to working with all involved to ensure that any changes are made collaboratively and with respect for the important work already being done.
I'm aware that the ideas outlined above touch on core parts of how Debian operates and involve responsibilities across multiple teams. These are not small changes, and implementing them will require thoughtful discussion and collaboration.
To move this forward, I've registered a dedicated BoF for DebConf. To make the most of that opportunity, I'm looking for volunteers who feel committed to improving our workflows and processes. With your help, we can prepare concrete and sensible proposals in advance--so the limited time of the BoF can be used effectively for decision-making and consensus-building.
In short: I need your help to bring these changes to life. From my experience in my last term, I know that when it truly matters, the Debian community comes together--and I trust that spirit will guide us again.
Please also note: we had a "Call for volunteers" five years ago, and much of what was written there still holds true today. I've been told that the response back then was overwhelming--but that training such a large number of volunteers didn't scale well. This time, I hope we can find a more sustainable approach: training a few dedicated people first, and then enabling them to pass on their knowledge. This will also be a topic at the DebCamp sprint.
Dealing with Dormant Packages
Debian was founded on the principle that each piece of software should be maintained by someone with expertise in it--typically a single, responsible maintainer. This model formed the historical foundation of Debian's packaging system and helped establish high standards of quality and accountability. However, as the project has grown and the number of packages has expanded, this model no longer scales well in all areas. Team maintenance has since emerged as a practical complement, allowing multiple contributors to share responsibility and reduce bottlenecks--depending on each team's internal policy.
While working on the Bug of the Day initiative, I observed a significant number of packages that have not been updated in a long time. In the case of team-maintained packages, addressing this is often straightforward: team uploads can be made, or the team can be asked whether the package should be removed. We've also identified many packages that would fit well under the umbrella of active teams, such as language teams like Debian Perl and Debian Python, or blends like Debian Games and Debian Multimedia. Often, no one has taken action--not because of disagreement, but simply due to inattention or a lack of initiative.
In addition, we've found several packages that probably should be removed entirely. In those cases, we've filed bugs with pre-removal warnings, which can later be escalated to removal requests.
When a package is still formally maintained by an individual, but shows signs of neglect (e.g., no uploads for years, unfixed RC bugs, failing autopkgtests), we currently have three main tools:
- The MIA process, which handles inactive or unreachable maintainers.
- Package Salvaging, which allows contributors to take over maintenance if conditions are met.
- Non-Maintainer Uploads (NMUs), which are limited to specific, well-defined fixes (which do not include things like migration to Salsa).
These mechanisms are important and valuable, but they don't always allow us to react swiftly or comprehensively enough. Our tools for identifying packages that are effectively unmaintained are relatively weak, and the thresholds for taking action are often high.
The Package Salvage team is currently trialing a process we've provisionally called "Intend to NMU" (ITN). The name is admittedly questionable--some have suggested alternatives like "Intent to Orphan"--and discussion about this is ongoing on debian-devel. The mechanism is intended for situations where packages appear inactive but aren't yet formally orphaned, introducing a clear 21-day notice period before NMUs, similar in spirit to the existing ITS process. The discussion has sparked suggestions for expanding NMU rules.
While it is crucial not to undermine the autonomy of maintainers who remain actively involved, we also must not allow a strict interpretation of this autonomy to block needed improvements to obviously neglected packages.
To be clear: I do not propose to change the rights of maintainers who are clearly active and invested in their packages. That model has served us well. However, we must also be honest that, in some cases, maintainers stop contributing--quietly and without transition plans. In those situations, we need more agile and scalable procedures to uphold Debian's high standards.
To that end, I've registered a BoF session for DebConf25 to discuss potential improvements in how we handle dormant packages. These discussions will be prepared during a sprint at DebCamp, where I hope to work with others on concrete ideas.
Among the topics I want to revisit is my proposal from last November on debian-devel, titled "Barriers between packages and other people". While the thread prompted substantial discussion, it understandably didn't lead to consensus. I intend to ensure the various viewpoints are fairly summarised--ideally by someone with a more neutral stance than myself--and, if possible, work toward a formal proposal during the DebCamp sprint to present at the DebConf BoF.
My hope is that we can agree on mechanisms that allow us to act more effectively in situations where formerly very active volunteers have, for whatever reason, moved on. That way, we can protect both Debian's quality and its collaborative spirit.
Building Sustainable Funding for Debian
Debian incurs ongoing expenses to support its infrastructure--particularly hardware maintenance and upgrades--as well as to fund in-person meetings like sprints and mini-DebConfs. These investments are essential to our continued success: they enable productive collaboration and ensure the robustness of the operating system we provide to users and derivative distributions around the world.
While DebConf benefits from generous sponsorship, and we regularly receive donated hardware, there is still considerable room to grow our financial base--especially to support less visible but equally critical activities. One key goal is to establish a more constant and predictable stream of income, helping Debian plan ahead and respond more flexibly to emerging needs.
This presents an excellent opportunity for contributors who may not be involved in packaging or technical development. Many of us in Debian are engineers first--and fundraising is not something we've been trained to do. But just like technical work, building sustainable funding requires expertise and long-term engagement.
If you're someone who's passionate about Free Software and has experience with fundraising, donor outreach, sponsorship acquisition, or nonprofit development strategy, we would deeply value your help. Supporting Debian doesn't have to mean writing code. Helping us build a steady and reliable financial foundation is just as important--and could make a lasting impact.
Kind regards,
Andreas
PS: In April I also planted my 5000th tree and while this is off-topic here I'm proud to share this information with my fellow Debian friends.
11 May 2025 10:00pm GMT