10 Apr 2026
Planet Debian
Reproducible Builds: Reproducible Builds in March 2026
Welcome to the March 2026 report from the Reproducible Builds project!
These reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
- Linux kernel hash-based integrity checking proposed
- Distribution work
- Tool development
- Upstream patches
- Documentation updates
- Two new academic papers
- Misc news
Linux kernel hash-based integrity checking proposed
Eric Biggers posted to the Linux Kernel Mailing List in response to a patch series posted by Thomas Weißschuh to introduce a calculated hash-based system of integrity checking to complement the existing signature-based approach. Thomas' original post mentions:
The current signature-based module integrity checking has some drawbacks in combination with reproducible builds. Either the module signing key is generated at build time, which makes the build unreproducible, or a static signing key is used, which precludes rebuilds by third parties and makes the whole build and packaging process much more complicated.
However, Eric's followup message goes further:
I think this actually undersells the feature. It's also much simpler than the signature-based module authentication. The latter relies on PKCS#7, X.509, ASN.1, the OID registry, the crypto_sig API, etc., in addition to the implementations of the actual signature algorithm (RSA / ECDSA / ML-DSA) and at least one hash algorithm.
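Eric's point about relative complexity is easy to illustrate. The following is a minimal sketch, in Python, of the general idea behind hash-based integrity checking (our own illustration, not the kernel patch series): record a hash per module at build time, then recompute and compare at load time, with no certificate or signature machinery involved.

```python
import hashlib

# Illustrative sketch only -- not the kernel's implementation. The idea:
# at build time, record the hash of every module; at load time, recompute
# and compare. No PKCS#7, X.509 or ASN.1 is involved.

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_allowlist(paths: list[str]) -> dict[str, str]:
    """'Build time': compute the expected hash of each module."""
    return {p: sha256_of(p) for p in paths}

def verify(path: str, allowlist: dict[str, str]) -> bool:
    """'Load time': accept the module only if its hash matches."""
    return allowlist.get(path) == sha256_of(path)
```

Because the allowlist is derived deterministically from the built artifacts, it is itself reproducible, unlike a signature produced with a build-time-generated key.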
Distribution work
In Debian this month,
- Lucas Nussbaum announced Debaudit, a "new service to verify the reproducibility of Debian source packages":

  debaudit complements the work of the Reproducible Builds project. While reproduce.debian.net focuses on ensuring that binary packages can be bit-for-bit reproduced from their source packages, debaudit focuses on the preceding step: ensuring that the source package itself is a faithful and reproducible representation of its upstream source or Vcs-Git repository.

- kpcyrd filed a bug against the librust-const-random-dev package, reporting that the compile-time-rng feature of the ahash crate uses the const-random crate, which in turn uses a macro to read/generate a random number during the build. This issue was also filed upstream.

- 60 reviews of Debian packages were added, 4 were updated and 16 were removed this month, adding to our knowledge about identified issues. One new issue type was added: pkgjs_lock_json_file_issue.
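The const-random issue reported by kpcyrd above is a classic reproducibility anti-pattern: generating a fresh random value at build time and baking it into the artifact. A hypothetical Python analogue (the function and the AHASH_SEED constant are ours, purely illustrative):

```python
import secrets

# Hypothetical analogue of what the const-random crate does at compile
# time: embed a freshly generated random constant into the built output.
# The name AHASH_SEED is illustrative only.

def generate_module_source() -> str:
    """Emit 'source code' that embeds a build-time random constant."""
    seed = secrets.randbits(64)
    return f"AHASH_SEED = {seed}\n"

# Two runs of the same "build" produce different bytes, so the artifact
# cannot be reproduced bit-for-bit by a third party.
```

Running the "build" twice yields two different outputs, which is exactly why such macros defeat bit-for-bit reproduction.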
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Tool development
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 314 and 315 to Debian.
- Chris Lamb
- Jelle van der Waa
- Michael R. Crusoe
In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 315.
rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there; it powers, amongst other things, reproduce.debian.net.
A new version, 0.26.0, was released this month, with the following improvements:
- Much smoother onboarding/installation.
- Complete database redesign with many improvements.
- New REST HTTP API.
- It's now possible to artificially delay the first reproduce attempt. This gives archive infrastructure more time to catch up.
- And many, many other changes.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
- Bernhard M. Wiedemann:
  - minify (rust random HashMap) / (alternative by kpcyrd)
  - rpm-config-SUSE (toolchain)
- Chris Lamb:
  - #1129544 filed against python-nxtomomill.
  - #1130622 filed against dh-fortran.
  - #1130623 filed against python-discovery.
  - #1130666 filed against kanboard.
  - #1131168 filed against moltemplate.
  - #1131384 filed against stacer.
  - #1131385 filed against libcupsfilters.
  - #1131395 filed against django-ninja.
  - #1131403 filed against python-agate.
  - #1132074 filed against aetos.
  - #1132508 filed against python-bayespy.
- kpcyrd
Documentation updates
Once again, there were a number of improvements made to our website this month including:
- kpcyrd
- Robin Candau
- Timo Pohl:
  - Add the new From Constrictor to Serpent: Investigating the Threat of Cache Poisoning in the Python Ecosystem paper to the Academic publications page. […]
  - Add GitLab registration confirmation to the How to join the Salsa group page. […]
Two new academic papers
Marc Ohm, Timo Pohl, Ben Swierzy and Michael Meier published a paper on the threat of cache poisoning in the Python ecosystem:
Attacks on software supply chains are on the rise, and attackers are becoming increasingly creative in how they inject malicious code into software components. This paper is the first to investigate Python cache poisoning, which manipulates bytecode cache files to execute malicious code without altering the human-readable source code. We demonstrate a proof of concept, showing that an attacker can inject malicious bytecode into a cache file without failing the Python interpreter's integrity checks. In a large-scale analysis of the Python Package Index, we find that about 12,500 packages are distributed with cache files. Through manual investigation of cache files that cannot be reproduced automatically from the corresponding source files, we identify classes of reasons for irreproducibility to locate malicious cache files. While we did not identify any malware leveraging this attack vector, we demonstrate that several widespread package managers are vulnerable to such attacks.
A PDF of the paper is available online.
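The paper's detection approach, recompiling a package's source and checking whether the shipped bytecode cache matches, can be sketched in a few lines of Python. This is our own illustration, not the authors' tooling; a real comparison must also match the interpreter version whose magic number is embedded in the .pyc header:

```python
import os
import py_compile
import tempfile

# A minimal sketch (our naming, not the paper's tooling) of checking
# whether a shipped .pyc can be reproduced from its source file.
def pyc_matches_source(source_path: str, shipped_pyc: str) -> bool:
    """Recompile the source deterministically and compare byte-for-byte."""
    with tempfile.TemporaryDirectory() as tmp:
        rebuilt = os.path.join(tmp, "rebuilt.pyc")
        # PEP 552 hash-based invalidation avoids embedding the source
        # file's timestamp, which would otherwise break reproducibility.
        py_compile.compile(
            source_path,
            cfile=rebuilt,
            invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
        )
        with open(rebuilt, "rb") as f:
            rebuilt_bytes = f.read()
    with open(shipped_pyc, "rb") as f:
        shipped_bytes = f.read()
    # A mismatch flags the cache file for manual investigation.
    return rebuilt_bytes == shipped_bytes
```

Using PEP 552 hash-based invalidation makes the rebuilt .pyc independent of the source file's modification time, which is exactly the kind of nondeterminism that otherwise causes false mismatches.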
Mario Lins of the University of Linz, Austria, has published their doctoral thesis on the topic of software supply chain transparency:
We begin by examining threats to the software distribution stage - the point at which artifacts (e.g., mobile apps) are delivered to end users - with an emphasis on mobile ecosystems [and] we next focus on the operating system on mobile devices, with an emphasis on mitigating bootloader-targeted attacks. We demonstrate how to compensate lost security guarantees on devices with an unlocked bootloader. This allows users to flash custom operating systems on devices that no longer receive security updates from the original manufacturer without compromising security. We then move to the source code stage. [Also,] we introduce a new architecture to ensure strong source-to-binary correspondence by leveraging the security guarantees of Confidential Computing technology. Finally, we present The Supply Chain Game, an organizational security approach that enhances standard risk-management methods. We demonstrate how game-theoretic techniques, combined with common risk management practices, can derive new criteria to better support decision makers.
A PDF of the thesis is available online.
Misc news
On our mailing list this month:
- Holger Levsen announced that this year's Reproducible Builds summit will almost certainly be held in Gothenburg, Sweden, from September 22 until 24, followed by two days of hacking. However, these dates are preliminary and not 100% final; an official announcement is forthcoming.

- Mark Wielaard posted to our list asking a question on the difference between debugedit and relative debug paths, based on a comment on the Build path page: "Have people tried more modern versions of debugedit to get deterministic (absolute) DWARF paths and found issues with it?"
Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:
- IRC: #reproducible-builds on irc.oftc.net
- Mastodon: @reproducible_builds@fosstodon.org
- Mailing list: rb-general@lists.reproducible-builds.org
10 Apr 2026 4:13pm GMT
Jamie McClelland: AI Hacking the Planet
A colleague asked me if we should move all our money to our pillow cases after reading the latest AI editorial from Thomas Friedman. The article reads like a press release from Anthropic, repeating the claim that their latest AI model is so good at finding software vulnerabilities that it is a danger to the world.
I think I now know what it's like to be a doctor who is forced to watch Gray's Anatomy.
By now every journalist should be able to recognize the AI publicity playbook:
Step 1: Start with a wildly unsubstantiated claim about how dangerous your product is:
AI will cause human extinction before we have a chance to colonize Mars (remember that one? Even Kim Stanley Robinson, author of perhaps the most compelling science fiction on colonizing Mars, calls bullshit on it).
AI will eliminate all of our jobs (this one was extremely effective at providing cover for software companies laying off staff, but it has quickly dawned on people that the companies that did this are living in chaos, not humming along happily with functional robots).
AI will discover massive software vulnerabilities allowing bad actors to "hack pretty much every major software system in the world". (Did Friedman pull that directly from Anthropic's press release or was that his contribution?)
Step 2: To help stave off human collapse, only release the new version to a vetted group of software companies and developers, preferably ones with big social media followings
Step 3: Wait for the limited-release developers to spew unbridled enthusiasm and shocking examples that seem to suggest this new AI product is truly unbelievable
Step 4: Watch stock prices and valuations soar
Step 5: Release to the world, and experience a steady stream of mockery as people discover how wrong you are
Step 6: Start over
Even if Friedman missed the textbook example of the playbook, I have to ask: if you think bad actors compromising software, resulting in massive loss of private data, major outages and wasted resources, needs to be reported on, then where have you been for the last 10 years? This literally happens on a daily basis due to the fundamentally flawed way capitalism has been writing software, even before the invention of AI. A small part of me wonders: maybe AI writing software is not so bad, because how could it be any worse than it is now?
Also, let's keep in mind that AI's super ability at finding vulnerable software depends on having access to the software's source code, which most companies keep locked up tight. That means the owners of the software can use AI to find vulnerabilities and fix them but bad actors can't.
Oh, but wait, what if a company is so incompetent that they accidentally release their proprietary software to the Internet?
Surely that would allow AI bots to discover their vulnerabilities and destroy the company, right? I'm not sure if anyone has discovered world-ending vulnerabilities in Anthropic's Claude Code since it was accidentally released, but it is fun to watch people mock software that is clearly written by AI (and, spoiler alert, it seems way worse than software written now).
Well… we probably should all be keeping our money in a pillow case anyway.
10 Apr 2026 12:27pm GMT
Reproducible Builds (diffoscope): diffoscope 317 released
The diffoscope maintainers are pleased to announce the release of diffoscope version 317. This version includes the following changes:
[ Chris Lamb ]
* Limit python3-guestfs Build-Dependency to !i386. (Closes: #1132974)
* Try to fix PYPI_ID_TOKEN debugging.
[ Holger Levsen ]
* Add ppc64el to the list of architectures for python3-guestfs.
You can find out more by visiting the project homepage.
10 Apr 2026 12:00am GMT