11 Jan 2026

feedPlanet Debian

Patryk Cisek: Choosing Secrets Manager for Homelab

Secrets Manager for Homelab

For a few years, I've been managing the configuration of a bunch of self-hosted services using Ansible Playbooks. Each playbook needed at least one secret - the sudo password. Many of them needed to manage more (e.g. SMTP credentials for email notifications). Because I've always been paranoid about security, I stored most of those secrets in Ansible Vault, the password for which is stored in only one location - my memory. Therefore, each time I ran any of those playbooks, I'd have to enter two passwords interactively: the sudo password and the Ansible Vault password.
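
To illustrate what that looked like in practice, here is a minimal sketch of such an interactive run (the playbook name is hypothetical; the flags are standard ansible-playbook options):

$ # prompts for both the sudo (become) password and the Ansible Vault password
$ ansible-playbook site.yml --ask-become-pass --ask-vault-pass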

11 Jan 2026 11:57pm GMT

Dirk Eddelbuettel: RApiDatetime 0.0.10 on CRAN: Maintenance

A new maintenance release of our RApiDatetime package is now on CRAN, coming just about two years after the previous maintenance release.

RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representations, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime representations. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages, which this package aims to change.

This release avoids the use of two header includes that are now outlawed under R-devel, and makes a number of other smaller maintenance updates. Just like the previous release, we are at OS_type: unix, meaning there will not be any Windows builds at CRAN. If you would like that to change, and ideally can work on the Windows portion, do not hesitate to get in touch.

Details of the release follow based on the NEWS file.

Changes in RApiDatetime version 0.0.10 (2026-01-11)

  • Minor maintenance for continuous integration files, README.md

  • Switch to Authors@R in DESCRIPTION

  • Use Rf_setAttrib with R 4.5.0 or later

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 Jan 2026 10:57pm GMT

Dirk Eddelbuettel: RProtoBuf 0.4.25 on CRAN: Mostly Maintenance

A new maintenance release 0.4.25 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release brings an update to a header use forced by R-devel, the usual set of continuous integration updates, and a large overhaul of URLs as CRAN is now running more powerful checks. As a benefit, the three vignettes have all been refreshed. They are now also delivered via the new Rcpp::asis() vignette builder that permits pre-made pdf files to be used easily.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.25 (2026-01-11)

  • Several routine updates to continuous integration script

  • Include ObjectTable.h instead of Callback.h to accommodate R 4.6.0

  • Switch vignettes to Rcpp::asis driver, update references

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the 'quick' overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 Jan 2026 8:36pm GMT

Russell Coker: Terminal Emulator Security

I just read this informative article on ANSI terminal security [1]. The author has written a tool named vt-houdini for testing for these issues [2]. They used to host an instance on their server but appear to have stopped it. When an instance of that tool is running, you can ssh to the system in question without needing a password; once connected, the server probes your terminal emulator for vulnerabilities. The versions of Kitty and Konsole in Debian/Trixie have just passed those tests on my system.

This will always be a potential security problem due to the purpose of a terminal emulator. A terminal emulator will often display untrusted data, and often data which is known to come from hostile sources (e.g. logs of attempted attacks). So what could be done in this regard?

Memory Protection

Due to the complexity of terminal emulation there is the possibility of buffer overflows and other memory management issues that could be used to compromise the emulator.

The Fil-C compiler is an interesting project [3]: it compiles existing C/C++ code with memory checks. It is reported to have no noticeable impact on the performance of the bash shell, which sounds like a useful option for addressing some of these issues, as shell security issues are connected to terminal security issues. The performance impact on a terminal emulator would likely be more noticeable. Also note that Fil-C compilation apparently requires compiling all libraries with it; this isn't a problem for bash as the only libraries it uses nowadays are libtinfo and libc. The kitty terminal emulator doesn't have many libraries, but libpython is one of them; it's an essential part of Kitty and it is a complex library to compile in a different way. Konsole has about 160 libraries and it isn't plausible to recompile so many libraries at this time.

Choosing a terminal emulator that has a simpler design might help in this regard. Emulators that call libraries for 3D effects and have native support for displaying in-line graphics have a much greater attack surface.

Access Control

A terminal emulator could be run in a container to prevent it from doing any damage if it is compromised. But the terminal emulator will have full control over the shell it runs, and if that shell has the access needed to allow commands like scp/rsync to do what is expected of them, then no useful level of containment is possible.

It would be possible to run a terminal emulator in a container for the purpose of connecting to an insecure or hostile system, and not allow scp/rsync to/from any directory other than /tmp (or other directories used for sharing files). You could run "exec ssh $SERVER" so the terminal emulator session ends when the ssh connection ends.
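
As a rough sketch of that idea, here is a simpler variant that confines just the ssh session rather than the whole emulator (the image name and shared path are assumptions, not a vetted setup):

$ mkdir -p /tmp/shared
$ # throwaway Debian container; only /tmp/shared is visible to the session
$ podman run --rm -it -v /tmp/shared:/tmp/shared docker.io/library/debian:trixie \
    bash -c "apt-get update && apt-get install -y openssh-client && exec ssh $SERVER"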

Conclusion

There aren't good solutions to the problems of terminal emulation security. But testing every terminal emulator with vt-houdini and fuzzing the popular ones would be a good start.

Qubes-level isolation will help in some situations, but if you need to connect to a server with privileged access to read log files containing potentially hostile data (which is a common sysadmin use case) then there aren't good options.

11 Jan 2026 3:46am GMT

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

Featured image of post Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software, and still use MySQL in 2026, you should switch to MariaDB like so many others have already done.

The number of git commits on github.com/mysql/mysql-server has been significantly declining in 2025. The screenshot below shows the state of git commits as of writing this in January 2026, and the picture should be alarming to anyone who cares about software being open source.

MySQL GitHub commit activity decreasing drastically

This is not surprising - Oracle should not be trusted as the steward for open source projects

When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was just to stifle competition. The deal went through as Oracle made a commitment to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years now. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development. The few people who try to contribute to MySQL just see their Pull Requests and patch submissions marked as received with mostly no feedback; those changes may or may not appear in the next MySQL release, often rewritten, with only Oracle staff in the git author/committer fields, while the real author only gets a small mention in a blog post. When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL due to how badly Oracle received their contributions.

MariaDB is the stark opposite, with all development taking place in real-time on github.com/mariadb/server, anyone being able to submit a Pull Request and get a review, all bugs being openly discussed at jira.mariadb.org, and so forth, just like one would expect from a true open source project. MySQL is open source only by license (GPL v2), but not as a project.

MySQL's technical decline in recent years

Despite not being a good open source steward, Oracle should be given credit that it kept the MySQL organization alive and allowed it to exist fairly independently, continuing to develop and release new MySQL versions for well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as MySQL didn't gain too many features that would threaten Oracle's own main database business.

I don't know why, perhaps because too many talented people had left the organization, but it seems that from a technical point of view MySQL clearly started to deteriorate from 2022 onward.

When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing database crashes and data corruption for many users. The issue wasn't fully fixed until a year later in MySQL 8.0.32. To many users' annoyance, Oracle announced the 8.0 series as "evergreen" and introduced features and changes in the minor releases, instead of just doing bugfixes and security fixes like users had historically learnt to expect from these x.y.Z maintenance releases.

There was no new major MySQL version for six years. After MySQL 8.0 in 2018, it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, came in 2024. Even though it was a new major release, many users were disappointed as it had barely any new features.

Many also reported degraded performance with newer MySQL versions; for example, the benchmark below by famous MySQL performance expert Mark Callaghan shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% lower than in 8.0.

Benchmark showing new MySQL versions being slower than the old

Due to newer MySQL versions deprecating many features, a lot of users also complained about significant struggles regarding both MySQL 5.7->8.0 and 8.0->8.4 upgrades. With few new features and heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to just keep MySQL barely alive, and put all new relevant features (e.g. vector search) into Heatwave, Oracle's closed-source and cloud-only service for MySQL customers.

As it was evident that Oracle isn't investing in MySQL, Percona's Peter Zaitsev wrote Is Oracle Finally Killing MySQL in June 2024. At this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that will likely accelerate in 2026.

MySQL dropping significantly in DB-Engines ranking

In September 2025, news reports indicated that Oracle was reducing its workforce and that the MySQL staff was being heavily cut. Obviously this does not bode well for MySQL's future, and in November Peter Zaitsev posted stats showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty

Some say they don't care if MySQL is truly open source or not, or that they don't care if it has a future in coming years, as long as it still works now. I am afraid people thinking so are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, will have immediate consequences, and "not caring" will eventually get people fired or sued.

In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested, where only the ones with the most compelling evidence win. Not being open means more obscurity, more risk and more "just trust us bro" attitude.

This open vs. closed difference is very visible, for example, in how Oracle handles security issues. In 2025 alone, MySQL published 123 CVEs about security issues, while MariaDB had 8; 117 of those CVEs affected only MySQL and not MariaDB. I haven't read them all, but typically the CVEs hardly contain any real details. As an example, the most recent one, CVE-2025-53067, states "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify whether the original issue actually existed, whether it was fixed, or whether the fix was sufficient and fully mitigated the issue. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this is in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE is made public.

There are also various forms of enshittification going on that one would not see in a true open source project: everything about MySQL's software, documentation and website is pushing users to stop using the open source version and move to the closed MySQL versions, in particular to Heatwave, which is not only closed-source but also results in Oracle fully controlling customers' database contents.

Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle milking the last remaining MySQL customers hard, forcing them to pay more and more while getting less and less.

There are options and migrating is easy, just do it

A large share of MySQL users switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows what the exact market shares look like. There are however some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%.

For anyone running a classic LAMP stack application such as WordPress, Drupal, Mediawiki, Nextcloud, or Magento, switching the old MySQL database to MariaDB is straightforward. As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients, as they will continue to work with MariaDB as if it was MySQL. A minimal sketch of such an in-place swap is shown below.
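
This is only a sketch for a Debian-style system; take and verify a backup first, and check the MariaDB migration notes for your specific versions:

$ mysqldump --all-databases > pre-migration-backup.sql   # safety net
$ sudo systemctl stop mysql
$ sudo apt install mariadb-server    # replaces the MySQL server packages
$ sudo mariadb-upgrade               # adjusts the system tables in place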

For those running custom applications and who have the freedom to make changes to how and what database is used, there are tens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general database. If your application was built from the start for MySQL, switching to PostgreSQL may however require a lot of work, and the MySQL/MariaDB architecture and storage engine InnoDB may still offer an edge in e.g. online services where high performance, scalability and solid replication features are of highest priority. For a quick and easy migration MariaDB is probably the best option.

Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements made by Percona. However, precisely because it is basically just a customized version of the MySQL server, it's not a viable long-term solution for those trying to fully ditch the dependency on Oracle.

There are also several open source databases that have no common ancestry with MySQL, but strive to be MySQL-compatible. Thus most apps built for MySQL can simply switch to using them without needing SQL statements to be rewritten. One such database is TiDB, which has been designed from scratch specifically for highly scalable and large systems, and is so good that even Amazon's latest database solution DSQL was built borrowing many ideas from TiDB. However, TiDB only really shines with larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can simply be installed by running apt/dnf/brew install mariadb-server.

Whatever you end up choosing, as long as it is not Oracle, you will be better off.

11 Jan 2026 12:00am GMT

10 Jan 2026

feedPlanet Debian

Dirk Eddelbuettel: Rcpp 1.1.1 on CRAN: Many Improvements in Semi-Annual Update

rcpp logo

Team Rcpp is thrilled to share that an exciting new version 1.1.1 of Rcpp is now on CRAN (and also uploaded to Debian and already built for r2u).

Having switched to C++11 as the minimum standard in the previous 1.1.0 release, this version takes full advantage of it and removes a lot of conditional code catering to older standards that no longer need to be supported. Consequently, the source tarball shrinks by 39% from 3.11 MB to 1.88 MB. That is a big deal. (Size peaked with Rcpp 1.0.12 two years ago at 3.43 MB; relative to its size we are down 45%!) Removing unused code also makes maintenance easier, and quickens both compilation and installation in general.

This release continues as usual with the six-month January/July cycle started with release 1.0.5 in July 2020. Interim snapshots are always available via the r-universe page and repo. We continue to strongly encourage the use of these development releases and their testing; we tend to run our systems with them too.
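
For example, an interim snapshot can be installed directly from r-universe; the repository URL below assumes the usual r-universe naming for the RcppCore organisation:

$ Rscript -e 'install.packages("Rcpp", repos = "https://rcppcore.r-universe.dev")'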

Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3020 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.1% of all packages depend (directly) on Rcpp, and 60.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 109.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2151 (JSS, 2011) and 405 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 715.

This time, I am not attempting to summarize the different changes. The full list follows below and details all these changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.1 (2026-01-08)

  • Changes in Rcpp API:

    • An unused old R function for a compiler version check has been removed after checking no known package uses it (Dirk in #1395)

    • A narrowing warning is avoided via a cast (Dirk in #1398)

    • Demangling checks have been simplified (Iñaki in #1401 addressing #1400)

    • The treatment of signed zeros is now improved in the Sugar code (Iñaki in #1404)

    • Preparations for phasing out use of Rf_error have been made (Iñaki in #1407)

    • The long-deprecated function loadRcppModules() has been removed (Dirk in #1416 closing #1415)

    • Some non-API includes from R were refactored to accommodate R-devel changes (Iñaki in #1418 addressing #1417)

    • An accessor to Rf_rnbeta has been removed (Dirk in #1419 also addressing #1420)

    • Code accessing non-API Rf_findVarInFrame now uses R_getVarEx (Dirk in #1423 fixing #1421)

    • Code conditional on the R version now expects at least R 3.5.0; older code has been removed (Dirk in #1426 fixing #1425)

    • The non-API ATTRIB entry point to the R API is no longer used (Dirk in #1430 addressing #1429)

    • The unwind-protect mechanism is now used unconditionally (Dirk in #1437 closing #1436)

  • Changes in Rcpp Attributes:

    • The OpenMP plugin has been generalized for different macOS compiler installations (Kevin in #1414)
  • Changes in Rcpp Documentation:

    • Vignettes are now processed via a new "asis" processor adopted from R.rsp (Dirk in #1394 fixing #1393)

    • R is now cited via its DOI (Dirk)

    • A (very) stale help page has been removed (Dirk in #1428 fixing #1427)

    • The main README.md was updated emphasizing r-universe in favor of the local drat repos (Dirk in #1431)

  • Changes in Rcpp Deployment:

    • A temporary change in R-devel concerning NA part in complex variables was accommodated, and then reverted (Dirk in #1399 fixing #1397)

    • The macOS CI runners now use macos-14 (Dirk in #1405)

    • A message is shown if R.h is included before Rcpp headers as this can lead to errors (Dirk in #1411 closing #1410)

    • Old helper functions use message() to signal they are not used, deprecation and removal to follow (Dirk in #1413 closing #1412)

    • Three tests were being silenced following #1413 (Dirk in #1422)

    • The heuristic whether to run all available tests was refined (Dirk in #1434 addressing #1433)

    • Coverage has been tweaked via additional #nocov tags (Dirk in #1435)

  • Non-release Changes:

    • Two interim non-releases 1.1.0.8.1 and .2 were made in order to unblock CRAN due to changes in R-devel rather than Rcpp

Thanks to my CRANberries, you can also look at a diff to the previous interim release along with pre-releases 1.1.0.8, 1.1.0.8.1 and 1.1.0.8.2 that were needed because R-devel all of a sudden decided to move fast and break things. Not our doing.

Questions, comments etc should go to the GitHub discussion section or the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker. Both venues can be searched as well.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

10 Jan 2026 8:37pm GMT

Matthias Geiger: Building a propagation box for oyster mushrooms

Inspiration

In November I watched a short documentary about a guy who grew pearl oyster mushrooms in his backyard. He used pallet boxes (half of a europallet, 60x80x20 cm) as boxes to hold the substrate the mycelium feeds on. Since I really enjoy (foraged) mushrooms and had the raw materials lying around, I opted to build one myself. This also had the benefit of using what was available rather than just consuming, i.e. buying a pallet box.

Preparing the raw materials

I had 4.5 m x ~25 cm wooden spruce planks at home. My plan was to cut those into 2 m segments, then trim the edges down to 20 cm, and then cut them into handy pieces following the dimensions of half a pallet box. This is what they looked like after cutting them with an electric chainsaw to around 2 m:

raw_planks

You can see that the edges are still not straight, because that's how they came out of the sawmill.

Once that was done I visited a family member who had a crosscut saw, a table saw and a band saw; all the tools I would need. First we trimmed the edges of the 2 m planks with the table saw so they were somewhat straight; then they were flipped and the other edge was cut straight, and their width cut down to 20 cm.

After moving them over to the crosscut saw, dividing them into two 60 cm pieces and one 80 cm piece was fairly easy. When cutting the 2 m planks from the longer ones I allowed for extra offcuts, so I had little waste overall and could use the whole length to get the boards I wanted.

This is what the cut pieces looked like: cut_planks

Assembly

I packed up my planks, now nicely cut to size, and went to a hardware shop to buy hinges and screws. Assembly was fairly easy and fast: screw a hinge to a corner, hold the other plank onto the hinge so that the corners of both boards touch, and affix the hinge.

plank_with_hinge corner_with_hinge

When this was done, the frame looked like this:

finished_frame

As a last step I drilled 10 mm holes, placed more or less randomly, in the middle of the box. This is where the mushrooms will later grow out and can be harvested.

box_with_holes

Closing thoughts

This was a fun project I finished in a day. The hinges have the benefit that they allow the box to be folded up length-wise: folded This allows for convenient storage. Since it's too cold outside right now, cultivation will have to wait until spring. All that is still needed is mycelium, which one can simply buy, and some material for the fungus to digest. Oyster mushrooms can also be fed coffee grounds, and the fruiting bodies can be harvested roughly every two weeks.

10 Jan 2026 5:45pm GMT

Scarlett Gately Moore: I love KDE too much to retire! Snap beta releases trickling in. Stay tuned.

A short but sweet note to say I am coming out of my short retirement to help with snaps again. My time is extremely limited; however, we are working hard on getting snaps on CI, and I have some newer snaps in -beta trickling in for testing. You must install kf6-core24 from beta as well to test them (this will likely break older KDE snaps in the process, so beware). This is slow going as I work on them during my hour lunch at my day job and spare stolen moments. KDE is coming up on its 30th birthday!!! How cool is that!
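
For those wanting to test, a sketch of the relevant commands (the application snap name below is a placeholder; use whichever KDE snap is being published to beta):

$ sudo snap install kf6-core24 --channel=beta
$ sudo snap install some-kde-app --channel=beta   # hypothetical snap name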

I ❤ KDE

Like my work? Consider a donation. Thank you!

Donate

10 Jan 2026 4:54pm GMT

09 Jan 2026

feedPlanet Debian

Simon Josefsson: Debian Taco – Towards a GitSecDevOps Debian

One of my holiday projects was to understand and gain more trust in how Debian binaries are built, and as the holidays are coming to an end, I'd like to introduce a new research project called Debian Taco. I apparently need more holidays, because there is still more work to be done here, so at the end I'll summarize some pending work.

Debian Taco, or TacOS, is a GitSecDevOps rebuild of Debian GNU/Linux.

The Debian Taco project publishes rebuilt binary packages, package repository metadata (InRelease, Packages, etc), container images, cloud images and live images.

All packages are built from pristine source packages in the Debian archive. Debian Taco does not modify any Debian source code nor add or remove any packages found in Debian.

No servers are involved! Everything is built in GitLab pipelines and results are published through modern GitDevOps mechanisms like GitLab Pages and S3 object storage. You can fork the individual projects below on GitLab.com and you will have your own Debian-derived OS available for tweaking. (Of course, at some level, servers are always involved, so this claim is a bit of hyperbole.)

Goals

The goal of TacOS is to be bit-by-bit identical with official Debian GNU/Linux, and, until that has been achieved, to publish diffoscope output with the differences.

The idea is to further categorize all artifact differences into one of the following categories:

1) An obvious bug in Debian. For example, if a package does not build reproducibly.

2) An obvious bug in TacOS. For example, if our build environment does not manage to build a package.

3) Something else. This would be input for further research and consideration. This category also includes things where it isn't obvious whether it is a bug in Debian or in TacOS. Known examples:

3A) Packages in TacOS are rebuilt from the latest available source code, not the (potentially) older packages that were used to build the Debian packages. This could lead to differences in the packages. These differences may be useful to analyze to identify supply-chain attacks. See some discussion about idempotent rebuilds.

Our packages are all built from source code, unless we have not yet managed to build something; in that situation, Debian Taco falls back to the official Debian artifact. This allows an incremental publication of Debian Taco that is still 100% complete without requiring that everything be rebuilt instantly. The goal is that everything should be rebuilt, and, until that has been completed, to publish a list of artifacts that we use verbatim from Debian.

Debian Taco Archive

The Debian Taco Archive project generates and publishes the package archive (dists/tacos-trixie/InRelease, dists/tacos-trixie/main/binary-amd64/Packages.gz, pool/* etc), similar to what is published at https://deb.debian.org/debian/.

The output of the Debian Taco Archive is available from https://debdistutils.gitlab.io/tacos/archive/.
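
As a sketch, pointing apt at that archive could look like the following; note that [trusted=yes] skips signature verification and is only acceptable for experimentation, so key handling must be sorted out before any real use:

$ echo 'deb [trusted=yes] https://debdistutils.gitlab.io/tacos/archive/ tacos-trixie main' | \
    sudo tee /etc/apt/sources.list.d/tacos.list
$ sudo apt update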

Debian Taco Container Images

The Debian Taco Container Images project provides container images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

These images allow quick and simple interactive use of Debian Taco, but also make it easy to deploy in container orchestration frameworks.

Debian Taco Cloud Images

The Debian Taco Cloud Images project provides cloud images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

Launch and install Debian Taco for your cloud environment!

Debian Taco Live Images

The Debian Taco Live Images project provides live images of Debian Taco for trixie, forky and sid on the amd64 and arm64 architectures.

These images allow running Debian Taco on physical hardware (or virtual machines), and even installing it for permanent use.

Debian Taco Build Images and Packages

Packages are built using debdistbuild, which was introduced in a blog post about Build Debian in a GitLab Pipeline.

The first step is to prepare build images, which is done by the Debian Taco Build Images project. They are similar to the Debian Taco containers but have build-essential and debdistbuild installed on them.

Debdistbuild is launched in a per-architecture, per-suite CI/CD project. Currently only trixie-amd64 is available. That project has built some essential early packages like base-files, debian-archive-keyring and hostname. They are stored in Git LFS backed by S3 object storage. These packages were all built reproducibly, so Debian Taco is still 100% bit-by-bit identical to Debian, except for the renaming.

I won't launch a more massive wide-scale package rebuild until some outstanding issues have been resolved. I earlier rebuilt around 7000 packages from Trixie on amd64, so I know that the method easily scales.

Remaining work

Where are the diffoscope outputs and the list of package differences? For another holiday! Clearly this is an important remaining work item.

Another important outstanding issue is how to orchestrate launching the build of all packages. Clearly a list of packages is needed, and some trigger mechanism to understand when new packages are added to Debian.

One goal was to build packages from the tag2upload browse.dgit.debian.org archive, before checking the Debian Archive. This ought to be really simple to implement, but other matters came first.

GitLab or Codeberg?

Everything is written using basic POSIX /bin/sh shell scripts. Debian Taco uses the GitLab CI/CD pipeline mechanism together with Hetzner S3 object storage to serve packages. The scripts rely only weakly on GitLab-specific principles and were designed with the intention to support other platforms. I believe reliance on a particular CI/CD platform is a limitation, so I'd like to explore shipping Debian Taco through a Forgejo-based architecture, possibly via Codeberg, as soon as I manage to deploy reliable Forgejo runners.

The important aspects that are required are:

1) Pipelines that can build and publish web sites similar to GitLab Pages. Codeberg has a pipeline mechanism, and I've successfully used Codeberg Pages to publish the OATH Toolkit homepage. Glueing this together seems feasible.

2) Container Registry. It seems Forgejo supports a Container Registry but I've not worked with it at Codeberg to understand if there are any limitations.

3) Package Registry. The Debian Taco live images are uploaded into a package registry, because they are too big to be served through GitLab Pages. It may be converted to using a Pages mechanism, or possibly Release Artifacts, if multi-GB artifacts are supported on other platforms.

I hope to continue this work and explain more details in a series of posts. Stay tuned!

09 Jan 2026 4:33pm GMT

Russell Coker: LEAF ZE1 After 6 Months

About 6 months ago I got a Nissan LEAF ZE1 (2019 model) [1]. Generally it's going well and I'm happy with most things about it.

One issue is that, as there isn't a lot of weight in the front (the batteries are in the centre of the car), the front wheels slip easily when accelerating. It's a minor thing but a good reason for wanting AWD in an electric car.

When I got the car I got two charging devices, the one to charge from a regular 240V 10A power point (often referred to as a "granny charger") and a cable with a special EV charging connector on each end. The cable with an EV connector on each end is designed for charging that's faster than the "granny charger" but not as fast as the rapid chargers which have the cable connected to the supply so the cable temperature can be monitored and/or controlled. That cable can be used if you get a fast charger setup at your home (which I never plan to do) and apparently at some small hotels and other places with home-style EV charging. I'm considering just selling that cable on ebay as I don't think I have any need to personally own a cable other than the "granny charger".

The key fob for the LEAF has a battery installed, it's either CR2032 or CR2025 - mine has CR2025. Some reports on the Internet suggest that you can stuff a CR2032 battery in anyway but that didn't work for me as the thickness of the battery stopped some of the contacts from making a good connection. I think I could have got it going by putting some metal in between but the batteries aren't expensive enough to make it worth the effort and risk. It would be nice if I could use batteries from my stockpile of CR2032 batteries that came from old PCs but I can afford to spend a few dollars on it.

My driveway is short and if I left the charger out it would be visible from the street and at risk of being stolen. I'm thinking of chaining the charger to a tree and having some sort of waterproof enclosure for it so I don't have to go to the effort of taking it out of the boot every time I use it. Then I could also configure the car to only charge during the peak sunlight hours when the solar power my home feeds into the grid has a negative price (we have so much solar power that it's causing grid problems).

The cruise control is a pain to use, so much so that I have never yet got it to work usefully. The features look good in the documentation, but in practice it's not as good as the Kia one I've used previously, where I could just press one button to turn it on, another button to set the current speed as the cruise control speed, and then just have it work.

The electronic compass built in to the dash turned out to be surprisingly useful. I regret not gluing a compass to the dash of previous cars. One example is when I start Google navigation for a journey and it says "go South on street X" and I need to know which direction is South so I don't start in the wrong direction. Another example is when I know that I'm North of a major road that I need to take to get to my destination, so I just need to go roughly South and that is enough to get me to a road I recognise.

In the past, when there was a bird in the way I didn't do anything different: I kept driving at the same speed and relied on the bird to see me and move out of the way. Birds have faster reactions than humans and have evolved to move at the speeds cars travel on all roads other than freeways; also, birds that are on roads are usually ones that have an eye on each side of their head, so they can't fail to see my car approaching. For decades this has worked, but recently a bird just stood on the road and got squashed. So I guess I should honk when there are birds on the road.

Generally everything about the car is fine and I'm happy to keep driving it.

09 Jan 2026 3:32am GMT

Reproducible Builds (diffoscope): diffoscope 310 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 310. This version includes the following changes:

[ Chris Lamb ]
* Reference the Debian 'ocaml' package instead of 'ocaml-nox'.
  (Closes: #1125094)

You can find out more by visiting the project homepage.

09 Jan 2026 12:00am GMT

08 Jan 2026

feedPlanet Debian

Dima Kogan: Meshroom packaged for Debian

Like the title says, I just packaged Meshroom (and all the adjacent dependencies) for Debian! This is a fancy photogrammetry toolkit that uses modern software development methods. "Modern" meaning that it has a multitude of dependencies that come from lots of disparate places, which makes it impossible for a mere mortal to build the thing. The Linux "installer" is 13GB and is probably some sort of container, or something.

But now, if you have a Debian/sid box with the non-free repos enabled, you can

sudo apt install meshroom

And then you can generate and 3D-print a life-size, geometrically-accurate statue of your cat. The colmap package does a similar thing, and has been in Debian for a while. I think it can't do as many things, but it's good to have both tools easily available.

These packages are all in contrib, because they depend on a number of non-free things, most notably CUDA.
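
For reference, that means an apt source along these lines, with contrib and non-free enabled (sid shown, matching where the package currently lives):

deb http://deb.debian.org/debian sid main contrib non-free non-free-firmware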

This is currently in Debian/sid, but should be picked up by the downstream distros as they're released. The next noteworthy one is Ubuntu 26.04. Testing and feedback welcome.

08 Jan 2026 11:34pm GMT

Reproducible Builds: Reproducible Builds in December 2025

Welcome to the December 2025 report from the Reproducible Builds project!

Our monthly reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. New orig-check service to validate Debian upstream tarballs
  2. Distribution work
  3. disorderfs updated to FUSE 3
  4. Mailing list updates
  5. Three new academic papers published
  6. Website updates
  7. Upstream patches

New orig-check service to validate Debian upstream tarballs

This month, Debian Developer Lucas Nussbaum announced the orig-check service, which attempts to automatically reproduce the generation of upstream tarballs (ie. the "original source" component of a Debian source package), comparing the result to the upstream tarball actually shipped with Debian.

As of the time of writing, it is possible for a Debian developer to upload a source archive that does not actually correspond to upstream's version. Whilst this is not inherently malicious (it typically indicates some tooling/process issue), the very possibility that a maintainer's version may differ potentially permits a maintainer to make (malicious) changes that would be misattributed to upstream.

This service therefore nicely complements the whatsrc.org service, which was reported in our reports for both April and August. The orig-check service is dedicated to Lunar, who sadly passed away a year ago.


Distribution work

In Arch Linux this month, Robin Candau and Mark Hegreberg worked at making the Arch Linux WSL image bit-for-bit reproducible. Robin also shared some implementation details and future related work on our mailing list.

Continuing a series reported in these reports for March, April and July 2025 (etc.), Simon Josefsson has published another interesting article this month, itself a followup to a post Simon published in December 2024 regarding GNU Guix Container Images that are hosted on GitLab.

In Debian this month, Micha Lenk posted to the debian-backports-announce mailing list with the news that the Backports archive will now discard binaries generated and uploaded by maintainers: "The benefit is that all binary packages [will] get built by the Debian buildds before we distribute them within the archive."

Felix Moessbauer of Siemens then filed a bug in the Debian bug tracker to signal their intention to package debsbom, a software bill of materials (SBOM) generator for distributions based on Debian. This generated a discussion on the bug inquiring about the output format as well as a question about how these SBOMs might be distributed.

Holger Levsen merged a number of significant changes written by Alper Nebi Yasak to the Debian Installer in order to improve its reproducibility. As noted in Alper's merge request, "These are the reproducibility fixes I looked into before bookworm release, but was a bit afraid to send as it's just before the release, because the things like the xorriso conversion changes the content of the files to try to make them reproducible."

In addition, 76 reviews of Debian packages were added, 8 were updated and 27 were removed this month adding to our knowledge about identified issues. A new different_package_content_when_built_with_nocheck issue type was added by Holger Levsen.

Arnout Engelen posted to our mailing list reporting that they successfully reproduced the NixOS minimal installation ISO for the 25.11 release without relying on a pre-compiled package archive, with more details on their blog.

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.


disorderfs updated to FUSE 3

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues.

This month, however, Roland Clobus upgraded disorderfs from FUSE 2 to FUSE 3 after its package automatically got removed from Debian testing. Some tests in Debian currently require disorderfs to make the Debian live images reproducible, although disorderfs is not a Debian-specific tool.
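
For context, a typical disorderfs invocation looks roughly like this (a sketch based on its manual page; option names may vary between versions):

$ mkdir -p ~/src-disordered
$ disorderfs --shuffle-dirents=yes ~/src ~/src-disordered
$ # build from ~/src-disordered, then compare artifacts against a normal build
$ fusermount3 -u ~/src-disordered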


Mailing list updates

On our mailing list this month:


Three new academic papers published

Yogya Gamage and Benoit Baudry of Université de Montréal, Canada together with Deepika Tiwari and Martin Monperrus of KTH Royal Institute of Technology, Sweden published a paper on The Design Space of Lockfiles Across Package Managers:

Most package managers also generate a lockfile, which records the exact set of resolved dependency versions. Lockfiles are used to reduce build times; to verify the integrity of resolved packages; and to support build reproducibility across environments and time. Despite these beneficial features, developers often struggle with their maintenance, usage, and interpretation. In this study, we unveil the major challenges related to lockfiles, such that future researchers and engineers can address them. […]

A PDF of their paper is available online.

Benoit Baudry also posted an announcement to our mailing list, which generated a number of replies.


Betul Gokkaya, Leonardo Aniello and Basel Halak of the University of Southampton then published a paper, A taxonomy of attacks, mitigations and risk assessment strategies within the software supply chain:

While existing studies primarily focus on software supply chain attacks' prevention and detection methods, there is a need for a broad overview of attacks and comprehensive risk assessment for software supply chain security. This study conducts a systematic literature review to fill this gap. By analyzing 96 papers published between 2015-2023, we identified 19 distinct SSC attacks, including 6 novel attacks highlighted in recent studies. Additionally, we developed 25 specific security controls and established a precisely mapped taxonomy that transparently links each control to one or more specific attacks. […]

A PDF of the paper is available online via the article's canonical page.


Aman Sharma and Martin Monperrus of the KTH Royal Institute of Technology, Sweden along with Benoit Baudry of Université de Montréal, Canada published a paper this month on Causes and Canonicalization of Unreproducible Builds in Java. The abstract of the paper is as follows:

[Achieving] reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central, and we develop a novel taxonomy of six root causes of unreproducibility. […]

A PDF of the paper is available online.


Website updates

Once again, there were a number of improvements made to our website this month including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

08 Jan 2026 10:51pm GMT

Dirk Eddelbuettel: RcppCCTZ 0.2.14 on CRAN: New Upstream, Small Edits

A new release 0.2.14 of RcppCCTZ is now on CRAN, in Debian and built for r2u.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other packages (four the last time we counted) include its sources too. Not ideal, but beyond our control.

This version updates to a new upstream release, and brings some small local edits. CRAN and R-devel stumbled over us still mentioning C++11 in SystemRequirements (yes, this package is old enough for that to have mattered once). As that is a false positive (the package compiles well under any recent standard), we removed the mention. The key changes since the last CRAN release are summarised below.

Changes in version 0.2.14 (2026-01-08)

  • Synchronized with upstream CCTZ (Dirk in #46).

  • Explicitly enumerate files to be compiled in src/Makevars* (Dirk in #47)

Courtesy of my CRANberries, there is a diffstat report relative to the previous version. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 Jan 2026 3:42pm GMT

Dirk Eddelbuettel: RcppSpdlog 0.0.24 on CRAN: New Upstream

Version 0.0.24 of RcppSpdlog arrived on CRAN today, has been uploaded to Debian, and has also been built for r2u. This follows an upstream release on Sunday which we incorporated immediately, but CRAN was still closed for the winter break until yesterday. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.17.0 of spdlog, which was released yesterday morning, and includes version 12.1.0 of fmt. No other changes besides tweaks to the documentation site (which was updated to use altdoc last release) have been made.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.24 (2026-01-07)

  • Upgraded to upstream release spdlog 1.17.0 (including fmt 12.1.0)

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 Jan 2026 1:59pm GMT

Sven Hoexter: Moving from hexchat to Halloy

I'm not hanging around on IRC a lot these days, but when I do I use hexchat (and used xchat before). Probably a bad habit of clinging to what I got used to over the past 25 years. But in light of the planned removal of GTK2, it felt like it was time to look for an alternative.

Halloy looked interesting, albeit not packaged for Debian. But upstream references a flatpak (another party I had not joined so far), good enough to give it a try.

$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
$ flatpak install org.squidowl.halloy
$ flatpak run org.squidowl.halloy

Configuration ends up at ~/.var/app/org.squidowl.halloy/config/halloy/config.toml, which I linked for convenience to ~/.halloy.toml.
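
That link amounts to nothing more than:

$ ln -s ~/.var/app/org.squidowl.halloy/config/halloy/config.toml ~/.halloy.toml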

Since I connect via ZNC in an odd old setup without those virtual networks but with several accounts, and since I of course never bothered to replace the self-signed certificate, it requires some additional configuration to be able to connect. Each account gets its own servers.<foo> block like this:

[servers.bnc-oftc]
nickname = "my-znc-user-for-this-network"
server = "sven.stormbind.net"
dangerously_accept_invalid_certs = true
password = "mypassword"
port = 4711
use_tls = true

Halloy has also a small ZNC guide.

I'm growing old, so a bigger font size is useful. Be aware that font changes require an application restart to take effect.

[font]
size = 16
family = "Noto Mono"

I also prefer the single-pane mode, whose configuration can be copied and pasted as documented.

Works well enough for now. hexchat was also the last non-Wayland application I had been using (xlsclients output is finally empty).

08 Jan 2026 10:32am GMT