12 Feb 2026
Planet Debian
Dirk Eddelbuettel: RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 'to be'), and this turned up misbehavior in packages using RcppSpdlog, such as our spdl wrapper (offering a nicer interface from both R and C++), when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.27 (2026-02-11)
- Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
12 Feb 2026 1:59pm GMT
Freexian Collaborators: Debian Contributions: cross building, rebootstrap updates, Refresh of the patch tagging guidelines and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-01
Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
cross building, by Helmut Grohne
In version 1.10.1, Meson merged a patch to make it call the correct g-ir-scanner by default, thanks to Eli Schwartz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct vapigen. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the libcrypt-dev dependency from libc6-dev, that transition is now mostly complete and has resulted in most of the Perl ecosystem correctly expressing the perl-xs-dev dependencies needed for cross building. It is these infrastructure changes, each affecting several client packages, that this work targets. As a result of this continued work, about 66% of Debian's source packages now have satisfiable cross Build-Depends in unstable, and about 10000 (55%) can actually be cross built. There are now more than 500 open bug reports affecting more than 2000 packages, most of which carry patches.
rebootstrap, by Helmut Grohne
Maintaining architecture cross-bootstrap requires continued effort to adapt to archive changes such as glib2.0 dropping a build profile or an e2fsprogs FTBFS. Beyond those generic problems, architecture-specific problems with e.g. musl-linux-any or sparc may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to glibc reducing its Build-Depends in the stage2 build profile, and thanks Aurelien Jarno.
Refresh of the patch tagging guidelines, by Raphaël Hertzog
Debian Enhancement Proposal #3 (DEP-3) is named "Patch Tagging Guidelines" and standardizes the meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with changes in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (which I had kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch's output, and also to clarify the expected uses and meanings of a couple of fields, including an algorithm that parsers should follow to determine the state of a patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.
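For readers who have not used DEP-3, a typical header in the git format-patch compatible syntax looks roughly like this (all values are illustrative):

From: Jane Doe <jane@example.org>
Subject: Fix build failure with newer toolchains
Bug-Debian: https://bugs.debian.org/1234567
Forwarded: https://example.org/upstream/merge-requests/42
Last-Update: 2026-01-15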
Miscellaneous contributions
- Helmut uploaded debvm, making it work with unstable as a target distribution again.
- Helmut modernized the code base backing dedup.debian.net, significantly expanding the support for type checking.
- Helmut fixed the multiarch hinter once more given feedback from Fabian Grünbichler.
- Helmut worked on migrating the rocblas package to forky.
- Raphaël fixed RC bug #1111812 in publican and did some maintenance for tracker.debian.org.
- Carles added support in the festival Debian package for systemd socket activation and systemd service and socket units. He adapted the patch for upstream and created a merge request (also fixing a macOS build system error while working on it), updated the Orca Wiki documentation regarding festival, and discussed a 2007 bug/feature in festival which allowed having a local shell, noting that the new systemd socket activation exercises the same code path.
- Carles, using po-debconf-manager, worked on Catalan translations: 7 reviewed and sent; 5 follow-ups; 5 deleted packages.
- Carles made some po-debconf-manager changes: it now attaches the translation file on follow-ups, and bullseye compatibility issues were fixed.
- Carles reviewed a new Catalan apt translation.
- Carles investigated and reported an lxhotkey bug and sent a patch for the "abcde" package.
- Carles made minor updates to different Debian Wiki pages (lxde for dead keys, Ripping with abcde troubleshooting, VirtualBox troubleshooting).
- Stefano renamed build-details.json in Python 3.14 to fix multiarch coinstallability.
- Stefano audited the tooling and ignore lists for checking the contents of the python3.X-minimal packages, finding and fixing some issues in the process.
- Stefano made a few uploads of python3-defaults and dh-python in support of Python 3.14-as-default in Ubuntu. He also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
- Stefano did some sysadmin work on debian.social infrastructure.
- Stefano and Santiago worked on preparations for DebConf 26, especially helping the local team to open registration, and reviewing the budget to be presented for approval.
- Stefano uploaded routine updates of python-virtualenv and python-flexmock.
- Antonio collaborated with DSA on enabling a new proxy for salsa to prevent scrapers from taking the service down.
- Antonio did miscellaneous salsa administrative tasks.
- Antonio fixed a few Ruby packages towards the Ruby 3.4 transition.
- Antonio started work on planned improvements to the DebConf registration system.
- Santiago prepared unstable updates for the latest upstream versions of knot-dns and knot-resolver, the authoritative DNS server and the DNS resolver developed by CZ.NIC. It is worth highlighting that, given the separation of functionality compared to other implementations, knot-dns and knot-resolver are also less complex software, which results in advantages in terms of security (only three CVEs have been reported for knot-dns since 2011).
- Santiago made some routine reviews of merge requests proposed for the Salsa CI pipeline, e.g. a proposal to fix how sbuild chooses the chroot when building a package for experimental.
- Colin fixed lots of Python packages to handle Python 3.14 and to avoid using the deprecated pkg_resources module.
- Colin added forky support to the images used in Salsa CI pipelines.
- Colin began working on getting a release candidate of groff 1.24.0 (the first upstream release since mid-2023, so a very large set of changes) into experimental.
- Lucas kept working on the preparation for the Ruby 3.4 transition. Some packages were fixed (to support building against both Ruby 3.3 and 3.4): ruby-rbpdf, jekyll, origami-pdf, ruby-kdl, ruby-twitter, ruby-twitter-text, ruby-globalid.
- Lucas supported some potential mentors in the Google Summer of Code 26 program to submit their projects.
- Anupa worked on the point release announcements for Debian 12.13 and 13.3 from the Debian publicity team side.
- Anupa attended the publicity team meeting to discuss the team activities and to plan an online sprint in February.
- Anupa attended meetings with the Debian India team to plan and coordinate the MiniDebConf Kanpur and sent out related Micronews.
- Emilio coordinated various transitions and helped get rid of llvm-toolchain-17 from sid.
12 Feb 2026 12:00am GMT
10 Feb 2026
Planet Debian
Freexian Collaborators: Writing a new worker task for Debusine (by Carles Pina i Estany)

Debusine is a tool designed for Debian developers and Operating System developers in general. You can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org.
This post describes how to write a new worker task for Debusine. It can be used to add tasks to a self-hosted Debusine instance, or to submit new tasks to the Debusine project, adding new capabilities to Debusine.
Tasks are the lower-level pieces of Debusine workflows. Examples of tasks are Sbuild, Lintian, Debdiff (see the available tasks).
This post will document the steps to write a new basic worker task. The example will add a worker task that runs reprotest and creates an artifact of the new type ReprotestArtifact with the reprotest log.
Tasks are usually used by workflows. Workflows solve high-level goals by creating and orchestrating different tasks (e.g. a Sbuild workflow would create different Sbuild tasks, one for each architecture).
Overview of tasks
A task usually does the following:
- It receives structured data defining its input artifacts and configuration
- Input artifacts are downloaded
- A process is run by the worker (e.g. lintian, debdiff, etc.). In this blog post, it will run reprotest
- The output (files, logs, exit code, etc.) is analyzed, artifacts and relations might be generated, and the work request is marked as completed, either with Success or Failure
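Mapped onto the task class described below, these steps amount to roughly the following worker-side flow. This is a simplified, hypothetical driver for illustration only; Debusine's real task runner does considerably more (error handling, logging, debug artifacts, etc.), and run_in_executor is a made-up helper name:

def run_task(task, execute_directory: Path) -> WorkRequestResults:
    # Download the input artifacts resolved into the dynamic data.
    task.fetch_input(execute_directory)
    # Prepare the executor: install packages, locate the .dsc, etc.
    task.configure_for_execution(execute_directory)
    # Run the external process (reprotest in this post), capturing output.
    returncode = run_in_executor(task._cmdline())  # hypothetical helper
    # Decide Success or Failure, then upload the resulting artifacts.
    result = task.task_result(returncode, execute_directory)
    task.upload_artifacts(execute_directory, execution_result=result)
    return result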
If you want to follow the tutorial and add the Reprotest task, your Debusine development instance should have at least one worker, one user, a debusine client set up, and permissions for the client to create tasks. All of this can be set up by following the steps in the Contribute section of the documentation.
This blog post shows a functional Reprotest task. This task is not currently part of Debusine, and its implementation is simplified: no error handling, unit tests, specific view or docs, and some shortcuts in the environment preparation, among other things. At some point we might add a debrebuild task to Debusine, based on buildinfo files and using snapshot.debian.org to recreate the binary packages.
Defining the inputs of the task
The input of the reprotest task will be a source artifact (a Debian source package). We model the input with pydantic in debusine/tasks/models.py:
class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle


class ReprotestDynamicData(BaseDynamicTaskDataWithExecutor):
    """Reprotest dynamic data."""

    source_artifact_id: int | None = None
The ReprotestData is what the user will input. A LookupSingle is a lookup that resolves to a single artifact.
We would also have configuration for the desired variations to test, but we have left that out of this example for simplicity. Configuring variations is left as an exercise for the reader.
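As a hint for that exercise, the model could grow a field whose value later replaces the hard-coded --vary list in the command line. A minimal sketch, assuming a hypothetical variations field that is not part of this example's code:

# Hypothetical sketch: a user-configurable list of reprotest variations,
# defaulting to the set that _cmdline() hard-codes further below.
class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle
    variations: list[str] = [
        "-time",
        "-user_group",
        "-fileordering",
        "-domain_host",
    ]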
Since ReprotestData is a subclass of BaseTaskDataWithExecutor it also contains environment where the user can specify in which environment the task will run. The environment is an artifact with a Debian image.
The ReprotestDynamicData holds the resolution of all lookups. These can be seen in the "Internals" tab of the work request view.
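To make the two models concrete, this is roughly what they hold for a typical work request (the IDs are hypothetical; pydantic validates the user-supplied data, and the server fills in the dynamic data when resolving the lookups):

# User-supplied task data: a source artifact lookup (here a raw artifact
# ID) plus the environment lookup inherited from BaseTaskDataWithExecutor.
data = ReprotestData(
    source_artifact=541,
    environment="debian/match:codename=bookworm",
)

# The resolved counterpart computed by the server (hypothetical IDs).
dynamic_data = ReprotestDynamicData(
    source_artifact_id=541,
    environment_id=542,
)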
Add the new Reprotest artifact data class
In order for the reprotest task to create a new artifact of the type DebianReprotest with the log and output metadata, add the new category to ArtifactCategory in debusine/artifacts/models.py:
REPROTEST = "debian:reprotest"
In the same file add the DebianReprotest class:
class DebianReprotest(ArtifactData):
    """Data for debian:reprotest artifacts."""

    reproducible: bool | None = None

    def get_label(self) -> str:
        """Return a short human-readable label for the artifact."""
        return "reprotest analysis"
It could also include the package name or version.
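The task code below also imports a ReprotestArtifact helper from debusine/artifacts/local_artifact.py, whose definition is not shown in this post. A rough sketch of what it could look like, modeled on Debusine's other local artifacts; the base-class details here are assumptions, not Debusine's exact API:

# In debusine/artifacts/local_artifact.py (imports elided); the
# LocalArtifact base-class behavior is assumed for this sketch.
class ReprotestArtifact(LocalArtifact[DebianReprotest]):
    """Local artifact bundling the reprotest log with its metadata."""

    _category = ArtifactCategory.REPROTEST

    @classmethod
    def create(
        cls, *, reprotest_output: Path, reproducible: bool, package: str
    ) -> "ReprotestArtifact":
        # `package` could be recorded in the artifact data as well, as
        # noted above; this sketch only stores the reproducible flag.
        artifact = cls(
            category=cls._category,
            data=DebianReprotest(reproducible=reproducible),
        )
        artifact.add_local_file(reprotest_output)
        return artifact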
In order to have the category listed in the work request output artifacts table, edit the file debusine/db/models/artifacts.py: add ArtifactCategory.REPROTEST: "folder" to ARTIFACT_CATEGORY_ICON_NAMES, and ArtifactCategory.REPROTEST: "reprotest" to ARTIFACT_CATEGORY_SHORT_NAMES.
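The two additions look like this (surrounding entries elided):

# In debusine/db/models/artifacts.py:
ARTIFACT_CATEGORY_ICON_NAMES = {
    # ...
    ArtifactCategory.REPROTEST: "folder",
}

ARTIFACT_CATEGORY_SHORT_NAMES = {
    # ...
    ArtifactCategory.REPROTEST: "reprotest",
}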
Create the new Task class
In debusine/tasks/ create a new file reprotest.py.
# Copyright © The Debusine Developers
# See the AUTHORS file at the top-level directory of this distribution
#
# This file is part of Debusine. It is subject to the license terms
# in the LICENSE file found in the top-level directory of this
# distribution. No part of Debusine, including this file, may be copied,
# modified, propagated, or distributed except according to the terms
# contained in the LICENSE file.
"""Task to use reprotest in debusine."""
from pathlib import Path
from typing import Any
from debusine import utils
from debusine.artifacts.local_artifact import ReprotestArtifact
from debusine.artifacts.models import (
    ArtifactCategory,
    CollectionCategory,
    DebianSourcePackage,
    DebianUpload,
    WorkRequestResults,
    get_source_package_name,
    get_source_package_version,
)
from debusine.client.models import RelationType
from debusine.tasks import BaseTaskWithExecutor, RunCommandTask
from debusine.tasks.models import ReprotestData, ReprotestDynamicData
from debusine.tasks.server import TaskDatabaseInterface


class Reprotest(
    RunCommandTask[ReprotestData, ReprotestDynamicData],
    BaseTaskWithExecutor[ReprotestData, ReprotestDynamicData],
):
    """Task to use reprotest in debusine."""

    TASK_VERSION = 1

    CAPTURE_OUTPUT_FILENAME = "reprotest.log"

    def __init__(
        self,
        task_data: dict[str, Any],
        dynamic_task_data: dict[str, Any] | None = None,
    ) -> None:
        """Initialize object."""
        super().__init__(task_data, dynamic_task_data)
        self._reprotest_target: Path | None = None

    def build_dynamic_data(
        self, task_database: TaskDatabaseInterface
    ) -> ReprotestDynamicData:
        """Compute and return ReprotestDynamicData."""
        input_source_artifact = task_database.lookup_single_artifact(
            self.data.source_artifact
        )
        assert input_source_artifact is not None

        self.ensure_artifact_categories(
            configuration_key="input.source_artifact",
            category=input_source_artifact.category,
            expected=(
                ArtifactCategory.SOURCE_PACKAGE,
                ArtifactCategory.UPLOAD,
            ),
        )
        assert isinstance(
            input_source_artifact.data, (DebianSourcePackage, DebianUpload)
        )

        subject = get_source_package_name(input_source_artifact.data)
        version = get_source_package_version(input_source_artifact.data)

        assert self.data.environment is not None
        environment = self.get_environment(
            task_database,
            self.data.environment,
            default_category=CollectionCategory.ENVIRONMENTS,
        )

        return ReprotestDynamicData(
            source_artifact_id=input_source_artifact.id,
            subject=subject,
            parameter_summary=f"{subject}_{version}",
            environment_id=environment.id,
        )

    def get_input_artifacts_ids(self) -> list[int]:
        """Return the list of input artifact IDs used by this task."""
        if not self.dynamic_data:
            return []
        return [
            self.dynamic_data.source_artifact_id,
            self.dynamic_data.environment_id,
        ]

    def fetch_input(self, destination: Path) -> bool:
        """Download the required artifacts."""
        assert self.dynamic_data
        artifact_id = self.dynamic_data.source_artifact_id
        assert artifact_id is not None
        self.fetch_artifact(artifact_id, destination)
        return True

    def configure_for_execution(self, download_directory: Path) -> bool:
        """
        Find a .dsc in download_directory.

        Install reprotest and other utilities used in _cmdline.
        Set self._reprotest_target to it.

        :param download_directory: where to search the files
        :return: True if valid files were found
        """
        self._prepare_executor_instance()

        if self.executor_instance is None:
            raise AssertionError("self.executor_instance cannot be None")

        self.run_executor_command(
            ["apt-get", "update"],
            log_filename="install.log",
            run_as_root=True,
            check=True,
        )
        self.run_executor_command(
            [
                "apt-get",
                "--yes",
                "--no-install-recommends",
                "install",
                "reprotest",
                "dpkg-dev",
                "devscripts",
                "equivs",
                "sudo",
            ],
            log_filename="install.log",
            run_as_root=True,
        )

        self._reprotest_target = utils.find_file_suffixes(
            download_directory, [".dsc"]
        )

        return True

    def _cmdline(self) -> list[str]:
        """
        Build the reprotest command line.

        Use configuration of self.data and self._reprotest_target.
        """
        target = self._reprotest_target
        assert target is not None

        cmd = [
            "bash",
            "-c",
            f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
            "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
            "rm *.deb ; "
            "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
        ]

        return cmd

    @staticmethod
    def _cmdline_as_root() -> bool:
        r"""apt-get install --yes ./\*.deb must be run as root."""
        return True

    def task_result(
        self,
        returncode: int | None,
        execute_directory: Path,  # noqa: U100
    ) -> WorkRequestResults:
        """
        Evaluate task output and return success.

        For a successful run of reprotest:
        - must have the output file
        - exit code is 0

        :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
        """
        reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

        if reprotest_file.exists() and returncode == 0:
            return WorkRequestResults.SUCCESS

        return WorkRequestResults.FAILURE

    def upload_artifacts(
        self, exec_directory: Path, *, execution_result: WorkRequestResults
    ) -> None:
        """Upload the ReprotestArtifact with the files and relationships."""
        if not self.debusine:
            raise AssertionError("self.debusine not set")

        assert self.dynamic_data is not None
        assert self.dynamic_data.parameter_summary is not None

        reprotest_artifact = ReprotestArtifact.create(
            reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
            reproducible=execution_result == WorkRequestResults.SUCCESS,
            package=self.dynamic_data.parameter_summary,
        )

        uploaded = self.debusine.upload_artifact(
            reprotest_artifact,
            workspace=self.workspace_name,
            work_request=self.work_request_id,
        )

        assert self.dynamic_data is not None
        assert self.dynamic_data.source_artifact_id is not None
        self.debusine.relation_create(
            uploaded.id,
            self.dynamic_data.source_artifact_id,
            RelationType.RELATES_TO,
        )
In order for Debusine to discover the task, add "Reprotest" to the __all__ list in the file debusine/tasks/__init__.py.
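A sketch of that registration (the import line is an assumption based on the new module's location):

# In debusine/tasks/__init__.py (existing imports and entries elided):
from debusine.tasks.reprotest import Reprotest

__all__ = [
    # ...
    "Reprotest",
]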
Let's explain the different methods of the Reprotest class:
build_dynamic_data method
The worker has no access to Debusine's database. Lookups are all resolved before the task gets dispatched to a worker, so all it has to do is download the specified input artifacts.
The build_dynamic_data method looks up the artifact, asserts that it has a valid category, extracts the package name and version, and gets the environment in which the task will be executed.
The environment is needed to run the task (reprotest will run in a container using unshare, incus…).
def build_dynamic_data(
    self, task_database: TaskDatabaseInterface
) -> ReprotestDynamicData:
    """Compute and return ReprotestDynamicData."""
    input_source_artifact = task_database.lookup_single_artifact(
        self.data.source_artifact
    )
    assert input_source_artifact is not None

    self.ensure_artifact_categories(
        configuration_key="input.source_artifact",
        category=input_source_artifact.category,
        expected=(
            ArtifactCategory.SOURCE_PACKAGE,
            ArtifactCategory.UPLOAD,
        ),
    )
    assert isinstance(
        input_source_artifact.data, (DebianSourcePackage, DebianUpload)
    )

    subject = get_source_package_name(input_source_artifact.data)
    version = get_source_package_version(input_source_artifact.data)

    assert self.data.environment is not None
    environment = self.get_environment(
        task_database,
        self.data.environment,
        default_category=CollectionCategory.ENVIRONMENTS,
    )

    return ReprotestDynamicData(
        source_artifact_id=input_source_artifact.id,
        subject=subject,
        parameter_summary=f"{subject}_{version}",
        environment_id=environment.id,
    )
get_input_artifacts_ids method
Used to list the task's input artifacts in the web UI.
def get_input_artifacts_ids(self) -> list[int]:
    """Return the list of input artifact IDs used by this task."""
    if not self.dynamic_data:
        return []
    return [
        self.dynamic_data.source_artifact_id,
        self.dynamic_data.environment_id,
    ]
fetch_input method
Download the required artifacts on the worker.
def fetch_input(self, destination: Path) -> bool:
    """Download the required artifacts."""
    assert self.dynamic_data
    artifact_id = self.dynamic_data.source_artifact_id
    assert artifact_id is not None
    self.fetch_artifact(artifact_id, destination)
    return True
configure_for_execution method
Install the packages needed by the task and set _reprotest_target, which is used to build the task's command line.
def configure_for_execution(self, download_directory: Path) -> bool:
    """
    Find a .dsc in download_directory.

    Install reprotest and other utilities used in _cmdline.
    Set self._reprotest_target to it.

    :param download_directory: where to search the files
    :return: True if valid files were found
    """
    self._prepare_executor_instance()

    if self.executor_instance is None:
        raise AssertionError("self.executor_instance cannot be None")

    self.run_executor_command(
        ["apt-get", "update"],
        log_filename="install.log",
        run_as_root=True,
        check=True,
    )
    self.run_executor_command(
        [
            "apt-get",
            "--yes",
            "--no-install-recommends",
            "install",
            "reprotest",
            "dpkg-dev",
            "devscripts",
            "equivs",
            "sudo",
        ],
        log_filename="install.log",
        run_as_root=True,
    )

    self._reprotest_target = utils.find_file_suffixes(
        download_directory, [".dsc"]
    )

    return True
_cmdline method
Return the command line to run the task.
In this case, and to keep the example simple, we will run reprotest directly in the worker's executor VM/container, without giving it an isolated virtual server.
So, this command installs the build dependencies required by the package (so reprotest can build it) and runs reprotest itself.
def _cmdline(self) -> list[str]:
    """
    Build the reprotest command line.

    Use configuration of self.data and self._reprotest_target.
    """
    target = self._reprotest_target
    assert target is not None

    cmd = [
        "bash",
        "-c",
        f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
        "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
        "rm *.deb ; "
        "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
    ]

    return cmd
Some reprotest variations are disabled. This keeps the example simple, both in the set of packages to install and in the reprotest features exercised.
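For concreteness, with the hello_2.10-5 source package used in the execution example below, the returned command line looks like this (the .dsc path is shortened for readability):

cmd = [
    "bash",
    "-c",
    "TMPDIR=/tmp ; cd /tmp ; dpkg-source -x hello_2.10-5.dsc package/; "
    "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
    "rm *.deb ; "
    "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
]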
_cmdline_as_root method
Since packages need to be installed during execution, the command line is run as root (in the container):
@staticmethod
def _cmdline_as_root() -> bool:
    r"""apt-get install --yes ./\*.deb must be run as root."""
    return True
task_result method
The task succeeded if the log file was generated and the return code is 0.
def task_result(
    self,
    returncode: int | None,
    execute_directory: Path,  # noqa: U100
) -> WorkRequestResults:
    """
    Evaluate task output and return success.

    For a successful run of reprotest:
    - must have the output file
    - exit code is 0

    :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
    """
    reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

    if reprotest_file.exists() and returncode == 0:
        return WorkRequestResults.SUCCESS

    return WorkRequestResults.FAILURE
upload_artifacts method
Create the ReprotestArtifact with the log and the reproducible boolean, upload it, and then add a relation between the ReprotestArtifact and the source package:
def upload_artifacts(
    self, exec_directory: Path, *, execution_result: WorkRequestResults
) -> None:
    """Upload the ReprotestArtifact with the files and relationships."""
    if not self.debusine:
        raise AssertionError("self.debusine not set")

    assert self.dynamic_data is not None
    assert self.dynamic_data.parameter_summary is not None

    reprotest_artifact = ReprotestArtifact.create(
        reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
        reproducible=execution_result == WorkRequestResults.SUCCESS,
        package=self.dynamic_data.parameter_summary,
    )

    uploaded = self.debusine.upload_artifact(
        reprotest_artifact,
        workspace=self.workspace_name,
        work_request=self.work_request_id,
    )

    assert self.dynamic_data is not None
    assert self.dynamic_data.source_artifact_id is not None
    self.debusine.relation_create(
        uploaded.id,
        self.dynamic_data.source_artifact_id,
        RelationType.RELATES_TO,
    )
Execution example
To run this task in a local Debusine instance (see the steps to have it ready, with an environment, permissions and users created), you can do:
$ python3 -m debusine.client artifact import-debian -w System http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-5.dsc
(get the artifact ID from the output of that command)
The artifact can be seen in http://$DEBUSINE/debusine/System/artifact/$ARTIFACTID/.
Then create a reprotest.yaml:
$ cat <<EOF > reprotest.yaml
source_artifact: $ARTIFACT_ID
environment: "debian/match:codename=bookworm"
EOF
Instead of debian/match:codename=bookworm, it could use an environment artifact ID.
Finally, create the work request to run the task:
$ python3 -m debusine.client create-work-request -w System reprotest --data reprotest.yaml
Using the Debusine web UI you can see the work request, which should go to Running status, then Completed with Success or Failure (depending on whether reprotest could reproduce the build). The Output tab will show an artifact of type debian:reprotest with one file: the log. The artifact's Metadata tab shows its Data: the package name and reproducible (true or false).
What is left to do?
This was a simple example of creating a task. Other things that could be done:
- unit tests
- documentation
- configurable variations
- running reprotest directly on the worker host, using the executor environment as a reprotest "virtual server"
- in this specific example, the command line might be doing too many things that could maybe be done by other parts of the task, such as prepare_environment
- integrate it in a workflow so it's easier to use (e.g. part of QaWorkflow)
- extract more from the log than just pass/fail
- display the output in a more useful way (implement an artifact specialized view)
10 Feb 2026 12:00am GMT