13 Oct 2025
Planet Debian
Russell Coker: WordPress Spam Users
Just over a year ago I configured my blog to only allow signed-in users to comment, to reduce spam [1]. This has stopped all spam comments (it was even more successful than expected), but spammers keep registering accounts. I've now got almost 5000 spam accounts, an average of more than 10 per day. I don't know why they keep creating them without trying to enter comments. At first I thought that they were assembling a lot of accounts for a deluge of comment spam, but that hasn't happened.
There are some WordPress plugins for bulk deletion of users, but I couldn't find one that supports "delete all users who haven't submitted a comment". So I do it a page at a time, and since I don't want to do it 100 at a time I used the SQL below to change the page size to 400. I initially tried larger numbers like 2000 but got Chrome timeouts when trying to click the check-box to select all users. From experimenting, the time taken for that check seems worse than linear: doing it for 2000 users takes much more than 5 times the duration of doing it for 400. With 800 users it was possible to select them all, but it then gave an error about the URL being too long when it came to actually deleting them. After a binary search I found that 450 was too many but 400 worked. So now it's 12 operations to delete all the spam accounts. Each bulk delete is 5 GUI operations, so it's 60 operations to delete 15 months of spam users. This is annoying, but less than the other problems of spam.
UPDATE `wp_usermeta` SET `meta_value` = 400 WHERE `user_id` = 2 AND `meta_key` = 'users_per_page';
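For reference, a direct SQL approach could skip the GUI pagination entirely. The below is an untested sketch (it assumes the default wp_ table prefix and that every legitimate user has at least one comment or post; take a database backup first, and adjust for plugins that store per-user data elsewhere):
-- sketch: delete users (and their usermeta) with no comments and no posts,
-- keeping the admin account (ID 1)
DELETE u, um
FROM wp_users AS u
LEFT JOIN wp_usermeta AS um ON um.user_id = u.ID
LEFT JOIN wp_comments AS c ON c.user_id = u.ID
LEFT JOIN wp_posts AS p ON p.post_author = u.ID
WHERE c.user_id IS NULL AND p.post_author IS NULL AND u.ID <> 1;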
Deleting the spam users reduced the size of the backup (zstd -9 of a mysql dump) for my blog by 6.5%. Then changing from zstd -9 to -19 reduced it by another 13%. After realising this difference I configured all my mysql backups to be compressed with zstd -19, which will make a difference on the system with over 30G of zstd-compressed mysql backups.
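A backup pipeline along these lines might look like the following sketch (hypothetical output path; -T0 uses all cores, which helps offset the much slower compression at level 19):
# dump all databases and compress at the higher level
mysqldump --all-databases --single-transaction | zstd -19 -T0 > /backup/mysql-$(date +%F).sql.zst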
13 Oct 2025 4:14am GMT
12 Oct 2025
Planet Debian
Dirk Eddelbuettel: RcppSpdlog 0.0.23 on CRAN: New Upstream
Version 0.0.23 of RcppSpdlog arrived on CRAN today (after a slight delay) and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
This release updates the code to version 1.16.0 of spdlog, which was released yesterday morning, and includes version 1.12.0 of fmt. We also converted the documentation site: the mkdocs-material site is now generated via altdoc (plus local style and production tweaks) rather than directly.
I updated the package yesterday morning when spdlog was updated. But the passage was delayed for a day at CRAN as their machines still time out hitting the GPL-2 URL from the README.md badge, leading a human to manually check the log and assert the nothingburgerness of it. This timeout does not happen to me locally using the corresponding URL checker package. I pondered this in a r-package-devel thread and may just have to switch to using the R Project URL for the GPL-2 as this is in fact recurring.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.23 (2025-10-11)
Upgraded to upstream release spdlog 1.16.0 (including fmt 12.0)
The mkdocs-material documentation site is now generated via altdoc
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
12 Oct 2025 11:43am GMT
11 Oct 2025
Planet Debian
John Goerzen: A Mail Delivery Mystery: Exim, systemd, setuid, and Docker, oh my!
On mail.quux, a node of NNCPNET (the NNCP-based peer-to-peer email network), I started noticing emails not being delivered. They were all in the queue, frozen, and Exim's log had entries like:
unable to set gid=5001 or uid=5001 (euid=100): local delivery to [redacted] transport=nncp
Weird.
Stranger still, when I manually ran the queue with sendmail -qff -v, they all delivered fine.
Huh.
Well, I thought, it was a one-off weird thing. But then it happened again.
Upon investigating, I observed that this issue was happening only on messages submitted by SMTP. Which, on these systems, aren't that many.
While trying different things, I tried submitting a message to myself using SMTP. Nothing to do with NNCP at all. But look at this:
jgoerzen@[redacted] R=userforward defer (-1): require_files: error for /home/jgoerzen/.forward: Permission denied
Strraaannnge….
All the information I could find about this, even a FAQ entry, said that the problem is that Exim isn't setuid root. But it is:
-rwsr-xr-x 1 root root 1533496 Mar 29 2025 /usr/sbin/exim4
This problem started when I upgraded to Debian Trixie. So what changed there?
There are a lot of possibilities; this is running in Docker using my docker-debian-base system, which runs a regular Debian in Docker, including systemd.
I eventually tracked it down to Exim migrating from init.d to systemd in trixie, and putting a bunch of lockdowns in its service file. After a bunch of trial and error, I determined that I needed to override this set of lockdowns to make it work. These overrides did the trick:
ProtectClock=false
PrivateDevices=false
RestrictRealtime=false
ProtectKernelModules=false
ProtectKernelTunables=false
ProtectKernelLogs=false
ProtectHostname=false
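One way to apply such overrides (a sketch, assuming Debian's exim4.service unit name) is a systemd drop-in, with the directives above placed under a [Service] header:
# opens an editor for /etc/systemd/system/exim4.service.d/override.conf
systemctl edit exim4.service
# after saving the drop-in, restart the service
systemctl restart exim4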
I don't know for sure if the issue is related to setuid. But if it is, there's nothing that immediately jumps out at me about any of these that would indicate a problem with setuid.
I also don't know if running in Docker makes any difference.
Anyhow, problem fixed, but mystery not solved!
11 Oct 2025 1:44am GMT
10 Oct 2025
Planet Debian
Louis-Philippe Véronneau: Montreal's Debian & Stuff - September 2025
Our Debian User Group met on September 27th for our first meeting since our summer hiatus. As always, it was fun and productive!
Here's what we did:
pollo:
sergiodj:
- worked on the following bugs:
LeLutin:
- switched from sbuild-qemu to sbuild-unshare
- worked on the following bugs:
tvaz:
- answered applicants (usual Application Manager stuff) as part of the New Member team
- dealt with less pleasant stuff as part of the Community team
- learned about aibohphobia!
viashimo:
- looked at hardware on PCPartPicker
- starting to port a zig version of soundscraper from zig 0.12 to 0.15.1
tassia:
- set up a local instance of openQA for functionality tests of Debian images
- test-drove said instance and suggested improvements to the new documentation based on a Trixie VM
Pictures
This time again, we were hosted at La Balise (formerly ATSÉ).
It's nice to see this community project continuing to improve: the social housing apartments on the top floors should be opening this month! Lots of construction work was also ongoing to make the Espace des Possibles more accessible from the street level.
Some of us ended up grabbing a drink after the event at l'Isle de Garde, a pub right next to the venue, but I didn't take any pictures.
10 Oct 2025 9:30pm GMT
Reproducible Builds: Reproducible Builds in September 2025
Welcome to the September 2025 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
- Reproducible Builds Summit 2025
- Can't we have nice things?
- Distribution work
- Tool development
- Reproducibility testing framework
- Upstream patches
Reproducible Builds Summit 2025
Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th - 30th 2025 in Vienna, Austria!
We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.
During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.
If you're interested in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!
Can't we have nice things?
Debian Developer Gunnar Wolf blogged about George V. Neville-Neil's "Kode Vicious" column in Communications of the ACM, in which reproducible builds "is mentioned without needing to introduce it (assuming familiarity across the computing industry and academia)". Titled Can't we have nice things?, the article mentions:
Once the proper measurement points are known, we want to constrain the system such that what it does is simple enough to understand and easy to repeat. It is quite telling that the push for software that enables reproducible builds only really took off after an embarrassing widespread security issue ended up affecting the entire Internet. That there had already been 50 years of software development before anyone thought that introducing a few constraints might be a good idea is, well, let's just say it generates many emotions, none of them happy, fuzzy ones. […]
Distribution work
In Debian this month, Johannes Starosta filed a bug against the debian-repro-status package, reporting that it does not work on Debian trixie. (An upstream bug report was also filed.) Furthermore, 17 reviews of Debian packages were added, 10 were updated and 14 were removed this month, adding to our knowledge about identified issues.
In March's report, we included the news that Fedora would aim for 99% package reproducibility. This change has now been deferred to Fedora 44 according to Phoronix.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Tool development
diffoscope version 306 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous months as well as some changes by Zbigniew Jędrzejewski-Szmek to address issues with the fdtdump support […] and to move away from the deprecated codecs.open method. […][…]
strip-nondeterminism version 1.15.0-1 was uploaded to Debian unstable by Chris Lamb. It included a contribution by Matwey Kornilov to add support for inline archive files for Erlang's escript […].
kpcyrd has released a new version of rebuilderd. As a quick recap, rebuilderd is an automatic build scheduler that tracks binary packages available in a Linux distribution and attempts to compile the official binary packages from their (purported) source code and dependencies. The code for in-toto attestations has been reworked, and the instances now feature a new endpoint that can be queried to fetch the list of public-keys an instance currently identifies itself by. […]
Lastly, Holger Levsen bumped the Standards-Version field of disorderfs, with no changes needed. […][…]
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In September, a number of changes were made by Holger Levsen, including:
- Setting up six new rebuilderd workers with 16 cores and 16 GB RAM each.
- reproduce.debian.net-related:
  - Do not expose pending jobs; they are confusing without explanation. […]
  - Add a link to the v1 API specification. […]
  - Drop rebuilderd-worker.conf on a node. […]
  - Allow manual scheduling for any architectures. […]
  - Update path to trixie graphs. […]
  - Use the same rebuilder-debian.sh script for all hosts. […]
  - Add all other suites to all other archs. […][…][…][…]
  - Update SSH host keys for new hosts. […]
  - Move to the pull184 branch. […][…][…][…][…]
  - Only allow 20 GB cache for workers. […]
- OpenWrt-related:
- Jenkins nodes:
- Misc:
  - Drop disabled Alpine Linux tests for good. […]
  - Move Debian live builds and some other Debian builds to the ionos10 node. […]
  - Cleanup some legacy support from releases before Debian trixie. […]
In addition, Jochen Sprickerhof made the following changes relating to reproduce.debian.net:
- Do not expose pending jobs on the main site. […]
- Switch the frontpage to reference Debian forky […], but do not attempt to build Debian forky on the armel architecture […].
- Use a consistent and up-to-date rebuilder-debian.sh script. […]
- Fix supported worker architectures. […]
- Add a basic 'excuses' page. […]
- Move to the pull184 branch. […][…][…][…]
- Fix a typo in the JavaScript. […]
- Update front page for the new v1 API. […][…]
Lastly, Roland Clobus did some maintenance relating to the reproducibility testing of the Debian Live images. […][…][…][…]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
- Aleksei Burlakov:
- Bernhard M. Wiedemann:
- Chris Lamb:
  - #1113809 filed against ms-gsl.
  - #1113813 filed against llama.cpp.
  - #1114638 filed against python-mcstasscript.
  - #1114772 filed against rocm-docs-core.
  - #1114869 filed against octave-optics.
  - #1114950 filed against g2o.
  - #1114999 filed against golang-forgejo-forgejo-levelqueue.
  - #1115999 filed against openrgb.
- Roland Clobus:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
- IRC: #reproducible-builds on irc.oftc.net.
- Mastodon: @reproducible_builds@fosstodon.org
- Mailing list: rb-general@lists.reproducible-builds.org
10 Oct 2025 7:52pm GMT
Sergio Cipriano: Avoiding 5XX errors by adjusting Load Balancer Idle Timeout
Recently I faced a problem in production where a client was running a RabbitMQ server behind the Load Balancers we provisioned and the TCP connections were closed every minute.
My team is responsible for the LBaaS (Load Balancer as a Service) product and this Load Balancer was an Envoy proxy provisioned by our control plane.
The error was similar to this:
[2025-10-03 12:37:17,525 - pika.adapters.utils.connection_workflow - ERROR] AMQPConnector - reporting failure: AMQPConnectorSocketConnectError: timeout("TCP connection attempt timed out: ''/(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('<IP>', 5672))")
[2025-10-03 12:37:17,526 - pika.adapters.utils.connection_workflow - ERROR] AMQP connection workflow failed: AMQPConnectionWorkflowFailed: 1 exceptions in all; last exception - AMQPConnectorSocketConnectError: timeout("TCP connection attempt timed out: ''/(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('<IP>', 5672))"); first exception - None.
[2025-10-03 12:37:17,526 - pika.adapters.utils.connection_workflow - ERROR] AMQPConnectionWorkflow - reporting failure: AMQPConnectionWorkflowFailed: 1 exceptions in all; last exception - AMQPConnectorSocketConnectError: timeout("TCP connection attempt timed out: ''/(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('<IP>', 5672))"); first exception - None
At first glance, the issue is simple: the Load Balancer's idle timeout is shorter than the RabbitMQ heartbeat interval.
The idle timeout is the time at which a downstream or upstream connection will be terminated if there are no active streams. Heartbeats generate periodic network traffic to prevent idle TCP connections from closing prematurely.
Adjusting these timeout settings to align properly solved the issue.
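For the RabbitMQ case specifically, the client-side half of the fix can be as small as requesting a shorter heartbeat, so the connection never looks idle to the proxy. A sketch with hypothetical values, using pika's heartbeat parameter:
import pika

# negotiate a 30s AMQP heartbeat: well below a typical LB idle timeout,
# so the load balancer always sees traffic on the connection
params = pika.ConnectionParameters(
    host="rabbitmq.example.com",  # placeholder hostname
    port=5672,
    heartbeat=30,
)
connection = pika.BlockingConnection(params)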
However, what I want to explore in this post are other similar scenarios where it's not so obvious that the idle timeout is the problem. Introducing an extra network layer, such as an Envoy proxy, can introduce unpredictable behavior across your services, like intermittent 5XX errors.
To make this issue more concrete, let's look at a minimal, reproducible setup that demonstrates how adding an Envoy proxy can lead to sporadic errors.
Reproducible setup
I'll be using the following tools:
This setup is based on what Kai Burjack presented in his article.
Setting up Envoy with Docker is straightforward:
$ docker run \
--name envoy --rm \
--network host \
-v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml \
envoyproxy/envoy:v1.33-latest
I'll be running experiments with two different envoy.yaml configurations: one that uses Envoy's TCP proxy, and another that uses Envoy's HTTP connection manager.
Here's the simplest Envoy TCP proxy setup: a listener on port 8000 forwarding traffic to a backend running on port 8080.
static_resources:
listeners:
- name: go_server_listener
address:
socket_address:
address: 0.0.0.0
port_value: 8000
filter_chains:
- filters:
- name: envoy.filters.network.tcp_proxy
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
stat_prefix: go_server_tcp
cluster: go_server_cluster
clusters:
- name: go_server_cluster
connect_timeout: 1s
type: static
load_assignment:
cluster_name: go_server_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 8080
The default idle timeout if not otherwise specified is 1 hour, which is the case here.
The backend setup is simple as well:
package main
import (
"fmt"
"net/http"
"time"
)
func helloHandler(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Hello from Go!"))
}
func main() {
http.HandleFunc("/", helloHandler)
server := http.Server{
Addr: ":8080",
IdleTimeout: 3 * time.Second,
}
fmt.Println("Starting server on :8080")
panic(server.ListenAndServe())
}
The IdleTimeout is set to 3 seconds to make it easier to test.
Now, oha is the perfect tool to generate the HTTP requests for this test. The load test is not meant to stress this setup; the idea is to wait long enough so that some connections are closed. The burst-delay feature will help with that:
$ oha -z 30s -w --burst-delay 3s --burst-rate 100 http://localhost:8000
I'm running the load test for 30 seconds, sending 100 requests at three-second intervals. I also use the -w option to wait for ongoing requests when the duration is reached. The result looks like this:
We had 886 responses with status code 200 and 64 connections closed. The backend terminated 64 connections while the load balancer still had active requests directed to it.
Let's change the Load Balancer idle_timeout to two seconds.
filter_chains:
- filters:
- name: envoy.filters.network.tcp_proxy
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
stat_prefix: go_server_tcp
cluster: go_server_cluster
idle_timeout: 2s # <--- NEW LINE
Run the same test again.
Great! Now all the requests worked.
This is a common issue, not specific to Envoy Proxy or the setup shown earlier. Major cloud providers have all documented it.
The AWS troubleshooting guide for Application Load Balancers says this:
The target closed the connection with a TCP RST or a TCP FIN while the load balancer had an outstanding request to the target. Check whether the keep-alive duration of the target is shorter than the idle timeout value of the load balancer.
Google's troubleshooting guide for Application Load Balancers mentions this as well:
Verify that the keepalive configuration parameter for the HTTP server software running on the backend instance is not less than the keepalive timeout of the load balancer, whose value is fixed at 10 minutes (600 seconds) and is not configurable.
The load balancer generates an HTTP 5XX response code when the connection to the backend has unexpectedly closed while sending the HTTP request or before the complete HTTP response has been received. This can happen because the keepalive configuration parameter for the web server software running on the backend instance is less than the fixed keepalive timeout of the load balancer. Ensure that the keepalive timeout configuration for HTTP server software on each backend is set to slightly greater than 10 minutes (the recommended value is 620 seconds).
RabbitMQ docs also warn about this:
Certain networking tools (HAproxy, AWS ELB) and equipment (hardware load balancers) may terminate "idle" TCP connections when there is no activity on them for a certain period of time. Most of the time it is not desirable.
Most of them are talking about Application Load Balancers, while the test I did used a Network Load Balancer. For the sake of completeness, I will do the same test using Envoy's HTTP connection manager.
The updated envoy.yaml:
static_resources:
listeners:
- name: listener
address:
socket_address:
address: 0.0.0.0
port_value: 8000
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: go_server_http
access_log:
- name: envoy.access_loggers.stdout
typed_config:
"@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
http_filters:
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
route_config:
name: http_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match:
prefix: "/"
route:
cluster: go_server_cluster
clusters:
- name: go_server_cluster
type: STATIC
load_assignment:
cluster_name: go_server_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 0.0.0.0
port_value: 8080
The yaml above is an example of a service proxying HTTP from 0.0.0.0:8000 to 0.0.0.0:8080. The only difference from a minimal configuration is that I enabled access logs.
Let's run the same tests with oha.
Even though the success rate is 100%, the status code distribution shows some responses with status code 503. This is a case where it's not that obvious that the problem is related to the idle timeout.
However, it's clear when we look at the Envoy access logs:
[2025-10-10T13:32:26.617Z] "GET / HTTP/1.1" 503 UC 0 95 0 - "-" "oha/1.10.0" "9b1cb963-449b-41d7-b614-f851ced92c3b" "localhost:8000" "0.0.0.0:8080"
UC is the short name for UpstreamConnectionTermination. This means the upstream, which is the golang server, terminated the connection.
To fix this once again, the Load Balancer idle timeout needs to change:
clusters:
- name: go_server_cluster
type: STATIC
typed_extension_protocol_options: # <--- NEW BLOCK
envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
"@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
common_http_protocol_options:
idle_timeout: 2s # <--- NEW VALUE
explicit_http_config:
http_protocol_options: {}
Finally, the sporadic 503 errors are over:
To Sum Up
Here's an example of the values my team recommends to our clients:
Key Takeaways:
- The Load Balancer idle timeout should be less than the backend (upstream) idle/keepalive timeout.
- When we are working with long-lived connections, the client (downstream) should use a keepalive interval smaller than the LB idle timeout.
10 Oct 2025 5:04pm GMT
Dirk Eddelbuettel: RcppArmadillo 15 CRAN Transition: Offering Office Hours
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1273 other packages on CRAN, downloaded 41.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 651 times according to Google Scholar.
Armadillo 15 brought changes. We mentioned these in the 15.0.1-1 and 15.0.2-1 release blog posts:
- Minimum C++ standard of C++14
- No more suppression of deprecation notes
(The second point is a consequence of the first. Prior to C++14, deprecation notes were issued via a macro, and the macro was set up by Conrad in the common way of allowing an override, which we took advantage of in RcppArmadillo, effectively shielding downstream packages. In C++14 this is now an attribute, and those cannot be suppressed.)
We tested this then-upcoming change extensively: thirteen reverse-dependency runs exploring different settings, leading to the current package setup where an automatic fallback to the last Armadillo 14 release covers hardwired C++11 use while Armadillo 15 serves all others. Given the 1200+ reverse dependencies, this took considerable time. All this was also quite extensively discussed with CRAN (especially Kurt Hornik) and documented / controlled via a series of issue tickets starting with overall issue #475 covering the subissues:
- open issue #475 describes the version selection between Armadillo 14 and 15 via #define
- open issue #476 illustrates how packages without deprecation notes are already suitable for Armadillo 15 and C++14
- open issue #477 demonstrates how a package with a simple deprecation note can be adjusted for Armadillo 15 and C++14
- closed issue #479 documents a small bug we created in the initial transition package RcppArmadillo 15.0.1-1 and fixed in the 15.0.2-1
- closed issue #481 discusses removal of the check for insufficient LAPACK routines which has been removed given that R 4.5.0 or later has sufficient code in its fallback LAPACK (used e.g. on Windows)
- open issue #484 offering help to the (then 226) packages needing help transitioning from (enforced) C++11
- open issue #485 offering help to the (then 135) packages needing help with deprecations
- open issue #489 coordinating pull requests and patches to 35 packages for the C++11 transition
- open issue #491 coordinating pull requests and patches to 25 packages for deprecation transition
The sixty pull requests (or emailed patches) followed a suggestion by CRAN to rank-order the affected packages by their reverse dependency counts in descending order. Now, while this change from Armadillo 14 to 15 was happening, CRAN also tightened the C++11 requirement for packages and imposed a deadline for changes. In discussion, CRAN also convinced me that a deadline for the deprecation warning, now unmasked, was viable (and is in fairness commensurate with similar, earlier changes triggered by changes in the behaviour of either gcc/g++ or clang/clang++). So we now have two larger deadline campaigns affecting the package (and as always there are some others).
These deadlines are coming close: October 17 for the C++11 transition, and October 23 for the deprecation warning. Now, as became clear while preparing the sixty pull requests and patches, these changes are often relatively straightforward. For the former, remove the C++11 enforcement and the package will likely build without changes. For the latter, make the often simple change (e.g. switch from arma::is_finite to std::isfinite). I have not encountered anything much more complicated yet.
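To illustrate just how mechanical the deprecation half usually is, here is a minimal sketch (hypothetical function name; the scalar finiteness check moves from Armadillo to the C++ standard library):
// before: arma::is_finite(x) on a scalar now triggers a deprecation note
// after: use the standard library directly (available since C++11)
#include <cmath>

bool finite_scalar(double x) {
    return std::isfinite(x);
}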
The number of affected packages (approximated by looking at all packages with a reverse dependency on RcppArmadillo and having a deadline) can be computed as
suppressMessages(library(data.table))
D <- setDT(tools::CRAN_package_db())
P <- data.table(Package=tools::package_dependencies("RcppArmadillo", reverse=TRUE, db=D)[[1]])
W <- merge(P, D, all.x=TRUE)[is.na(Deadline)==FALSE,c(1:2,38,68)]
W
W[, nrevdep := length(tools::package_dependencies(Package, reverse=TRUE, recursive=TRUE, db=D)[[1]]), by=Package]
W[order(-nrevdep)]
and has been declining steadily from over 350 to now under 200. For that a big and heartfelt Thank You! to all the maintainers who already addressed their package and uploaded updated packages to CRAN. That rocks, and is truly appreciated.
Yet the number is still large. And while issues #489 and #491 show a number of 'pending' packages that have merged but not uploaded (yet?), there are also all the other packages I have not been able to look at in detail. While preparing sixty PRs / patches was viable over a period of a good week, I cannot create these for all packages. So with that said, here is a different suggestion for help: all of next week, I will be holding open-door 'open source' office hours online twice each day (11:00h to 13:00h Central, 16:00h to 18:00h Central), which can be booked via this booking link for Monday to Friday next week in fifteen- or thirty-minute slots. This should offer Google Meet video conferencing (with jitsi as an alternative; you should be able to control that) which should allow for screen sharing. (I cannot hook up Zoom as my default account has organization settings with a different calendar integration.)
If you are reading this and have a package that still needs help, I hope to see you in the Open Source Office Hours to aid in the RcppArmadillo package updates for your package. Please book a slot!
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
10 Oct 2025 2:20pm GMT
John Goerzen: I’m Not Very Popular, Thankfully. That Makes The Internet Fun Again
"Like and subscribe!"
"Help us get our next thousand (or million) followers!"
I was using Linux before it was popular. Back in the day when you had to write Modelines for your XF86Config file - and do it properly, or else you might ruin your monitor. Back when there wasn't a word processor (thankfully; that forced me to learn LaTeX, which I used to write my papers in college).
I then ran Linux on an Alpha, a difficult proposition in an era when web browsers were either closed-source or too old to be useful; it took all sorts of workarounds, including emulating Digital UNIX.
Recently I wrote a deep dive into the DOS VGA text mode and how to achieve it on a modern UEFI Linux system.
Nobody can monetize things like this. I am one of maybe a dozen or two people globally that care about that sort of thing. That's fine.
Today, I'm interested in things like asynchronous communication, NNCP, and Gopher. Heck, I'm posting these words on a blog. Social media displaced those, right?
Some of the things I write about here have maybe a few dozen people on the planet interested in them. That's fine.
I have no idea how many people read my blog. I have no idea where people hear about my posts from. I guess I can check my Mastodon profile to see how many followers I have, but it's not something I tend to do. I don't know if the number is going up or down, or if it is all that much in Mastodon terms (probably not).
Thank goodness.
Since I don't have to care about what's popular, or spend hours editing video, or thousands of dollars on video equipment, I can just sit down and write about what interests me. If that also interests you, then great. If not, you can find what interests you - also fine.
I once had a colleague who was one of these "plugged into Silicon Valley" types. He would periodically tell me, with a mixture of excitement and awe, that one of my posts had made Hacker News.
This was always news to me, because I never paid a lot of attention over there. Occasionally that would bring in some excellent discussion, but more often than not, it was comments from people that hadn't read or understood the article, trying to appear smart by arguing with what it said (or rather, what they imagined it said, I guess).
The thing I value isn't subscriber count. It's discussion. A little discussion in the comments or on Mastodon - that's perfect, even if only 10 people read the article. I have the most fun in a community.
And I'll go on writing about NNCP and Gopher and non-square DOS pixels, with audiences of dozens globally. I have no advertisers to keep happy, and I enjoy it, so why not?
10 Oct 2025 12:59am GMT
09 Oct 2025
Planet Debian
Thorsten Alteholz: My Debian Activities in September 2025
Debian LTS
This was my hundred-thirty-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
- [DLA 4168-2] openafs regression update to fix an incomplete patch in the previous upload.
- [DSA 5998-1] cups security update to fix two CVEs related to an authentication bypass and a denial of service.
- [DLA 4298-1] cups security update to fix two CVEs related to an authentication bypass and a denial of service.
- [DLA 4304-1] cjson security update to fix one CVE related to an out-of-bounds memory access.
- [DLA 4307-1] jq security update to fix one CVE related to a heap buffer overflow.
- [DLA 4308-1] corosync security update to fix one CVE related to a stack-based buffer overflow.
An upload of spim was not needed, as the corresponding CVE could be marked as ignored. I also started to work on an open-vm-tools update and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the eighty-sixth ELTS month. During my allocated time I uploaded or worked on:
- [ELA-1512-1] cups security update to fix two CVEs in Buster and Stretch, related to an authentication bypass and a denial of service.
- [ELA-1520-1] jq security update to fix one CVE in Buster and Stretch, related to a heap buffer overflow.
- [ELA-1524-1] corosync security update to fix one CVE in Buster and Stretch, related to a stack-based buffer overflow.
- [ELA-1527-1] mplayer security update to fix ten CVEs in Stretch, distributed all over the code.
The CVEs for open-vm-tools could be marked as not-affected as the corresponding plugin was not yet available. I also attended the monthly LTS/ELTS meeting.
Debian Printing
This month I uploaded a new upstream version or a bugfix version of:
- … ink to unstable to fix a gcc15 issue.
- … pnm2ppa to unstable to fix a gcc15 issue.
- … rlpr to unstable to fix a gcc15 issue.
This work is generously funded by Freexian!
Debian Astro
This month I uploaded a new upstream version or a bugfix version of:
- … supernovas to unstable (sponsored upload).
- … boinor to unstable.
- … einsteinpy to unstable.
- … libdfu-ahp to unstable to fix a cmake4 issue.
- … openvlbi to unstable to fix a cmake4 issue.
- … indi-aagcloudwatcher-ng to unstable to fix a cmake4 issue.
- … indi-astrolink4 to unstable to fix a cmake4 issue.
- … indi-bresserexos2 to unstable to fix a cmake4 issue.
- … indi-apogee to unstable to fix a cmake4 issue.
- … indi-dreamfocuser to unstable to fix a cmake4 issue.
- … indi-astromechfox to unstable to fix a cmake4 issue.
- … indi-avalon to unstable to fix a cmake4 issue.
- … indi-armadillo-platypus to unstable to fix a cmake4 issue.
- … indi-beefocus to unstable to fix a cmake4 issue.
- … indi-aok to unstable to fix a cmake4 issue.
- … indi-nexdome to unstable to fix a cmake4 issue.
- … indi-nightscape to unstable to fix a cmake4 issue.
- … indi-fli to unstable to fix a cmake4 issue.
- … indi-orion-ssg3 to unstable to fix a cmake4 issue.
- … indi-rtklib to unstable to fix a cmake4 issue.
- … indi-gpsd to unstable to fix a cmake4 issue.
- … indi-gige to unstable to fix a cmake4 issue.
- … indi-ffmv to unstable to fix a cmake4 issue.
- … indi-gpsnmea to unstable to fix a cmake4 issue.
- … indi-maxdomeii to unstable to fix a cmake4 issue.
- … indi-gphoto to unstable to fix a cmake4 issue.
- … indi-limesdr to unstable to fix a cmake4 issue.
- … indi-webcam to unstable to fix a cmake4 issue.
- … libahp-gt to unstable to fix a cmake4 issue.
- … indi-sx to unstable to fix a cmake4 issue.
- … libapogee3 to unstable to fix a cmake4 issue.
- … libpktriggercord to unstable to fix a cmake4 issue.
- … indi-sheylak to unstable to fix a cmake4 issue.
- … libfli to unstable to fix a cmake4 issue.
- … indi-talon6 to unstable to fix a cmake4 issue.
- … indi-starbook to unstable to fix a cmake4 issue.
- … indi-mgen to unstable to fix a cmake4 issue.
- … libahp-xc to unstable to fix a cmake4 issue.
- … indi-spectracyber to unstable to fix a cmake4 issue.
- … astronomical-almanac to unstable to fix a gcc15 issue.
- … indi-eqmod to unstable to fix a gcc15 issue.
- … indi-pentax to unstable
Debian IoT
This month I uploaded a new upstream version or a bugfix version of:
- … radlib to unstable; Joachim Zobel prepared a patch for a name collision of a binary.
- … pyicloud to unstable.
Debian Mobcom
This month I uploaded a new upstream version or a bugfix version of:
- … osmocom-dahdi-linux to unstable.
misc
The main topics of this month have been gcc15 and cmake4, so my upload rate was extra high. This month I uploaded a new upstream version or a bugfix version of:
- … readsb to unstable.
- … gcal to unstable. This was my first upload of a release where I am upstream as well.
- … libcds to unstable to fix a cmake4 issue.
- … pkcs11-proxy to unstable to fix a cmake4 issue.
- … force-ip-protocol to unstable to fix a gcc15 issue.
- … httperf to unstable to fix a gcc15 issue.
- … otpw to unstable to fix a gcc15 issue.
- … rplay to unstable to fix a gcc15 issue.
- … uucp to unstable to fix a gcc15 issue.
- … spim to unstable to fix a gcc15 issue.
- … usb-modeswitch to unstable to fix a gcc15 issue.
- … gnucobol3 to unstable to fix a gcc15 issue.
- … gnucobol4 to unstable to fix a gcc15 issue.
I wonder which MBF will happen next; I guess the /var/lock issue will be a good candidate.
In my fight against outdated RFPs, I closed 30 of them in September. Meanwhile only 3397 are still open, so don't hesitate to help close one or another.
FTP master
This month I accepted 294 and rejected 28 packages. The overall number of packages that got accepted was 294.
09 Oct 2025 2:24pm GMT
Dirk Eddelbuettel: xptr 1.2.0 on CRAN: New(ly Adopted) Package!
Excited to share that xptr is back on CRAN! The xptr package helps to create, check, modify, use, share, … external pointer objects.
External pointers are used quite extensively throughout R to manage external 'resources' such as database connection objects and the like, and can be very useful to pass pointers to just about any C / C++ data structure around. While described in Writing R Extensions (notably Section 5.13), they can be a little bare-bones, and so this package can be useful. It had been created by Randy Lai and maintained by him from 2017 to 2020, but then fell off CRAN. In work with nanoarrow and its clean and minimal Arrow interface, xptr came in handy, so I adopted it.
Several extensions and updates have been added: (compiled) function registration, continuous integration, tests, refreshed and extended documentation as well as a print format extension useful for PyCapsule objects when passing via reticulate. The package documentation site was switched to altdoc driving the most excellent Material for MkDocs framework (providing my first test case of altdoc replacing my older local scripts; I should post some more about that …).
The first NEWS entry follows.
Changes in version 1.2.0 (2025-10-03)
New maintainer
Compiled functions are now registered, .Call() adjusted
README.md and DESCRIPTION edited and updated
Simple unit tests and continuous integration have been added
The package documentation site has been recreated using altdoc
All manual pages for functions now contain \value{} sections
For more, see the package page, the git repo or the documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
09 Oct 2025 2:14pm GMT
Charles: How to Build an Incus Buster Image
It's always nice to have container images of Debian releases to test things, run applications or explore a bit without polluting your host machine. From some Brazilian friends (you know who you are ;-), I've learned the best way to debug a problem or test a fix is spinning up an incus container, getting into it and finding the minimum reproducer. So the combination incus + Debian is something that I'm very used to, but the problem is there are no images for Debian ELTS, and testing security fixes to see if they actually fix the vulnerability and don't break anything else is very important.
Well, the regular images don't materialize out of thin air, right? So we can learn how they are made and try to generate ELTS images in the same way - shouldn't be that difficult, right? Well, kinda ;-)
The images available by default in incus come from images.linuxcontainers.org and are built by Jenkins using distrobuilder. If you follow the links, you will find the repository containing the yaml image definitions used by distrobuilder at github.com/lxc/lxc-ci. With a bit of investigation work, a fork, an incus VM with distrobuilder installed and some magic (also called trial and error), I was able to build a buster image! Whooray! But VM and stretch images are still a work in progress.
Anyway, I wanted to share how you can build your images and document this process so I don't forget, so here we are…
Building Instructions
We will use an incus trixie VM to perform the build so we don't clutter our own machine.
incus launch images:debian/trixie <instance-name> --vm
Then let's hop into the machine and install the dependencies.
incus shell <instance-name>
And…
apt install git distrobuilder
Let's clone the repository with the yaml definition to build a buster container.
git clone --branch support-debian-buster https://github.com/charles2910/lxc-ci.git
# and cd into it
cd lxc-ci
Then all we need is to pass the correct arguments to distrobuilder so it can build the image. It can output the image in the current directory or in a pre-defined place, so let's create an easy place for the images.
mkdir -p /tmp/images/buster/container
# and perform the build
distrobuilder build-incus images/debian.yaml /tmp/images/buster/container/ \
-o image.architecture=amd64 \
-o image.release=buster \
-o image.variant=default \
-o source.url="http://archive.debian.org/debian"
It requires a build definition written in yaml format to perform the build. If you are curious, check the images/ subdir.
If all worked correctly, you should have two files in your pre-defined target directory. In our case, /tmp/images/buster/container/
contains:
incus.tar.xz rootfs.squashfs
Let's copy it to our host so we can add the image to our incus server.
incus file pull <instance-name>/tmp/images/buster/container/incus.tar.xz .
incus file pull <instance-name>/tmp/images/buster/container/rootfs.squashfs .
# and import it as debian/10
incus image import incus.tar.xz rootfs.squashfs --alias debian/10
If we are lucky, we can run our Debian buster container now!
incus launch local:debian/10 <debian-buster-instance>
incus shell <debian-buster-instance>
Well, now all that is left is to install Freexian's ELTS package repository and update the image to get a lot of CVE fixes.
apt install --assume-yes wget
wget https://deb.freexian.com/extended-lts/archive-key.gpg -O /etc/apt/trusted.gpg.d/freexian-archive-extended-lts.gpg
cat <<EOF >/etc/apt/sources.list.d/extended-lts.list
deb http://deb.freexian.com/extended-lts buster-lts main contrib non-free
EOF
apt update
apt --assume-yes upgrade
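A quick sanity check inside the container might look like this (a sketch; apt policy shows which suite a package now comes from):
cat /etc/debian_version
apt policy base-files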
09 Oct 2025 3:01am GMT
08 Oct 2025
Planet Debian
Colin Watson: Free software activity in September 2025
About 90% of my Debian contributions this month were sponsored by Freexian.
You can also support my work directly via Liberapay or GitHub Sponsors.
Some months I feel like I'm pedalling furiously just to keep everything in a roughly working state. This was one of those months.
Python team
I upgraded these packages to new upstream versions:
- aiosmtplib
- billiard
- dbus-fast
- django-modeltranslation
- django-sass-processor
- feedparser
- flask-security
- jaraco.itertools
- mariadb-connector-python
- mistune
- more-itertools
- pydantic-settings
- pyina
- pytest-mock
- python-asyncssh
- python-bytecode
- python-ciso8601
- python-django-pgbulk
- python-ewokscore
- python-ewoksdask
- python-ewoksutils
- python-expandvars
- python-git
- python-gssapi
- python-holidays
- python-jira
- python-jpype
- python-mastodon
- python-orjson (fixing a build failure)
- python-pyftpdlib
- python-pytest-asyncio (fixing a build failure)
- python-pytest-run-parallel
- python-recurring-ical-events
- python-redis
- python-watchfiles (fixing a build failure)
- python-x-wr-timezone
- python-zipp
- pyzmq
- readability
- scalene (fixing test failures with pydantic 2.12.0~a1)
- sen (contributed supporting fix upstream)
- sqlfluff
- trove-classifiers
- ttconv
- vdirsyncer
- zope.component
- zope.configuration
- zope.deferredimport
- zope.deprecation
- zope.exceptions
- zope.i18nmessageid
- zope.interface
- zope.proxy
- zope.schema
- zope.security (contributed supporting fix upstream)
- zope.testing
- zope.testrunner
I had to spend a fair bit of time this month chasing down build/test regressions in various packages due to some other upgrades, particularly to pydantic, python-pytest-asyncio, and rust-pyo3:
- aiohappyeyeballs
- aiohttp-sse (filed bug and contributed fix upstream)
- aioimaplib (tried to fix upstream but failed to get tests working)
- aiosmtplib (contributed upstream)
- app-model
- aresponses (contributed upstream)
- fastapi (filed bug and contributed fix upstream)
- ipython
- opendrop (filed bug and contributed fix upstream)
- pydantic-extra-types
- pytest-relaxed (contributed upstream)
- python-drf-spectacular
- python-fakeredis
- python-jsonrpc-websocket (contributed upstream, and sponsored upload for Tianyu Chen)
- python-odmantic
- python-pytest-trio (upstream issue, upstream PR, pytest issue about possible root cause)
- python-repoze.sphinx.autointerface
- python-sluurp
- python-youtubeaio (filed bug)
- rtsp-to-webrtc
After some upstream discussion I requested removal of pydantic-compat, since it was more trouble than it was worth to keep it working with the latest pydantic version.
I filed dh-python: pybuild-plugin-pyproject doesn't know about headers and added it to Python/PybuildPluginPyproject, and converted some packages to pybuild-plugin-pyproject:
- aresponses
- python-azure, fixing an autopkgtest failure in kombu
- python-ciso8601
I updated dh-python to suppress generated dependencies that would be satisfied by python3 >= 3.11.
pkg_resources is deprecated. In most cases replacing it is a relatively simple matter of porting to importlib.resources (a sketch of the simple case follows the list below), but packages that used its old namespace package support need more complicated work to port them to implicit namespace packages. We had quite a few bugs about this on zope.* packages, but fortunately upstream did the hard part of this recently. I went round and cleaned up most of the remaining loose ends, with some help from Alexandre Detiste. Some of these aren't completely done yet as they're awaiting new upstream releases:
- zope.component (Debian bug, upstream PR)
- zope.configuration (Debian bug, upstream PR)
- zope.deferredimport (Debian bug, upstream PR)
- zope.deprecation (Debian bug, upstream PR)
- zope.exceptions (Debian bug, upstream PR)
- zope.i18nmessageid (Debian bug, upstream PR)
- zope.interface (Debian bug, upstream PR)
- zope.location (Debian bug, upstream PR)
- zope.security (Debian bug, upstream PR)
- zope.testing (Debian bug, upstream PR)
- zope.testrunner (Debian bug, upstream PR)
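As promised above, here is a minimal sketch of the simple (non-namespace-package) case of the port, with mypkg and the data file as hypothetical names:
# before (deprecated):
#   import pkg_resources
#   data = pkg_resources.resource_string("mypkg", "data/schema.json")
# after (importlib.resources, Python 3.9+):
from importlib.resources import files

data = files("mypkg").joinpath("data/schema.json").read_bytes()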
This work also caused a couple of build regressions, which I fixed:
I fixed jupyter-client so that its autopkgtests would work in Debusine.
I fixed waitress to build with the nocheck profile.
I fixed several other build/test failures:
I fixed some other bugs:
- python-jpype: java.lang.ClassNotFoundException: org.jpype.classloader.DynamicClassLoader
- python-jpype: Please add autopkgtests (to add coverage for python3-numpy)
Code reviews
- debbugs: Fix dep8 autopkgtests, make Salsa CI fully green (still in progress)
- dput-ng: Add trixie-backports & bookworm-backports-sloppy
- openssh: Update sshd@.service to follow upstream
- openssh: authfd: fallback to default if $SSH_AUTH_SOCK is unset (still in progress)
- putty: d/rules: Use dh_assistant restore-file-on-clean
- python-debian: Update from pyupgrade to -py37-plus (still in progress)
- release-notes: issues: mention tzdata-legacy split
Other bits and pieces
I fixed several CMake 4 build failures:
I got CI for debbugs passing (!22, !23).
I fixed a build failure with GCC 15 in trn4.
I filed a release-notes bug about the tzdata reorganization in the trixie cycle.
I filed and fixed a git-dpm regression with bash 5.3.
I upgraded libfilter-perl to a new upstream version.
I optimized some code in ubuntu-dev-tools that made O(n) HTTP requests when it could instead make O(1).
08 Oct 2025 6:16pm GMT
Dirk Eddelbuettel: RPushbullet 0.3.5: Mostly Maintenance
A new version 0.3.5 of the RPushbullet package arrived on CRAN. It marks the first release in 4 1/2 years for this mature and feature-stable package. RPushbullet interfaces the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send (programmatic) alerts like the one to the left to your browser, phone, tablet, … - or all at once.
This releases reflects mostly internal maintenance and updates to the documentation site, to continuous integration, to package metadata, … and one code robustitication. See below for more details.
Changes in version 0.3.5 (2025-10-08)
URL and BugReports fields have been added to DESCRIPTION
The
pbPost
function deals more robustly with the case of multiple target emailsThe continuous integration and the README badge have been updated
The DESCRIPTION file now use Authors@R
The (encrypted) unit test configuration has been adjusted to reflect the current set of active devices
The mkdocs-material documentation site is now generated via altdoc
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
08 Oct 2025 5:46pm GMT
Sven Hoexter: Backstage Render Markdown in a Collapsible Block
Brief note to maybe spare someone else the trouble. If you want to hide e.g. a huge table in Backstage (techdocs/mkdocs) behind a collapsible element, you need the md_in_html extension and use the markdown attribute for it to kick in on the <details> html tag.
Add the extension to your mkdocs.yaml:
markdown_extensions:
- md_in_html
Hide the table in your markdown document in a collapsible element like this:
<details markdown>
<summary>Long Table</summary>
| Foo | Bar |
|-|-|
| Fizz | Buzz |
</details>
It's also required to have an empty line between the html tag and the start of the markdown part. It rendered that way for me in VSCode, GitHub and Backstage.
08 Oct 2025 3:17pm GMT
03 Oct 2025
Planet Debian
Dirk Eddelbuettel: #053: Adding llvm Snapshots for R Package Testing
Welcome to post 53 in the R4 series.
Continuing with posts #51 from Tuesday and #52 from Wednesday and their stated intent of posting some more … here is another quick one. Earlier today I helped another package developer who came to the r-package-devel list asking for help with a build error on the Fedora machine at CRAN running recent / development clang. In such situations, the best first step is often to replicate the issue. As I pointed out on the list, the LLVM team behind clang maintains an apt repo at apt.llvm.org/, making it a good resource to add to Debian-based containers such as Rocker r-base or the official r-base (the two are in fact interchangeable, and I take care of both).
A small pothole, however, is that the documentation at the top of the apt.llvm.org site is a bit stale and behind on two aspects that changed on current Debian systems (i.e. unstable/testing as used for r-base). First, apt now prefers files ending in .sources (in a nicer format) and second, it now really requires a key (which is good practice). As it took me a few minutes to regather how to meet both requirements, I reckoned I might as well script this.
Et voilà the following script does that:
- it can update and upgrade the container (currently commented-out)
- it fetches the repository key in ascii form from the llvm.org site
- it creates the sources entry, here tagged for llvm 'current' (22 at time of writing)
- it sets up the required ~/.R/Makevars to use that compiler
- it installs clang-22 (and clang++-22) (still using the g++ C++ library)
#!/bin/sh
## Update does not hurt but is not strictly needed
#apt update --quiet --quiet
#apt upgrade --yes
## wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc
## or as we are root in container
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key > /etc/apt/trusted.gpg.d/apt.llvm.org.asc
cat <<EOF >/etc/apt/sources.list.d/llvm-dev.sources
Types: deb
URIs: http://apt.llvm.org/unstable/
# for clang-21
# Suites: llvm-toolchain-21
# for current clang
Suites: llvm-toolchain
Components: main
Signed-By: /etc/apt/trusted.gpg.d/apt.llvm.org.asc
EOF
test -d ~/.R || mkdir ~/.R
cat <<EOF >~/.R/Makevars
CLANGVER=-22
# CLANGLIB=-stdlib=libc++
CXX=clang++\$(CLANGVER) \$(CLANGLIB)
CXX11=clang++\$(CLANGVER) \$(CLANGLIB)
CXX14=clang++\$(CLANGVER) \$(CLANGLIB)
CXX17=clang++\$(CLANGVER) \$(CLANGLIB)
CXX20=clang++\$(CLANGVER) \$(CLANGLIB)
CC=clang\$(CLANGVER)
SHLIB_CXXLD=clang++\$(CLANGVER) \$(CLANGLIB)
EOF
apt update
apt install --yes clang-22
Once the script is run, one can test a package (or set of packages) against clang-22 and clang++-22. This may help R package developers. The script is also generic enough for other development communities, who can ignore (or comment out / delete) the bit about ~/.R/Makevars and deploy the compiler differently. Updating the softlink as apt-preferences does is one way, and done in many GitHub Actions recipes. As we only need wget here, a basic Debian container should work, possibly with the addition of wget. For R users, r-base hits a decent sweet spot.
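A hypothetical end-to-end use (script and package names are placeholders): save the script, run it as root inside a container, then check a package with the new compiler:
# start a throwaway r-base container with the current directory mounted
docker run --rm -ti -v "$PWD":/work -w /work r-base bash
./add-llvm-snapshot.sh            # the script above, saved and made executable
R CMD build mypackage
R CMD check mypackage_*.tar.gz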
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
03 Oct 2025 10:09pm GMT
Jonathan Dowland: Tron: Ares (soundtrack)
There's a new Nine Inch Nails album! That doesn't happen very often. There's a new Trent Reznor & Atticus Ross soundtrack! That happens all the time! For the first time, they're the same thing.
The new one, Tron: Ares, is very deliberately presented as a Nine Inch Nails album, and not a TR&AR soundtrack. But is it neither fish nor fowl? 24 tracks, four with lyrics. Singing is not unheard of on TR&AR soundtracks, but it's rare (A Minute to Breathe from the excellent Before the Flood is another). Instrumentals are not rare on NIN albums, either, but this ratio is very unusual, and has disappointed some fans who were hoping for a more traditional NIN album.
What does it mean to label something a NIN album anyway? For me, the lines are now further blurred. One thing for sure is it means a lot of media attention, and this release, as well as the film it's promoting, are all over the media at the moment. Posters, trailers, promotional tie-in items, Disney logos everywhere. The album is hitched to the Disney marketing and promotion machine. It's a bit weird seeing the NIN logo all over the place advertising the movie.
On to the music. I love TR&AR soundtracks, and some of my favourite NIN tracks are instrumentals. Despite that, three highlights for me are songs: As Alive As You Need Me To Be, I Know You Can Feel It and closer Shadow Over Me. The other stand-out is Building Better Worlds, a short instrumental and clear nod to Wendy Carlos.
My main complaint here applies to some of the more recent soundtracks as well: the tracks are too short. They're scored to scenes in the movie, which makes a lot of sense in that presentation, but less so for independent listening. It's not a problem that their earlier, lauded soundtracks suffered (The Social Network, Before the Flood, Bird Box Extended). Perhaps a future remix album will address that.
03 Oct 2025 10:01am GMT