11 Dec 2025
Planet Debian
Dirk Eddelbuettel: #056: Running r-ci with R-devel

Welcome to post 56 in the R4 series.
The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the 'matrix' of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. When running CI and relying on r2u for the 'fast, easy, reliable: pick all three!' provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup, CI runs tend to be reliably faster.
This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:
strategy:
  matrix:
    include:
      - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
      - { name: r-devel, os: ubuntu-latest, container: rocker/drd }
      - { name: macos, os: macos-latest }
      - { name: ubuntu, os: ubuntu-latest }
runs-on: ${{ matrix.os }}
container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry), along with the (usually commented-out) optional macOS setup (third entry). The second entry brings in the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with alias RD) and uses it when present. And that is all there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs at the rcppint64 repo actions log. Another example is now also at Jeff's mmap repo.
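The detection inside run.sh might look roughly like the following sketch. This is a guess at the logic, not the actual r-ci script; the function name and the candidate order are assumptions.

```shell
#!/bin/sh
# Sketch of how a CI runner could prefer an R-devel binary when the
# container (such as rocker/drd) provides one, falling back to plain R.
# The function name and candidate list are illustrative assumptions.
pick_r_binary() {
    for candidate in RD Rdevel R; do
        if command -v "$candidate" >/dev/null 2>&1; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}

RBIN=$(pick_r_binary) || RBIN=R
echo "Using R binary: $RBIN"
```

With a scheme like this, the same workflow file works unchanged whether the job runs in a plain Ubuntu image or inside drd.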
It should be noted that this relies on R-devel being able to run packages built with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens, this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under 'normal' circumstances it is not needed.
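Should such an ABI break ever occur, an extra step in the workflow could handle the reinstall. The following is a hypothetical sketch, not part of r-ci: the step name, the matrix condition, and the use of the remotes package are all illustrative assumptions.

```yaml
# Hypothetical workflow step to reinstall package dependencies under
# R-devel after an ABI break; normally this step would not be needed.
- name: Reinstall dependencies under R-devel
  if: matrix.name == 'r-devel'
  run: RD -e 'remotes::install_deps(dependencies = TRUE)'
```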
Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
11 Dec 2025 6:29pm GMT
08 Dec 2025
Thorsten Alteholz: My Debian Activities in November 2025
Debian LTS/ELTS
This was my one-hundred-and-thirty-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian, and my eighty-eighth ELTS month. As the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities.
During my allocated time I uploaded or worked on:
- [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
- [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
- [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
- [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
- [libcupsfilters] upload to unstable to fix two CVEs
- [cups-filters] upload to unstable to fix three CVEs
- [cups] upload to unstable to fix two CVEs
- [rlottie] upload to unstable to finally fix three CVEs
- [rplay] upload to unstable to finally fix one CVE
- [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
- [#1121391] trixie-pu bug for cups-filters to fix three CVEs in Trixie.
- [#1121392] bookworm-pu bug for cups-filters to fix three CVEs in Bookworm.
- [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
- [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.
I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of Include statements in the ssh_config does not work. Rather annoying, but already fixed in the newest version, which only needs to find its way to my old VM.
Debian Printing
This month I uploaded a new upstream version or a bugfix version of:
- … lprng to unstable.
- … cpdb-backend-cups to unstable.
- … cpdb-libs to unstable.
- … ippsample to unstable.
- … cups-filters to unstable.
I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.
This work is generously funded by Freexian!
Debian Astro
This month I uploaded a new upstream version or a bugfix version of:
- … siril to unstable (sponsored upload).
- … supernovas to unstable (sponsored upload).
Debian IoT
This month I uploaded a new upstream version or a bugfix version of:
- … openzwave-controlpanel to unstable.
- … pywws to unstable.
Debian Mobcom
This month I uploaded a new upstream version or a bugfix version of:
- … osmo-tetra to unstable.
- … libgsm to unstable.
- … osmo-tetra to unstable.
misc
This month I uploaded a new upstream version or a bugfix version of:
- … cpptest to unstable.
- … npd6 to unstable.
- … ptunnel to unstable.
- … ptunnel-ng to unstable.
- … dateutils to unstable.
- … apcupsd to unstable.
- … puppet-modules-cirrax-gitolite to unstable.
- … visam to unstable.
- … apcupsd to unstable.
In my fight against outdated RFPs, I closed 30 of them in November.
I started with about 3500 open RFP bugs, and after working on this project for six months, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.
Though I view this as a successful project, I also have to admit that it is a bit boring to work on daily. Therefore I close this diary again and will add the closed RFP bugs to my bug logbook now. I will also try to close some of these bugs by actually uploading the software, probably one package per month.
FTP master
This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.
08 Dec 2025 3:20pm GMT
François Marier: Learning a new programming language with an LLM
I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.
Searching more efficiently
The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.
I was however skeptical from the beginning, since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").
Autocomplete is too distracting
A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.
I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).
Asking about idiomatic code
One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.
It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read/decode, it's probably not a good idea to do it.
Reviews
One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.
If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:
- Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
- Feed that prompt to multiple models. They each have different answers and will detect different problems.
- Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.
The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.
Similarly for security reviews:
- A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
- Some of it may highlight areas for improvement that you hadn't considered.
- Occasionally, they will point out real vulnerabilities.
But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.
An unexpected benefit
One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (it being a personal project written in my own time) than I might have otherwise.
Learning
In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.
So my experience this year tells me that LLMs can supplement traditional, time-tested learning techniques, but I don't believe they make them obsolete.
P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.
08 Dec 2025 12:15am GMT