05 Jul 2025
How I finally tracked my Debian uploads correctly
A long time ago, I became aware of UDD (Ultimate Debian Database), which gathers various Debian data into a single SQL database.
At that time, we were trying to do something simple: list the contributions (package uploads) of our local community, Debian Brasília. We ended up with a script that counted uploads to unstable and experimental.
I was never satisfied with the final result because some uploads were always missing. Here is an example:
debci (3.0) experimental; urgency=medium
...
[ Sergio de almeida cipriano Junior ]
* Fix Style/GlovalVars issue
* Rename blacklist to rejectlist
...
I made changes in debci 3.0, but the upload was done by someone else. This kind of contribution cannot be tracked by that script.
Then, a few years ago, I learned about Minechangelogs, which allows us to search through the changelogs of all Debian packages currently published.
Today, I decided to explore how this was done, since I couldn't find anything useful for that kind of query in UDD's tables.
That's when I came across ProjectB. It was my first time hearing about it. ProjectB is a database that stores all the metadata about the packages in the Debian archive, including the changelogs of those packages.
Now that I'm a Debian Developer, I have access to this database. If you also have access and want to try some queries, you can do this:
$ ssh <username>@mirror.ftp-master.debian.org -N -L 15434:danzi.debian.org:5435
$ psql postgresql://guest@localhost:15434/projectb?sslmode=allow
In the end, it finally solved my problem.
Using the code below, with UDD, I get 38 uploads:
import psycopg2

contributor = 'almeida cipriano'

try:
    connection = psycopg2.connect(
        user="udd-mirror",
        password="udd-mirror",
        host="udd-mirror.debian.net",
        port="5432",
        database="udd"
    )

    cursor = connection.cursor()

    query = f"SELECT source,version,date,distribution,signed_by_name \
FROM public.upload_history \
WHERE changed_by_name ILIKE '%{contributor}%' \
ORDER BY date;"

    cursor.execute(query)
    records = cursor.fetchall()

    print(f"I have {len(records)} uploads.")

    cursor.close()
    connection.close()
except (Exception, psycopg2.Error) as error:
    print("Error while fetching data from PostgreSQL", error)
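As a side note, interpolating the name into the SQL with an f-string works for a constant you control, but psycopg2 can also bind the pattern as a query parameter, which is the safer habit. A sketch against the same public UDD mirror (the `connect` parameter is only there so the function can be exercised without a live database):

```python
def count_uploads(contributor, connect=None):
    """Count uploads mentioning `contributor`, binding the ILIKE pattern
    as a parameter instead of interpolating it into the SQL string."""
    if connect is None:
        import psycopg2  # only needed when talking to the real database
        connect = psycopg2.connect
    connection = connect(
        user="udd-mirror", password="udd-mirror",
        host="udd-mirror.debian.net", port="5432", database="udd",
    )
    try:
        cursor = connection.cursor()
        # %s lets the driver quote the pattern; no string formatting in SQL.
        cursor.execute(
            "SELECT source, version, date, distribution, signed_by_name "
            "FROM public.upload_history "
            "WHERE changed_by_name ILIKE %s ORDER BY date;",
            (f"%{contributor}%",),
        )
        return len(cursor.fetchall())
    finally:
        connection.close()
```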
Using the code below, with ProjectB, I get 43 uploads (the correct amount):
import psycopg2

contributor = 'almeida cipriano'

try:
    # SSH tunnel is required to access the database:
    # ssh <username>@mirror.ftp-master.debian.org -N -L 15434:danzi.debian.org:5435
    connection = psycopg2.connect(
        user="guest",
        host="localhost",
        port="15434",
        database="projectb",
        sslmode="allow"
    )
    connection.set_client_encoding('UTF8')

    cursor = connection.cursor()

    query = f"SELECT c.source, c.version, c.changedby \
FROM changes c \
JOIN changelogs ch ON ch.id = c.changelog_id \
WHERE c.source != 'debian-keyring' \
AND (\
ch.changelog ILIKE '%{contributor}%' \
OR c.changedby ILIKE '%{contributor}%' \
)\
ORDER BY c.seen;"

    cursor.execute(query)
    records = cursor.fetchall()

    print(f"I have {len(records)} uploads.")

    cursor.close()
    connection.close()
except (Exception, psycopg2.Error) as error:
    print("Error while fetching data from PostgreSQL", error)
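To see exactly which uploads the UDD query misses, one can diff the two result sets on their common (source, version) columns. A small helper, assuming each row starts with source and version as in the queries above:

```python
def missing_from_udd(udd_rows, projectb_rows):
    """Return (source, version) pairs present in the ProjectB results
    but absent from the UDD results."""
    seen = {(source, version) for source, version, *rest in udd_rows}
    return sorted({(s, v) for s, v, *rest in projectb_rows} - seen)
```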
It feels good to finally solve this itch I've had for years.
05 Jul 2025 1:28pm GMT
Rumour has it that I might be a bit of a train nerd. At least I want to collect various nerdy data about my travels. Historically that data has lived in manual form in several places, but over the past year and a half I've been working on a toy project to collect most of that information into a custom tool.
That toy project uses various sources to get information about trains to fill up its database: for example, in Finland Fintraffic, the organization responsible for railway traffic management, publishes very comprehensive open data about almost everything that's moving on the Finnish railway network. Unfortunately, I cannot be on all of the trains. Thus I need to tell the system details about my journeys.
The obvious solution is to make a form that lets me save that data. Which I did, but I got very quickly bored of filling out that form, and as regular readers of this blog know, there is no reason to settle for a simple but boring solution when the alternative is to make something that is ridiculously overengineered.
Parsing data out of my train tickets
Finnish long-distance trains generally require train-specific seat reservations, which means VR (the train company) knows which trains I am on. We just need to find a way to extract that information in some machine-readable format. So my plan for the ridiculously overengineered solution was to parse the booking emails to get the details I need.
Now, VR ticket emails include the data I want in a couple of different formats: they're included as text in the HTML email body, they're in the embedded calendar invite, as text in the included PDF ticket, and encoded in the Aztec Code in the included PDF ticket. I chose to parse the last option with the hopes of building something that could be ported to parse other operators' tickets with relative ease.
Example Aztec code
After a bit of digging (thank you to the KDE Itinerary people for documenting this!) I stumbled upon an European Union Agency for Railways PDF titled ELECTRONIC SEAT/BERTH RESERVATION AND ELECTRONIC PRODUCTION OF TRANSPORT DOCUMENTS - TRANSPORT DOCUMENTS (RCT2 STANDARD) which, in its Appendix C.1, describes how the information is encoded in the code. (As a side note, various sources call these codes SSB version 1 codes, although that term isn't used in this specification. So maybe there are more specifications about the format that I haven't discovered yet!)
I then wrote a parser in Go for the binary data embedded in these codes. So far it works, although I wouldn't be surprised if there are some edge cases that it doesn't handle. In particular, the spec specifies a custom lookup table for converting between text and binary data, and that only has support for characters 0-9 and A-Z. But Finnish railway station codes can also use Ä and Ö... maybe I need to buy a ticket to a station with one of those.
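To give an idea of what such a lookup-table decode looks like, here is a sketch in Python (the actual parser is Go) that reads fixed-width 6-bit values out of the bit stream. The alphabet ordering here (0-9 then A-Z) is an assumption for illustration and may not match the spec's actual table.

```python
# Assumed 6-bit alphabet; the real table is defined in Appendix C.1 of
# the ERA spec and may use a different ordering.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def decode_text(data: bytes, bit_offset: int, n_chars: int) -> str:
    """Read n_chars 6-bit values starting at bit_offset (MSB first)
    and map each through the lookup table."""
    chars = []
    for i in range(n_chars):
        value = 0
        for bit in range(6):
            pos = bit_offset + i * 6 + bit
            byte_index, shift = divmod(pos, 8)
            value = (value << 1) | ((data[byte_index] >> (7 - shift)) & 1)
        chars.append(ALPHABET[value])
    return "".join(chars)
```

Note that Ä and Ö have no slot in a 36-character table, which is exactly the limitation mentioned above.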
Extracting barcodes out of emails
A parser just for the binary format isn't enough here if the intended source input is the emails that VR sends upon making a booking. Time to write a single-purpose email server! In short, the logic in the server, again written in Go and with the help of go-smtp and go-message, is:
- Accept any mail with a reasonable body size
- Process through all body parts
- For all PDF parts, extract all images
- For all images, run them through ZXing
- For all decoded barcodes, try to parse them with my new ticket parsing library I mentioned earlier
- If any tickets are found, send the data from them and any metadata to the main backend, which will save them to a database
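The actual server is written in Go with go-smtp and go-message, but the per-message logic is easy to sketch with Python's standard email parser; `extract_images_from_pdf`, `decode_barcodes`, and `parse_ticket` are hypothetical stand-ins for the PDF extraction, ZXing, and ticket-parsing steps:

```python
from email import message_from_bytes

def process_message(raw, extract_images_from_pdf, decode_barcodes, parse_ticket):
    """Walk all MIME parts, pull images out of PDF attachments, decode
    any barcodes in them, and collect whatever parses as a ticket."""
    tickets = []
    for part in message_from_bytes(raw).walk():
        if part.get_content_type() != "application/pdf":
            continue
        for image in extract_images_from_pdf(part.get_payload(decode=True)):
            for barcode in decode_barcodes(image):
                ticket = parse_ticket(barcode)
                if ticket is not None:
                    tickets.append(ticket)
    return tickets
```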
The custom mail server exposes an LMTP interface over TCP for my internet-facing mail servers to forward to. I chose LMTP for this because it seemed like a better fit in theory than normal (E)SMTP. I've since discovered that curl doesn't support LMTP, which makes development much harder, and in practice there's no benefit to LMTP here as all mails are being sent to the backend in a single request regardless of the number of recipients, so maybe I'll migrate it to regular SMTP at some point.
Side quest time
The last missing part is automatically forwarding the ticket mails to the new service. I've routed a dedicated subdomain to the new service, and the backend is configured to allocate addresses like i2v44g2pygkcth64stjgyuqz@somedomain.example for each user. That's great if we wanted to manually forward mails to the service, but we can go one step above that. I created a dedicated email alias in my mail server config that routes both to my regular mailbox and the service address. That way I can update my VR account to use the alias and have mails automatically processed while still receiving backup copies of the tickets (and any other important mail that VR might send me).
Unfortunately that last part turned out to be easier said than done. Logging in on the website, I was greeted by text stating I needed to contact customer service by phone to change the address associated with my account. After a bit of digging, I noticed that the mobile app suggests filling out a feedback form in order to change the address. So I filled that out, and after a day or two I got a "confirm you want to change your email" mail. Success!
05 Jul 2025 12:00am GMT
04 Jul 2025
For at least 12 years, laptops have defaulted away from the traditional PC 101-key keyboard's function key behaviour: the F keys instead control things like volume, with a key labelled Fn to toggle back to the function keys. It's been a BIOS option to control whether traditional function keys or the volume etc controls are the default, and for at least 12 years I've configured all my laptops to have the traditional function keys as the default.
Recently I've been working in corporate IT and having exposure to many laptops with the default BIOS settings for those keys to change volume etc and no reasonable option for addressing it. This has made me reconsider the options for configuring these things.
Here's a page listing the standard uses of function keys [1]. Here is a summary of the relevant part of that page:
- The F1 key launches help, which doesn't seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments), and all the software vendors are investigating LLM options for help, which probably won't involve F1.
- F2 is for renaming files but doesn't get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
- F3 is for launching a search (which is CTRL-F in most programs).
- ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
- F5 is for reloading a page which is used a lot in web browsers.
- F6 moves the input focus to the URL field of a web browser.
- F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
- F11 is for full-screen mode in browsers which is sometimes handy.
The keys F1, F3, F4, F7, F9, F10, and F12 don't get much use for me and for the people I observe. The F2 and F8 keys aren't useful in most programs, F6 is only really used in web browsers - but the web browser counts as "most programs" nowadays.
Here's the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don't. Dell doesn't document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.
I have used the KDE settings on my Thinkpad to map the F1 to F3 keys to their Fn equivalents (F1 to mute audio, F2 for volume down, and F3 for volume up) to allow using them without holding down the Fn key, while other function keys such as F5 and F6 keep their usual GUI functionality. Now I could train myself to use F8 in situations where I usually use F2, at least when using a laptop.
The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that's not something I use much.
It's annoying that the laptop manufacturers forced this on me. Having an Fn key to provide extra functions without needing 101+ keys on a laptop-sized device is a reasonable design choice, but they could have done away with the PrintScreen key to make space for something else. Also, on Thinkpads the touchpad could obviously be removed to gain some extra space, as the TrackPoint does all that's needed in that regard.
04 Jul 2025 11:44am GMT
In the past few months, I have moved the authoritative name servers (NS) of two of my domains (sahilister.net and sahil.rocks) in house using PowerDNS. Subdomains of sahilister.net see roughly 320,000 hits/day across my IN and DE mirror nodes, so adding secondary name servers with good availability (in addition to my own servers) was one of my first priorities.
I explored the following options for my secondary NS, which also didn't cost me anything:
1984 Hosting
- 1984 Hosting Company FreeDNS.
- Hosting provider from Iceland.
- AXFR over IPv4 only.
- Following secondaries are offered:
- Not all of the offered NS support IPv6.
- Personally, I use ns1.1984.is, which is hosted by Netnod, operator of one of the 13 root name servers and of the .SE ccTLD.
- Same infrastructure serves 1984.hosting as well.
Hurricane Electric
- Hurricane Electric Free DNS Hosting.
- One has to delegate NS towards one or more of ns[1-5].he.net to verify ownership. This does lead to a minor lame-server period between the NS addition and the first zone transfer.
- Supports TSIG and DNSSEC pre-signed zones.
- Following secondaries are offered:
- The service went down when the he.net domain was put on hold. See the NANOG thread and Hurricane Electric's response there. Better not to depend on just one external provider.
- Same infrastructure serves he.net as well.
Afraid.org
- FreeDNS at Afraid.org.
- The Backup DNS option is in the left-side menu on their website.
- Following secondary offered:
Puck
- PUCK Free Secondary DNS service.
- A long-standing one-person show, though there seems to be manual approval of each account, which did take some time.
- Following secondary offered:
NS-Global
Asking friends
Two of my friends and fellow mirror hosts have their own authoritative name server setups: Shrirang (i.e. albony) and Luke. Shrirang gave me another POP in IN, and through Luke (who does have an insane number of in-house NS, see dig ns jing.rocks +short), I added a JP POP.
If we know each other, I would be glad to host a secondary NS for you (in IN and/or DE locations).
Some notes
- Adding a third-party secondary means trusting that the third party will serve your zone right.
- Hurricane Electric and 1984 Hosting provide multiple NS. One can use some or all of them. Ideally, you can get away with just your own NS plus the full set from either of these two. Play around with adding and removing secondaries to see what gives you the best results. Using everyone is overkill anyhow, unless you have specific reasons for it.
- Moving NS in-house isn't that hard, though be prepared to get it wrong a few times (and some more). I have already faced partial outages because:
- Recursive resolvers (RRs) in the wild behave in weird ways and cache the wrong NS response for longer than the TTL.
- NS expiry took longer than expected. 2 out of 3 of Netim's NS (my domain registrar) had stopped serving my domain, while RRs in the wild hadn't yet picked up my new in-house NS. I couldn't really do anything about it, though.
- The dot at the end is pretty important.
- With HE.net, I forgot to delegate my domain on their panel and just added them to my NS set, thinking I had already done so (which I had, but for another domain), leading to a lame server situation.
- In terms of serving traffic, there's no distinction between primary and secondary NS. RRs don't really care which server they send the query to, so one can have a hidden primary too.
- I initially thought of adding periodic RIPE Atlas measurements from the global set, but decided against it, as I already host a termux mirror, which brings in thousands of queries from around the world, so a diverse set of RRs query my domain already.
- In most cases, query resolution time will increase with out-of-zone NS servers (which most external secondaries would be): 1 query vs. 2 queries. Pay close attention to the ADDITIONAL SECTION in Shrirang's case followed by mine:
$ dig ns albony.in
; <<>> DiG 9.18.36 <<>> ns albony.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60525
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 9
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;albony.in. IN NS
;; ANSWER SECTION:
albony.in. 1049 IN NS ns3.albony.in.
albony.in. 1049 IN NS ns4.albony.in.
albony.in. 1049 IN NS ns2.albony.in.
albony.in. 1049 IN NS ns1.albony.in.
;; ADDITIONAL SECTION:
ns3.albony.in. 1049 IN AAAA 2a14:3f87:f002:7::a
ns1.albony.in. 1049 IN A 82.180.145.196
ns2.albony.in. 1049 IN AAAA 2403:44c0:1:4::2
ns4.albony.in. 1049 IN A 45.64.190.62
ns2.albony.in. 1049 IN A 103.77.111.150
ns1.albony.in. 1049 IN AAAA 2400:d321:2191:8363::1
ns3.albony.in. 1049 IN A 45.90.187.14
ns4.albony.in. 1049 IN AAAA 2402:c4c0:1:10::2
;; Query time: 29 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:01 IST 2025
;; MSG SIZE rcvd: 286
vs mine
$ dig ns sahil.rocks
; <<>> DiG 9.18.36 <<>> ns sahil.rocks
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64497
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;sahil.rocks. IN NS
;; ANSWER SECTION:
sahil.rocks. 6385 IN NS ns5.he.net.
sahil.rocks. 6385 IN NS puck.nether.net.
sahil.rocks. 6385 IN NS colin.sahilister.net.
sahil.rocks. 6385 IN NS marvin.sahilister.net.
sahil.rocks. 6385 IN NS ns2.afraid.org.
sahil.rocks. 6385 IN NS ns4.he.net.
sahil.rocks. 6385 IN NS ns2.albony.in.
sahil.rocks. 6385 IN NS ns3.jing.rocks.
sahil.rocks. 6385 IN NS ns0.1984.is.
sahil.rocks. 6385 IN NS ns1.1984.is.
sahil.rocks. 6385 IN NS ns-global.kjsl.com.
;; Query time: 24 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:20 IST 2025
;; MSG SIZE rcvd: 313
- Theoretically speaking, a small increase/decrease in resolution time would occur based on the chosen TLD and the popularity of the TLD in the query originator's area (already cached vs. fresh recursion).
- One can get away with having only 3 NS (or be like Google and have 4 anycast NS or like Amazon and have 8 or like Verisign and make it 13 :P).
- Nowhere is it written that your NS needs to be called dns* or ns1, ns2, etc. Get creative with naming your NS; be deceptive with the naming :D.
- A good understanding of RR behavior can help engineer a good authoritative NS system.
Further reading
04 Jul 2025 2:36am GMT

And this is the time when one realizes that she only has one white camisole left. And it's summer, so I'm wearing a lot of white shirts, and I always wear a white camisole under a white shirt (unless I'm wearing a full chemise).
Not a problem, I have a good pattern for a well fitting camisole that I've done multiple times, I don't even need to take my measurements and draft things, I can get some white jersey from the stash and quickly make a few.
From the stash. Where I have a roll of white jersey and one of off-white jersey. It's in the inventory. With the "position" field set to a place that no longer exists. uooops.
But I have some leftover lightweight (woven) linen fabric. Surely if I cut the pattern as is with 2 cm of allowance and then sew it with just 1 cm of allowance it will work even in a woven fabric, right?
Wrong.
I mean, it would probably have fit, but it was too tight to squeeze into, and would have required adding maybe a button closure to the front. Feasible, but not something I wanted.
But that's nothing that can't be solved with the Power of Insertion Lace, right?
One dig through the Lace Stash and some frantic zig-zag sewing later, I had a tube wide enough for me to squiggle in, with lace on the sides not because it was the easiest place for me to put it, but because it was the right place for it to preserve my modesty, of course.
Encouraged by this, I added a bit of lace to the front, for the look of it, and used some more insertion lace for the straps, instead of making them out of fabric.
And, it looks like it can work. I plan to wear it tonight, so that I can find out whether there is something that chafes or anything, but from a quick test it feels reasonable.

At bust level it's now a bit too wide, and it gapes a bit under the arms, but I don't think that it's going to cause significant problems, and (other than everybody on the internet) nobody is going to see it, so it's not a big deal.
I still have some linen, but I don't think I'm going to make another one with the same pattern: maybe I'll try to do something with a front opening, but I'll see later on, also after I've been looking for the missing jersey in a few more potential places.
As for now, the number of white camisoles I have has doubled, and this is progress enough for today.
04 Jul 2025 12:00am GMT
03 Jul 2025
For some time now, debputy has been available in the archive. It is a declarative build system for Debian packages, but it also includes a Language Server (LS) component. An LS is a binary that can hook into any client (editor) supporting the LSP (Language Server Protocol) and deliver syntax highlighting, completions, warnings and …
03 Jul 2025 10:00pm GMT
There are many negative articles about "AI" (which is not actual Artificial Intelligence, also known as "AGI"). I think they are mostly overblown and often ridiculous.
Resource Usage
Complaints about resource usage are common: training Llama 3.1 could apparently produce as much pollution as "10,000 round trips by car between Los Angeles and New York City". That's not great, but when you compare it to the actual number of people doing such drives in the US, and the number of people taking commercial flights on that route, it doesn't seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? Why not just tax polluters for the damage they do and let the market sort it out? People in the US make a big deal about not being communist, so why not have a capitalist solution: make it more expensive to do undesirable things and let the market sort it out?
ML systems are a less bad use of compute resources than Bitcoin, at least ML systems give some useful results while Bitcoin has nothing good going for it.
The Dot-Com Comparison
People often complain about the apparent impossibility of "AI" companies doing what investors think they will do. But this isn't anything new, that all happened before with the "dot com boom". I'm not the first person to make this comparison, The Daily WTF (a high quality site about IT mistakes) has an interesting article making this comparison [1]. But my conclusions are quite different.
The result of that was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then bought up their assets and made profitable companies. The cheap Internet we now have was built on the hardware from bankrupt companies which was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault) and one of them continued operations in the identical manner after having the stock price go to zero (I didn't get to witness what happened with the other one). As far as I'm aware random Dutch citizens and residents didn't suffer from this and employees just got jobs elsewhere.
There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.
NVidia isn't ever going to have the future sales that would justify a market capitalisation of almost 4 trillion US dollars. This market cap can support paying for new research and purchasing rights to patented technology, in a similar way to how Google's high stock price supported buying YouTube, DoubleClick, and Motorola Mobility, which are keys to Google's profits now.
The Real Upsides of ML
Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that's a huge business expense).
There are many applications of ML in medical research such as recognising cancer cells in tissue samples.
There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers - technology that was apparently repurposed for recognising cancer cells.
The ability to recognise objects in photos is useful. It can be used for people who want to learn about random objects they see and could be used for helping young children learn about their environment. It also has some potential for assistance for visually impaired people, it wouldn't be good for safety critical systems (don't cross a road because a ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI pin had some real potential to do good things but there wasn't a suitable business model [2], I think that someone will develop similar technology in a useful way eventually.
Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.
ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won't necessarily allow them to solve problems that they couldn't solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.
Jobs and Politics
Noema Magazine has an insightful article about how "AI" can allow different models of work which can enlarge the middle class [3].
I don't think it's reasonable to expect ML systems to make as much impact on society as the industrial revolution, and the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn't mean everything will be fine but it is something that can seem OK after the changes have happened. I'm not saying "apart from the death and destruction everything will be good", the death and destruction are optional. Improvements in manufacturing and farming didn't have to involve poverty and death for many people, improvements to agriculture didn't have to involve overcrowding and death from disease. This was an issue of political decisions that were made.
The Real Problems of ML
Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven't been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren't going to have revolutions.
There are documented cases of suicide being inspired by Chat GPT systems [4]. There have been people inspired towards murder by ChatGPT systems but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints about how people may use it. It's interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist, maybe ChatGPT systems could be used to alleviate mental health problems.
The cases of LLM systems being used for cheating on assignments etc aren't a real issue. People have been cheating on assignments since organised education was invented.
There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn't going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to test for that: for example, a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc.) and seeing if it changes the answer. If it turns out that the ML system is biased on names, then the input data could have names removed. If it turns out to be biased about address, then weights could be put in to oppose that.
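The one-factor-at-a-time check is simple to sketch; `approve` below is a hypothetical stand-in for whatever scoring model a bank runs, and the deliberately biased toy model exists only to show the test catching a name-based bias:

```python
def perturbation_test(approve, applicant, factor, alternatives):
    """Re-run the model with one factor changed at a time; any value that
    flips the decision flags that factor as influencing the outcome."""
    baseline = approve(applicant)
    return [alt for alt in alternatives
            if approve(dict(applicant, **{factor: alt})) != baseline]

# Deliberately biased toy model: the decision depends on the name.
def biased_approve(a):
    return a["income"] > 50000 and not a["name"].startswith("X")

applicant = {"name": "Alice", "income": 60000}
print(perturbation_test(biased_approve, applicant, "name", ["Bob", "Xavier"]))
# → ['Xavier']: changing the name flipped the decision, so the model is
# biased on names and the input data should have names removed.
```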
For a long time there has been excessive trust in computers. Computers aren't magic they just do maths really fast and implement choices based on the work of programmers - who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in a ML system where no-one really knows why it makes the decisions it makes.
Self driving cars kill people, this is the truth that Tesla stock holders don't want people to know.
Companies that try to automate everything with "AI" are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job is going to be a large portion of an actual intelligent computer which if it is achieved will raise an entirely different set of problems.
I've previously blogged about ML Security [5]. I don't think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.
How Will It Go?
Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won't do so well. But their assets can be used by new companies when sold at less than 10% of the purchase price.
Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into "AI" then that could be a win for humanity.
Companies that bet their entire business on AI even when it's not necessarily their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self driving cars.
03 Jul 2025 10:21am GMT
Disable sleep on lid close
I am using an old laptop in my homelab, but I want to do everything from my personal computer, with ssh. The default behavior in Debian is to suspend when the laptop lid is closed, but it's easy to change that, just edit
/etc/systemd/logind.conf
and change the line
#HandleLidSwitch=suspend
to
HandleLidSwitch=ignore
then
$ sudo systemctl restart systemd-logind
That's it.
03 Jul 2025 1:49am GMT
02 Jul 2025


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1241 other packages on CRAN, downloaded 40.4 million times (per the partial logs from the cloud mirrors of CRAN); the CSDA paper (preprint / vignette) by Conrad and myself has been cited 634 times according to Google Scholar.
Conrad released a minor version 14.6.0 yesterday which offers new accessors for non-finite values. And despite being in Beautiful British Columbia on vacation, I had wrapped up two rounds of reverse dependency checks preparing his 14.6.0 release, and shipped this to CRAN this morning where it passed with flying colours and no human intervention, even with over 1200 reverse dependencies. The changes since the last CRAN release are summarised below.
Changes in RcppArmadillo version 14.6.0-1 (2025-07-02)
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
02 Jul 2025 9:21pm GMT


With a friendly Canadian hand wave from vacation in Beautiful British Columbia, and speaking on behalf of the Rcpp Core Team, I am excited to share that the (regularly scheduled bi-annual) update to Rcpp just brought version 1.1.0 to CRAN. Debian builds have been prepared and uploaded, Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow as well.
The key highlight of this release is the switch to C++11 as minimum standard. R itself did so in release 4.0.0 more than half a decade ago; if someone is really tied to an older version of R and an equally old compiler then using an older Rcpp with it has to be acceptable. Our own tests (using continuous integration at GitHub) still go back all the way to R 3.5.* and work fine (with a new-enough compiler). In the previous release post, we commented that we had only one reverse dependency (falsely) come up in the tests by CRAN, this time there were none among the well over 3000 packages using Rcpp at CRAN. Which really is quite amazing, and possibly also a testament to our rigorous continued testing of our development and snapshot releases on the key branch.
This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As just mentioned, we do of course make interim snapshot 'dev' or 'rc' releases available. While we no longer regularly update the Rcpp drat repo, the r-universe page and repo now really fill this role admirably (and with many more builds besides just source). We continue to strongly encourage their use and testing; I run my systems with these versions which tend to work just as well, and are of course also fully tested against all reverse-dependencies.
Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3038 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 61.3% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 100.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2023 (JSS, 2011) and 380 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 695.
As mentioned, this release switches to C++11 as the minimum standard. The diffstat display in the CRANberries comparison to the previous release shows how several (generated) source files with C++98 boilerplate have now been removed; we also flattened a number of if/else sections we no longer need to cater to older compilers (see below for details). We also managed more accommodation for the demands of tighter use of the C API of R by removing DATAPTR and CLOENV use. A number of other changes are detailed below.
The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!
Changes in Rcpp release version 1.1.0 (2025-07-01)
- Changes in Rcpp API:
  - C++11 is now the required minimal C++ standard
  - The std::string_view type is now covered by wrap() (Lev Kandel in #1356 as discussed in #1357)
  - A last remaining DATAPTR use has been converted to DATAPTR_RO (Dirk in #1359)
  - Under R 4.5.0 or later, R_ClosureEnv is used instead of CLOENV (Dirk in #1361 fixing #1360)
  - Use of lsInternal switched to lsInternal3 (Dirk in #1362)
  - Removed compiler detection macro in a header cleanup setting C++11 as the minimum (Dirk in #1364 closing #1363)
  - Variadic templates are now used unconditionally given C++11 (Dirk in #1367 closing #1366)
  - Remove RCPP_USING_CXX11 as a #define as C++11 is now a given (Dirk in #1369)
  - Additional cleanup for __cplusplus checks (Iñaki in #1371 fixing #1370)
  - Unordered set construction no longer needs a macro for the pre-C++11 case (Iñaki in #1372)
  - Lambdas are supported in Rcpp Sugar functions (Iñaki in #1373)
  - The Date(time)Vector classes now have a default ctor (Dirk in #1385 closing #1384)
  - Fixed an issue where Rcpp::Language would duplicate its arguments (Kevin in #1388, fixing #1386)
- Changes in Rcpp Attributes:
  - The C++26 standard now has plugin support (Dirk in #1381 closing #1380)
- Changes in Rcpp Documentation:
  - Several typos were corrected in the NEWS file (Ben Bolker in #1354)
  - The Rcpp Libraries vignette mentions PACKAGE_types.h to declare types used in RcppExports.cpp (Dirk in #1355)
  - The vignettes bibliography file was updated to current package versions, and now uses doi references (Dirk in #1389)
- Changes in Rcpp Deployment:
  - Rcpp.package.skeleton() creates 'URL' and 'BugReports' if given a GitHub username (Dirk in #1358)
  - R 4.4.* has been added to the CI matrix (Dirk in #1376)
  - Tests involving NA propagation are skipped under linux-arm64 as they are under macos-arm (Dirk in #1379 closing #1378)
Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
02 Jul 2025 8:05pm GMT
Japan is now very hot. If you are coming to Banpaku, be prepared.
02 Jul 2025 1:12am GMT
01 Jul 2025

- Debian packages:
- debianutils:
- firmware-nonfree:
- Uploads:
- uploaded version 20250509-1 to experimental
- uploaded version 20250613-1 to experimental
- gnome-shell:
- initramfs-tools:
- Bugs:
- Merge requests:
- Uploads:
- uploaded version 0.148 to unstable
- uploaded version 0.148.1 to unstable
- uploaded version 0.148.2 to unstable
- uploaded version 0.148.3 to unstable
- uploaded version 0.149 to experimental
- ktls-utils:
- Uploads:
- uploaded version 1.1.0-1 to experimental
- linux:
- Bugs:
- Merge requests:
- Uploads:
- uploaded version 6.12.27-1~bpo12+1 to bookworm-backports
- uploaded version 6.12.30-1~bpo12+1 to bookworm-backports
- uploaded version 6.12.32-1~bpo12+1 to bookworm-backports
- (LTS) Updated the bullseye-security branch to upstream version 5.10.238, but did not yet upload it
- (LTS) linux-6.1:
- Prepared a backport of version 6.1.140-1 to bullseye-security, but did not yet upload it
- linux-base:
- Merge requests:
- Uploads:
- uploaded version 4.12~bpo12+1 to bookworm-backports
- Debian non-package bugs:
- Mailing lists:
01 Jul 2025 7:08pm GMT

Context
Testing continued
- Following a suggestion of gordon1, unload the mediatek module first. The following seems to work, either from the console or under sway:
echo devices > /sys/power/pm_test
echo reboot > /sys/power/disk
rmmod mt76x2u
echo disk > /sys/power/state
modprobe mt76x2u
- It even works via ssh (on wired ethernet) if you are a bit more patient for it to come back.
- replacing "reboot" with "shutdown" doesn't seem to affect test mode.
- replacing "devices" with "platform" (or "processors") leads to unhappiness.
- under sway, the screen goes blank, and it does not resume
- same on console
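The sequence above can be wrapped into a small helper. A sketch (the function name and --run flag are my own, not from the notes) that prints the steps by default and only writes to /sys/power when explicitly asked, since that needs root and actually triggers the test cycle:

```shell
pm_test_cycle() {
  # Dry run by default; pass --run (as root) to execute the steps for real.
  local run=false
  [ "$1" = "--run" ] && run=true
  local steps=(
    'echo devices > /sys/power/pm_test'  # test only the device suspend/resume phase
    'echo reboot > /sys/power/disk'      # hibernation mode: reboot after writing the image
    'rmmod mt76x2u'                      # unload the MediaTek Wi-Fi module first
    'echo disk > /sys/power/state'       # start the (test) hibernation cycle
    'modprobe mt76x2u'                   # reload the module afterwards
  )
  for s in "${steps[@]}"; do
    if $run; then sh -c "$s"; else echo "$s"; fi
  done
}
pm_test_cycle   # dry run: just prints the five steps
```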
01 Jul 2025 10:29am GMT

Another short status update of what happened on my side last month. Phosh 0.48.0 is out with nice improvements, phosh.mobi e.V. is alive, I helped a bit to get cellbroadcastd out, fixed osk bugs, and some more.
See below for details on the above and more:
- Fix crash triggered by our mpris player refactor (MR)
- Generate vapi file for libphosh (MR)
- Backport fixes for 0.47 (MR)
- Media players lockscreen plugin (MR), bugfix
- Fix lockscreen clock when am/pm is localized (MR)
- Another round of CI cleanups (MR)
- Proper life cycle for MetainfoCache in app-grid button tests (MR)
- Enable cell broadcast display by default (MR)
- Release 0.48~rc1, 0.48.0
- Unify output config updates and support adaptive sync (MR)
- Avoid crash on shutdown (MR)
- Avoid use after free in gtk-shell (MR)
- Simplify CI (MR)
- Release 0.48~rc1, 0.48.0
phosh-mobile-settings
stevia (formerly phosh-osk-stub)
- Release 0.48~rc1, 0.48.0
- Reject non-UTF-8 dictionaries for hunspell to avoid a broken completion bar (MR)
- Output tracking (MR) as prep for future work
- Handle non-UTF-8 dictionaries for hunspell for input and output (MR)
- Fix some leaks (MR)
- Handle default completer changes right away (MR)
phosh-osk-data
- Handle stevia rename (MR)
- Supply ru presage data
phosh-vala-plugins
pfs
- Fix initial empty state (MR)
- Use GNOME's mirror for fdo templates (MR)
xdg-desktop-portal-phosh
xdg-desktop-portal
- Fix categories for cell broadcasts (MR)
- Relax app-id requirement in app-chooser portal (MR)
phosh-debs
- Switch from osk-stub to stevia (MR)
meta-phosh
- Make installing from sid and experimental convenient (MR)
feedbackd
feedbackd-device-themes
gmobile
- Release 0.4.0
- Make gir and doc build warning free (MR)
GNOME clocks
- Use libfeedback instead of GTK's media API (MR). This way the alarm becomes more recognizable and users can tweak alarm sounds.
- Fix flatpak build and CI in our branch that carries the needed patches for mobile
Debian
- meta-phosh: Switch to 0.47 (MR)
- libmbim: Upload 1.33.1 to experimental
- libqmi: Upload 1.37.1 to experimental
- modemmanager: Upload 1.23.1 to experimental
- Update mobile-broadband-provider-info to 20250613 (MR) in experimental
- Upload phoc 0.48~rc1, 0.48.0 to experimental
- Upload gmobile 0.4.0 to experimental
- Upload phosh-mobile-settings 0.48~rc1, 0.48.0 to experimental
- Upload xdg-desktop-portal-phosh 0.48~rc1, 0.48.0 to experimental
- Prepare stevia 0.48~rc1 and upload 0.48.0 to experimental
- Upload feedbackd 0.8.3 to experimental
- Upload feedbackd-device-themes 0.8.4 to experimental
Mobian
- Add feedbackd and wakeup timer support (MR)
ModemManager
- Release 1.25.1
- Test and warning fixes (MR)
- Run asan in CI (MR) and fix more leaks
libmbim
libqmi
mobile-broadband-provider-info
Cellbroadcastd
- Better handle empty operator (MR)
- Use GApplication (MR)
- Fix library init (MR)
- Add desktop file (MR)
- Allow to send notifications for cell broadcast messages (MR)
- Build introspection data (MR)
- Only indicate Cell Broadcast support for MM >= 1.25 (MR)
- Implement duplication detection (MR)
- Reduce API surface (MR)
- Add symbols file (MR)
- Support vala (MR)
iio-sensor-proxy
- Add minimal gio dependency (MR)
twenty-twenty-hugo
gotosocial
- Explain STARTTLS behavior in docs (MR)
Reviews
This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!
- cellbroadcastd: Message store (MR)
- cellbroadcastd: Print severity (MR)
- cellbroadcastd: Packaging (MR)
- cellbroadcastd: Rename from cbd (MR)
- cellbroadcastd: Release 0.0.1 (MR)
- cellbroadcastd: Release 0.0.2 (MR)
- cellbroadcastd: Close file descriptors (MR)
- cellbroadcastd: Sort messages by timestamp (MR)
- meta-phosh: Ignore subprojects in format check (MR)
- p-m-s: pmOS tweaks ground work (MR)
- p-m-s: osk popover switch (MR)
- p-m-s: Add panel search (MR)
- p-m-s: Add cellbroadcastd message history (MR)
- phosh: Add search daemon and command line tool to query search results (MR)
- phosh: App-grid: Set max-width entries (MR)
- chatty: Keyboard navigation improvements (MR)
- phosh: LTR QuickSettings and fix LTR in screenshot tests (MR)
- iio-sensor-proxy: improve buffer sensor discovery: (MR)
- Calls: allow favorites to ring (MR)
- feedbackd: More haptic udev rules (MR)
- feedbackd: Simplify udev rules (MR)
- feedbackd: Support legacy LED naming scheme (MR)
- gmobile: FLX1 wakeup key support (MR)
- gmobile: FP6 support (MR)
Help Development
If you want to support my work see donations.
Comments?
Join the Fediverse thread
01 Jul 2025 8:47am GMT
Focus
This month I didn't have any particular focus. I just worked on issues in my info bubble.
Changes
Issues
Review
Sponsors
All work was done on a volunteer basis.
01 Jul 2025 1:55am GMT
30 Jun 2025

My Debian contributions this month were all sponsored by Freexian. This was a very light month; I did a few things that were easy or that seemed urgent for the upcoming trixie release, but otherwise most of my energy went into Debusine. I'll be giving a talk about that at DebConf in a couple of weeks; this is the first DebConf I'll have managed to make it to in over a decade, so I'm pretty excited.
You can also support my work directly via Liberapay or GitHub Sponsors.
PuTTY
After reading a bunch of recent discourse about X11 and Wayland, I decided to try switching my laptop (a Framework 13 AMD running Debian trixie with GNOME) over to Wayland. I don't remember why it was running X; I think I must have either inherited some configuration from my previous laptop (in which case it could have been due to anything up to ten years ago or so), or else I had some initial problem while setting up my new laptop and failed to make a note of it. Anyway, the switch was hardly noticeable, which was great.
One problem I did notice is that my preferred terminal emulator, pterm, crashed after the upgrade. I run a slightly-modified version from git to make some small terminal emulation changes that I really must either get upstream or work out how to live without one of these days, so it took me a while to notice that it only crashed when running from the packaged version, because the crash was in code that only runs when pterm has a set-id bit. I reported this upstream, they quickly fixed it, and I backported it to the Debian package.
groff
Upstream bug #67169 reported URLs being dropped from PDF output in some cases. I investigated the history both upstream and in Debian, identified the correct upstream patch to backport, and uploaded a fix.
libfido2
I upgraded libfido2 to 1.16.0 in experimental.
Python team
I upgraded pydantic-extra-types to a new upstream version, and fixed some resulting fallout in pendulum.
I updated python-typing-extensions in bookworm-backports, to help fix python3-tango: python3-pytango from bookworm-backports does not work (10.0.2-1~bpo12+1).
I upgraded twisted to a new upstream version in experimental.
I fixed or helped to fix a few release-critical bugs:
30 Jun 2025 11:30pm GMT