09 May 2026
Planet Debian
Jelmer Vernooij: Remove-after Annotations for Debian Files
deb-scrub-obsolete is a tool in the debian-codemods suite that tries to identify and remove cruft automatically. It knows about dummy transitional packages, superseded alternatives, and similar patterns it can detect by querying the archive. But some workarounds are too project-specific for a generic tool to recognise on its own.
Developers can leave structured comments in their packaging files that tell deb-scrub-obsolete when a particular line or block can be removed.
The Debian Janitor regularly runs various codemods like deb-scrub-obsolete on all vcs-accessible Debian packages. This means that if you leave a "remove-after: trixie" annotation in your package, you will automatically get a pull request to remove the annotated code once trixie has been released, without needing to remember to do it yourself.
The Comment Format
The annotations take the form of specially-formatted comments. For shell files (and by extension most maintainer scripts), a line-level annotation looks like this:
install -m 755 compat-wrapper /usr/lib/foo/ # remove-after: trixie
When trixie has been released, deb-scrub-obsolete will remove that line entirely. The comment can appear anywhere on the line - before or after other comments - and additional explanatory text can follow:
blah # Trixie comes with blah built in # remove-after: trixie
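As a rough illustration of how a tool might spot such line-level annotations (the real deb-scrub-obsolete grammar is defined in its specification; this regex is my own sketch and deliberately simplified):

```python
import re

# Hypothetical matcher for the line-level annotation shown above.
# The real deb-scrub-obsolete parser is more thorough; this sketch only
# extracts the expression following "remove-after:".
LINE_RE = re.compile(r"#\s*remove-after:\s*(?P<expr>\S+)")

def removal_expression(line: str):
    """Return the remove-after expression found on a line, or None."""
    m = LINE_RE.search(line)
    return m.group("expr") if m else None

print(removal_expression("install -m 755 compat-wrapper /usr/lib/foo/ # remove-after: trixie"))  # trixie
print(removal_expression("echo no annotation here"))  # None
```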
For larger sections, block-level annotations bracket the code to remove:
# begin-remove-after: trixie
alternatives --add foo bar
alternatives --add foo bar1
# end-remove-after
These blocks can be nested, which is useful when one outer condition wraps several inner ones with finer-grained timing.
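Handling nesting means matching each end-remove-after with the most recent unmatched begin. Here is a stack-based sketch of that matching, assuming only the format shown above (this is not the actual deb-scrub-obsolete implementation):

```python
# Sketch of stack-based matching for nested begin/end-remove-after blocks.
# Assumes the comment format shown above; the real tool's parser may differ.
def blocks(lines):
    """Return (expr, start, end) line-index tuples, innermost blocks first."""
    stack, found = [], []
    for i, line in enumerate(lines):
        text = line.strip()
        if text.startswith("# begin-remove-after:"):
            stack.append((text.split(":", 1)[1].strip(), i))
        elif text.startswith("# end-remove-after"):
            if not stack:
                raise ValueError(f"line {i}: end without matching begin")
            expr, start = stack.pop()
            found.append((expr, start, i))
    if stack:
        raise ValueError("unclosed begin-remove-after block")
    return found

src = [
    "# begin-remove-after: forky",
    "alternatives --add foo bar",
    "# begin-remove-after: trixie",
    "alternatives --add foo bar1",
    "# end-remove-after",
    "# end-remove-after",
]
print(blocks(src))  # [('trixie', 2, 4), ('forky', 0, 5)]
```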
Expressions
The initial set of supported expressions is deliberately small. The main one is a Debian release name: remove-after: trixie means "once trixie has been released". The condition is checked against distro-info (https://manpages.debian.org/trixie/distro-info/distro-info.1.en.html), the same data source that other Debian tooling uses to track release status.
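The check itself amounts to looking up a series' release date and comparing it with today. Here is a minimal sketch against rows modelled on distro-info-data's debian.csv (the dates here are illustrative, and real tooling should use the distro-info package rather than hand-parsing the file):

```python
import csv
import io
from datetime import date

# Simplified rows modelled on distro-info-data's debian.csv (the real file
# lives at /usr/share/distro-info/debian.csv). Dates are illustrative.
SAMPLE = """\
version,codename,series,created,release,eol
12,Bookworm,bookworm,2021-08-14,2023-06-10,2028-06-10
13,Trixie,trixie,2023-06-10,2025-08-09,2030-06-01
14,Forky,forky,2025-08-09,,
"""

def is_released(series: str, today: date) -> bool:
    """True once the named series has a release date in the past."""
    for row in csv.DictReader(io.StringIO(SAMPLE)):
        if row["series"] == series:
            rel = row["release"]
            return bool(rel) and date.fromisoformat(rel) <= today
    raise KeyError(f"unknown series: {series}")

print(is_released("trixie", date(2026, 5, 9)))  # True
print(is_released("forky", date(2026, 5, 9)))   # False
```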
The expression language is designed to be monotonic - conditions should only ever go from false to true, not back. A workaround that needs to be re-introduced after removal belongs in a new commit, not in an annotation. If deb-scrub-obsolete cannot parse an annotation it finds in a file, it leaves all annotations in that file untouched, to avoid a situation where related blocks are only partially removed.
Annotations can also carry a marker name - an arbitrary label with no spaces, commas, or the word "after" - which can then be passed to deb-scrub-obsolete on the command line. This makes it possible to trigger removal of a named set of annotations together, useful for coordinated transitions where several packages need to be cleaned up at the same time.
Future Extensions
The initial expression set is minimal; the design leaves room for richer conditions. Some candidates under consideration:
- Whether a particular suite has a new enough version of a package (removing a Build-Depends version constraint once it is satisfied everywhere)
- Whether a package has been removed from the archive
- Whether all currently-supported releases contain a new enough version
- Whether a Debian transition has completed
Compound expressions using "and" / "or" are also on the list, for cases where removal depends on multiple conditions being true simultaneously.
Status
The annotation format is specified but not yet implemented in deb-scrub-obsolete - it is planned for a future release. If you maintain Debian packages and have opinions on the annotation format or the expression language, feedback is welcome. The specification lives in scrub-obsolete/doc/scrub-annotations.md in the lintian-brush repository. Many thanks to Helmut Grohne for the initial suggestion and feedback on the design.
09 May 2026 6:45pm GMT
Russell Coker: Packaging Amazfish for Debian
I have done some packaging work on Amazfish (the smart-watch software that works with the PineTime among others) for Debian. Here is my Git repository for libnemodbus (a dependency for Amazfish that isn't in Debian) [1]. Here is my Git repository for Amazfish itself [2].
These packages currently use Qt 5, which is a good reason not to upload them now as the transition to Qt 6 is in progress. Patching them to work with Qt 6 (as the libnemodbus upstream is apparently not migrating to Qt 6 yet) shouldn't be that difficult, but it is something that needs some care and communication to get right.
Running this package on my laptop with my PineTime (which worked very reliably when run by GadgetBridge on Android) wasn't reliable and the PineTime would disconnect and refuse to connect again. Doing it on the Furilabs FLX1s gave a similar result. If Amazfish was the only Bluetooth program having problems on my laptop and on my FLX1s then I'd blame it, but both those systems have some other Bluetooth issues.
Running this on my laptop, Amazfish would send its own test notifications to my watch but system notifications (from notify-send among others) wouldn't get sent. Running this on my FLX1s I got ONE notification from my network monitoring system sent to my watch before my phone and watch stopped talking to each other.
To make things even more difficult for me the harbour-amazfish-ui program doesn't work correctly with the libraries installed on my FLX1s and doesn't display the content of many screens but it works correctly when running in a container environment with stock Debian/Testing.
Below is the script that I'm currently using to launch apps in a Debian/Testing container on my FLX1s. The comment about unshare-user doesn't apply to this version of the script but I left it in to avoid the potential for future confusion. The Furilabs people diverted the bwrap binary and have a wrapper that removes a set of parameters that they think will cause problems.
#!/bin/bash
set -e
BUILDBASE=/chroot/testing
# bwrap: Can't mount proc on /newroot/proc: Device or resource busy
# get the above with --unshare-user and --unshare-pid
exec bwrap.real --bind /tmp /tmp --bind /run /run --bind $HOME $HOME \
  --ro-bind $BUILDBASE/etc /etc --ro-bind $BUILDBASE/usr /usr \
  --ro-bind $BUILDBASE/var/lib /var/lib \
  --symlink usr/bin /bin --symlink usr/sbin /sbin --symlink usr/lib /lib \
  --proc /proc --dev-bind /dev /dev --die-with-parent --new-session "$@"
Due to the range of problems I'm having I think it would be best to pass this package on to someone else who has a different test setup. It could be that further testing will reveal that my issues are related to bugs in Amazfish, but I can't prove it either way at this time. Maybe a smart watch other than a PineTime would work more reliably, but it seems most likely that my laptop and phone are to blame. I can't make more progress on this now.
09 May 2026 12:04pm GMT
Russell Coker: Bad Criticism of LLMs (not AI)
Discussion of "AI" systems seems to be dominated by fears of uncommon and unlikely threats. I think that we should be focusing more on real issues with LLMs and with society in general and put the most effort towards the biggest problems.
It's Not AI
True Artificial Intelligence [1] (IE a computer that has the mental capacity of a household pet) is something that I think can be developed, but it hasn't been developed and we don't have good plans for developing it. We seem to be a lot further away from achieving that goal than we were from landing on the moon in 1962 when JFK gave his historic speech.
What we have is a variety of pattern recognition systems that can predict what fits into a pattern. The most well known type of Machine Learning (ML) is the Large Language Model (LLM), which means ChatGPT and similar systems that predict which text would be likely to come next and can make an essay from it. They can give interesting and useful output, but there is no thought behind it; it's just a better form of Eliza (the famous program from the mid-1960s that simulates conversation by pattern matching) [2]. By analysing billions of documents, storing the data in a condensed mathematical way, and then using computation to extract from that record, LLMs can produce output that is unfortunately considered by some people to be good enough to include in legal documents submitted to courts, university assignments, and many other documents. But they do so without even having the thinking ability of a mouse.
To call current systems "AIs" without any significant qualifiers when criticising them is to concede the debate about the worth of such things.
If we develop AIs that can actually think we will have to deal with the issues in the SciFi horror short story Lena by qntm [3].
The Bad Arguments
Here is a list of some of the most unreasonable arguments I've seen against "AI" which distract attention from real problems both related to "AI" and other problems in society.
Suicide and Homicide
Wikipedia has a page listing Deaths Linked to Chatbots [4] which right now has 16 entries from 2023 to Feb 2026. They are all tragedies and as a society we should try to prevent such things. But what I would like to see from the media is some analysis of overall trends; yes, it gets people's attention when someone dies in an unusual way, but we need attention paid to the more numerous deaths which are preventable. It has become a standard practice to give information on Lifeline in media referencing suicide; it would be good if they also developed a practice of mentioning the relative incidence of a problem when publishing an article about it.
One of the many factors that cause more suicides than chatbots is school; Scientific American has an informative article from 2022 about the correlation between child suicide and school [5]. It is based on US statistics and shows that the lowest suicide rate is in July (a no-school month in the US), which has a rate of 2.3 per 100,000 person years. So if kids had a quality of life equivalent to July all year round then there would be 2.3 suicides per 100,000 kids every year, while if they had a quality of life equivalent to a Monday in January or November it would be 3.9 suicides per 100,000 kids every year. The article states "Any time I present these data to teachers, parents, principals or school administrators, they are shocked. This should be common knowledge." It is common knowledge to anyone who takes any notice of what happens in schools, but paying attention to serious problems is unpleasant; it's more fun to pretend that school is good for everyone. No parent wants to think that they sent their child to a place that was horrible, and no teacher wants to think that they are part of a system that harms kids.
The US CDC has an informative article about youth suicide [6] which documents it as the third-largest cause of death in the 14-18 age range for 2021. This article was published in 2024 and based on statistics from 2023 and earlier. It notes significant differences in suicides, attempts, and "persistent feelings of sadness or hopelessness", with girls at more than twice the rate of boys and "LGBQ+" kids at more than twice the rate of "heterosexual" students. It seems obvious that misogyny and homophobia are correlated with suicide and that's something that could and should be addressed in schools. My state has a Safe Schools program [7] to try and alleviate the problems related to homophobia, but I expect that things are getting worse in the US in that regard. 39.7% of kids in US high schools had "persistent feelings of sadness or hopelessness" before LLMs became popular; school could and should be a happy time for the vast majority of kids, but instead almost half of the kids don't enjoy it, and a majority of girls and "LGBQ+" kids don't. Having no mention of trans kids is a significant omission from that article; based on everything I've heard from trans people I expect that their statistics would be even worse.
One could argue that the small number of deaths inspired by use or misuse of LLMs is an indication of a larger number of people suffering in ways that don't result in death and don't get noticed. But I don't think that can compare to the fact that the majority of girls and "LGBQ+" kids have "persistent feelings of sadness or hopelessness" in the current school system.
Regarding homicide, the Australian Institute of Criminology has an article showing that in the 2003-2004 time period 49% of homicides of women were classified as arising from a "domestic argument" [8]; that's something that could and should be addressed. That article counted 308 homicide victims in that time period, which is larger than the world-wide death toll from LLMs but also less than 1/3 the death toll from car accidents in Australia. Australia has less than 0.4% of the world population and a fairly low homicide rate, yet its homicide count vastly outnumbers all homicides worldwide related to LLMs.
I think it's great to address any cause of suicide or homicide, but devoting government resources and legislation towards very uncommon causes instead of things that happen every day is not a good strategy. It would be fine to address all factors leading to suicide, but problems with the school system have been a major factor for decades with little effort applied to fix it.
Fraud and Other Crime
There is evidence of criminals using LLMs to help prepare for crimes; the ability to generate large amounts of text quickly can be used for fraud and extortion. This is going to be a serious problem and we need structural changes to society to deal with it. There is an ongoing issue of scammers convincing older people that their child or other young relative is in trouble and a large amount of cash is required to address it. This sort of scam, as well as the more well known "Nigerian" scams, will probably become more common as the cost of running them decreases. This may be more of a problem for people in developing countries: currently a common scam business model is to have people in regions where wages are low (such as Pakistan, in the case of one scammer I spoke to) scamming people in relatively wealthy countries like Australia, so an attack with a low probability of success is financially viable. Cheaper attacks will make less affluent victims financially viable to the scammers.
While writing this post I received a financial scam phone call trying to get me to invest in SpaceX that was run by an "AI" chat system, I expect to receive more of them and this is something that needs to be dealt with via both technical measures and legislation.
Do we have to accept less freedom and less anonymity in finances as a cost of reducing financial crime? Greater restrictions on the use of cash would make some crimes more difficult or less profitable for criminals. As a society I think we need to have a discussion about a balance between financial freedom and freedom from criminal exploitation, failing to have such a discussion is likely to lead to policies which don't work well.
Also one thing that ML systems are good at is recognising patterns in data. Banks could scan all their transactions and look for patterns that correlate with fraud. They currently do this badly and do things like locking credit cards when someone goes to another country and spends money. They could do a better job of that and involve the police in cases of obvious fraud even when the customer doesn't realise that they are a victim.
This isn't a reason to criticise "AIs", it's a reason to plan defensive technology that matches the capabilities of attackers.
As an aside I used to work for a company that was developing "AI" software to scan bank phone calls and allow banks to recognise employees who acted illegally. Unfortunately the Royal Commission into banking misconduct [9] didn't impose any penalties that gave the banks a financial reason to avoid criminal activity.
Unemployment and Inequality
There are many claims about AI systems making large numbers of jobs obsolete; some of them are outlandish, such as the claims that all white-collar jobs will be obsolete in the near future. There are some reasonable claims, like the ability to replace some mundane jobs.
Replacing jobs that suck with computers, robots, and other machinery is a good thing! Very few people wish that they were working on a farm without a tractor. In 1900 it's estimated that between 60% and 70% of the world labour force worked in agriculture and 40% of the US labour force did so. Now it's something like 27% globally and between 1% and 3% in developed countries. Automated factories are also a good thing, it's best to avoid boring and dangerous work.
The most plausible claims about job replacement from "AI" concern jobs that involve analysing and summarising documents. One example that comes to mind is the worst kind of journalism, where press releases from companies are massaged into the format of a feature article. I don't think anyone wants that sort of job and doing it with "AI" hopefully means no human has to sign their name to it.
For work like programming, few people will be directly replaced by "AI", but if people can do their work more efficiently while using it then fewer people are required. I don't think that any programmer likes the part of their job where they have to skim read long documents looking for a clue about how to solve a problem with a library or protocol. An LLM processing the document and finding the potentially useful things will take away the drudgery from the work and allow greater productivity.
One existing trend in reducing headcount has been making people work longer hours. If you force all employees to work 60 hour weeks then that can theoretically allow hiring fewer people than having 40 hour weeks. For some work that applies, but for skilled work it mostly doesn't, as productivity and work quality on average drop when people work more than 40 hours in a week.
Another trend for exploiting people is having a low minimum wage and making accommodation expensive so that many people need to work two jobs. What we need is legislation to restore the situation in the 70s where a single full time job was sufficient to provide for a family. The low minimum wage and high expenses for many things is a problem that's been slowly developing over the course of decades while being mostly ignored by journalists. If they could concentrate on the real issues that are hurting workers today they could incite political action to fix these problems.
Academic Cheating
There is no shortage of ways of cheating in school and university. There are people who are paid to write essays, mobile phones are used for cheating in exams, etc. Getting an "AI" to write essays makes it easier to cheat for the essay writing part but does so with lower quality and in a less stealthy way.
What's the worst case scenario? That we have to change to oral exams for all university subjects?
In the US the average annual price for tuition at a university is apparently $25,000. If each student had individually supervised assessment for their exams at a cost of $100 per hour, it would make the degree cost about 4% more. The cost of university in the US is unreasonably high and that's a problem that needs to be fixed, but a hypothetical increase of 4% isn't going to be a major part of it.
Weak Arguments Against "AI"
Computer Security Attacks
There have been many claims made that "AI" will break the security of all systems and cause the type of disruption that was previously predicted for year 2000. Bruce Schneier has written a good analysis of the issues including how "AI" can be used by both attackers and defenders [10], he doesn't have a strong conclusion on whether the net result will be good or bad but his article does make it clear that the result is not going to be a total disaster.
While I was working on this post I read another post by Bruce Schneier that was significantly more negative about this issue [11]. While I still don't think this will destroy civilisation I found his other post convincing enough to move computer security from the bad argument section to the weak argument section.
Spidering the Web to Death
There are issues with bots from "AI" companies doing a bad job of trying to download all the Internet's content and using a lot of resources. When it was just the major search engines and the Wayback Machine doing it, the load was small: a small number of organisations were very good at the way they did it, having evolved their practices over many years. Now we have a lot of idiots doing it badly and repeatedly hitting generated content.
This is really annoying but is something that we can deal with. Currently my blog and many other sites are hosted on a Hetzner server with an E3-1271 v3 CPU and 32G of RAM, and there are occasions where more than half the CPU power is being used to service web requests from such systems. Even on the "server bidding" market (renting servers previously used by other customers), Hetzner isn't offering systems that slow nowadays; the slowest they offer is about 20% faster than that. This is something that can be dealt with by spending a little more on hosting until the companies doing that go bankrupt.
I'm sure this is a serious problem for some people, but for most people it's not a big deal. Also hostile traffic on the Internet is something we have all had to deal with as a part of life since the mid to late 90s.
RAM Prices
The unreasonably high prices for RAM are annoying and hurt the development of useful computer projects. Big companies can afford it, even with current high prices and large quantities of RAM used for some servers it's still not significant. But it is a major issue for hobbyists and small projects. Things like setting up a dozen test VMs for FOSS development are now too expensive for many people who develop software in their spare time.
But this is a temporary thing, if AI companies were to keep buying RAM at high rates for a few years companies would just manufacture more of it to meet demand. In some situations capitalism can work.
Environmental Damage
There are many people claiming that power used by data centers for "AI" will lead to environmental damage, using power and water when there isn't enough.
The trend in computer hardware is to get smaller and faster. It hasn't been going as fast as it used to in many areas, but it hasn't stopped either and it's an exponential trend. There has been an increase in data centers (DCs) for "AI" use because the use has been increasing faster than the hardware gets smaller. Eventually usage will stop increasing faster than advances in hardware and software can match, and the size of DCs will decrease.
As the production of renewable energy increases, the environmental cost of energy hungry industries decreases. In a few years this won't be an issue anyone is bothered about.
False Claims About Danger as PR
Jamie McClelland makes an interesting claim that the AI companies are pushing dangers of "AI" as a method of PR [12]. That seems plausible and combined with the tendency of many journalists to just massage press releases from companies into articles could be the reason for a lot of the bad arguments against AI.
Good Arguments Against AI
Spam Everywhere
I've previously written about Communication and Hostile AIs [13]. I think that filling all communication channels with rubbish is a denial of service attack against society.
In the past communication took some effort; even the simplest email that was directly targeted at the recipient took some human effort and that reduced its frequency. I get a lot of spam saying something like "I see your web site doesn't rank in the top for Google searches" while my web site in fact ranks well and the actor named Russell Coker is ranking below me, so I know that such spam hasn't had the minimum of human involvement. Now a spammer who wanted to do a better job could get an LLM to write spam for every target, so the message was specifically aimed at them, would take much longer to be recognised by a human as spam, and would also avoid most anti-spam software.
Searching for businesses used to be easy: the phone book had listings for them, there was a real cost to being in the book, and humans actively tried to stop fraud. Creating fake web sites to get business isn't too difficult but it's also not trivial at the moment, and such fake sites won't look complete. Now with LLMs it's possible to create hundreds of sites that have content and look reasonable without human involvement. Instead of the small number of suicides and homicides inspired by "AI" chat systems we should probably be concerned about people who need psychological or medical advice being misled by bogus web sites created as part of fraud campaigns. Imagine people searching for mental health assistance finding web sites run by cults who oppose psychology as a profession. Imagine people searching for basic medical advice such as how to cook a healthy meal getting sucked into web sites that start sane and then lead people to Ivermectin as a universal medicine.
LLMs have the potential to take spam from quick and simple attacks to large scale targeted fraud aimed at people and organisations that don't have the resources to defend against it. There have been many reports of CEO impersonation fraud against major corporations aiming to steal hundreds of thousands of dollars and fraud against individuals who are persuaded to get amounts like $50,000 to help a relative who is allegedly in a difficult situation. But if every corner store experienced the same type of attack that CEOs experience and if every child had someone trying to steal the pocket money in the same way that relatively wealthy people are being targeted now it would really change things.
Deep Fakes
There is some overlap between filling all communications channels with rubbish (fake news etc) and deep fake. Making a fake photo of a politician or celebrity to lobby for legislative changes is a real issue but it's not what most people think of when the term "deep fake" is used.
Photo and video fakes targeting non-consenting people are a serious issue. It's not just fake porn (which is a major issue and will cause some suicides); there are many other possibilities. Fake videos showing behaviour that justifies sacking people from their jobs are going to become an issue, and for people in public facing positions even proof that the videos are fake won't necessarily help them.
Will we find ourselves in a situation where every politician gets deep-fake porn made of them and the only people who run for public office are ones who are cool with that? Will positions of leadership in the technology industry be restricted to people who aren't bothered by having the most depraved fake porn made of them?
The Justice System
We have seen a lot of evidence of law enforcement and the court system being based on bias and producing bad results. The Innocence Project attempts to correct that and its web site documents some of the things that have gone wrong [15]. Using "AI" systems to do some of the work of law enforcement, by training computers on the flawed results of current systems, can entrench bias and also make it harder to spot.
When determining whether someone should be considered a suspect or whether a prisoner should be eligible for parole, the number of factors that a human can use is limited. But a computer can take many more factors into account, so the issue of whether inappropriate factors are being used can be masked. Computers are also unable to genuinely explain the decisions they make, while being able to come up with plausible fake reasons.
In the past there have been racist policies in the US about banks not lending to people living in suburbs where most houses were owned by non-white people; these policies were documented and the documents have become part of the historical record showing racist policies. If an LLM decides not to lend money to people based on mathematical correlations it determined from historical banking practices, it could assign negative weights to factors such as non-English names and implement the racism in a large array of numbers with no proof.
The current cases of lawyers getting LLM systems to do some of their work and having their incompetence revealed when the computer generated work is shown to be ridiculously bad are amusing. But that is not the real problem. The real problems will start when the computers in police cars start flagging every car owned by a non-white person as having a "probable cause" for a drug stop.
Technically Not Financial Fraud
The majority of the ecosystem around "AI" is a financial scam [16]. There are companies and individuals doing good things with machine learning, some of which is based on hardware and software developed as part of this ecosystem. But the majority of it has no plausible path to profits and its future inevitably ends with some bankruptcies. There are circular flows of money that have the major cloud providers and NVidia looped in; when the values of these companies correct it will become apparent that they have all burned a lot of money keeping this running and all the senior people have got a share of it (the entire purpose of stock options is to allow senior people to suck money out of the company). Then every cloud provider will increase costs while under Chapter 11 and all the companies that depend on them will pay whatever it takes. That includes all major companies and most governments. Unlike the dot-com boom and crash and the housing crash, the coming financial crash will impact every company that we deal with and most governments. So the people in first-world countries will effectively be taxed to pay for this scam while the executives go party in Monaco. This may seem like an extreme claim but it all happened before with the dot com crash and the housing market crash.
The CEO class has an ongoing practice of doing things that aren't crimes because they lobby (bribe) politicians to make them legal. So the current stock market shenanigans around "AI" don't seem to involve things that governments consider to be crimes. But any normal person might be surprised to learn that such things are legal and most people would vote for such things to be crimes if they had the opportunity.
A global financial crisis is the least of the problems that seem likely to afflict society from "AI" systems. But it will be more immediately obvious when it happens - which could be this year!
Propaganda
Creating art requires skills that the type of people who want to create propaganda tend to lack. "AI" technologies allow creating "art", based on mathematical models of actual art, to the requirements of the person running the program.
I have seen the term "AI Fascism" used to describe the use of "AI" to help authoritarian governments. I am dubious about whether it deserves that term and while every article I've read about the topic has had some good points I thought that they were all weak points.
But there are lots of ways that governments can abuse their populations without going full fascist. In the last century there were lots of truly terrible governments that didn't even make the top 10 of fascism.
AI Sycophants
Bruce Schneier wrote an informative blog post about AI Chatbots and Trust which focused on sycophantic chatbots [17]. We have seen a lot of evidence of terrible behaviour and stupid decisions from rich people due to having no negative consequences for bad choices. The vast majority of the history of kings concerns bad decisions made by such people. A future where middle class and poor people can make the same bad decisions as rich people wouldn't be good.
Good Things About ML
Machine Learning (abbreviated as ML) can do useful things. It's not just Large Language Models (LLMs) such as ChatGPT etc. There are also ML systems that can analyse images and other data sets.
I have found ChatGPT to be very useful for making suggestions for improving blog posts. I don't get it to write anything, just ask for suggestions. It has pointed out things that I missed, such as when I didn't include the price when reviewing a car because the car in question was much more expensive than I will ever pay; the price wasn't relevant to me but would be to some readers. It has also made useful suggestions about the structure of blog posts, repeated points, and having a good conclusion. It has some downsides, which include trying to erase my voice from my writing, for example suggesting that the rhetorical question "does email suck?" is unprofessional.
I have worked for a company that used ML systems to analyse driver performance and alert people if a driver is falling asleep, using a phone, or otherwise seems unable to drive safely. Their business model involved a human reviewing the images from the drivers the computer flagged and then determining who is actually doing the wrong thing. This seems a good use of the technology.
I have also worked for a company that used ML systems to analyse the performance of bank employees and detect potentially fraudulent behaviour. Preventing crime seems to be clearly a good thing and in this case the manager of the employee in question would review the evidence to make sure that they weren't being falsely accused.
Conclusion
I don't think that the problems with managing the changes that so called "AI" is introducing are particularly new. An example of how society handles change that's worth considering is car safety. The seat belt first became mandatory for aeroplanes in some jurisdictions in 1928. The Model T Ford is widely regarded as the first vehicle to start a mass market for cars and it was produced from 1908 to 1927. So if society acted in a reasonable way then seat belts would have been a standard feature for the majority of mass market cars. However seat belts were first made compulsory in cars in 1970 in Victoria, Australia, and there are still people who think that they are safer without seat belts! The delay in adoption of car seat belts is only one example of needless deaths caused by not taking reasonable measures for car safety, but it's one that's easy to demonstrate and measure.
The difference between past problems like car safety and the current problems of "AI" is that the "AI" problems will be more pervasive. Most of my history as a car driver and car passenger was in cars that are much less safe than cars made in the last 10 years. But partly through luck I've never been in a serious crash, so being in cars that would have given me a low probability of surviving a freeway-speed crash didn't affect me. There is no possibility that through any combination of luck and skill someone could avoid the downsides of "AI". If nothing else the results of elections will be affected and no-one can avoid that.
As a society we really need to address the real issues related to "AI" which in some cases requires legislation.
- [1] https://en.wikipedia.org/wiki/Artificial_intelligence
- [2] https://en.wikipedia.org/wiki/ELIZA
- [3] https://qntm.org/mmacevedo
- [4] https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots
- [5] https://tinyurl.com/262hrtke
- [6] https://www.cdc.gov/mmwr/volumes/73/su/su7304a9.htm
- [7] https://www.vic.gov.au/safe-schools
- [8] https://www.aic.gov.au/sites/default/files/2020-05/cfi110.pdf
- [9] https://tinyurl.com/2cbhq737
- [10] https://tinyurl.com/254wy3br
- [11] https://tinyurl.com/2cnx7t48
- [12] https://tinyurl.com/27wgwqu4
- [13] https://tinyurl.com/2354dewc
- [14] https://tinyurl.com/28vhagfz
- [15] https://innocenceproject.org/
- [16] https://fromtheprism.com/anthropic-30-billion
- [17] https://tinyurl.com/2cc6tyov
09 May 2026 10:40am GMT