17 Feb 2026

Planet Debian

Russell Coker: Links February 2026

Charles Stross has a good theory of why "AI" is being pushed on corporations; really we need to just replace CEOs with LLMs [1].

This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. This is definitely going to be a recursive problem, as people who believe in it invest in it.

An interesting analysis of dbus and a design for a more secure replacement [3].

Scott Jenson gave an insightful lecture for Canonical about future potential developments in the desktop UX [4].

Ploum wrote an insightful article about the problems caused by the GitHub monopoly [5]. Radicale sounds interesting.

Niki Tonsky wrote an interesting article about the UI problems in Tahoe (the latest macOS release) caused by trying to make an icon for everything [6]. They have a really good writing style, and the article is well researched.

Fil-C is an interesting project to compile C/C++ programs in a memory-safe way; it can be considered a software equivalent of some of what CHERI does [7].

Brian Krebs wrote a long list of the ways that Trump has enabled corruption and a variety of other crimes including child sex abuse in the last year [8].

This video about designing a C64 laptop is a masterclass in computer design [9].

Salon has an interesting article about the abortion thought experiment that conservatives can't handle [10].

Ron Garrett wrote an insightful blog post about abortion [11].

Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!

17 Feb 2026 8:09am GMT

16 Feb 2026

Planet Debian

Antoine Beaupré: Keeping track of decisions using the ADR model

In the Tor Project System Administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes, or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams, inside and outside Tor, to evaluate this process and see if it can improve their own processes and documentation.

The new process

We had traditionally been using an "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation is similarly simple, as it has only 5 headings.
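
For readers who haven't seen it, the classic Nygard template boils down to a handful of headings along these lines (this is the generic version; TPA's exact wording lives in ADR-100 and may differ):

  1. Title
  2. Status (proposed, accepted, deprecated, superseded)
  3. Context
  4. Decision
  5. Consequences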

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping all sorts of details like pricing or in-depth comparisons of alternatives into the document, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision; the new template (and process) clarifies the decision makers for each decision.

Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders.

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, keeping things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was reviewing our RFC process, following Jacob Kaplan-Moss's criticism of RFC processes in general. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because the right stakeholders weren't looped in.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it lasted: it was better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in hearing from people who are already using a similar process, or who will adopt one after reading this.

Note: this article was also published on the Tor Blog.

16 Feb 2026 8:21pm GMT

Philipp Kern: What is happening with this "connection verification"?

You might see a verification screen pop up on more and more Debian web properties. Unfortunately, the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered, scalable serving systems. The issues have been at three layers:

  1. Apache's serving capacity fills up, with no threads left to serve requests. This means that your connection will sit around for a long time without being accepted. In theory the capacity can be configured higher, but that only helps if requests are actually handled in time.
  2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
  3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Ideally we would go and solve some of the scalability issues in the services themselves; however, there is also the question of how much we want to be able to serve, as AI scraper demand is just a steady stream of requests whose results are never shown to humans.

How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is terminated by hitch, and TLS "on-loading" towards the backends is done using haproxy; that way TLS goes in and TLS goes out. While Varnish does cache content that is cacheable (e.g. responses that do not depend on cookies), that is not the primary reason for using it: it can be used for flexible request and response rewriting.
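
Roughly, the request path is the following (a sketch based on the description above; the exact protocol spoken between the daemons on each hop is an assumption):

    client --(TLS)--> hitch --(decrypted HTTP)--> varnish --(HTTP)--> haproxy --(TLS)--> backend web host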

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some WebCrypto in JavaScript, because that looked similar to what other projects do (e.g. haphash, which originally inspired the solution). However, so far it looks like scrapers generally do not run with JavaScript enabled, so this whole crypto proof-of-work business could probably be replaced with just a JavaScript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at WebCrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
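
To make the proof-of-work idea more concrete, here is a minimal, purely illustrative hashcash-style sketch in TypeScript using the WebCrypto API. Everything specific in it (the cookie name, the difficulty, the token format, the function names) is invented for illustration and is not how the Debian challenge page actually works:

    // Hashcash-style proof of work: find a nonce such that
    // SHA-256(seed + ":" + nonce) has at least `difficultyBits` leading zero bits.
    async function solveChallenge(seed: string, difficultyBits: number): Promise<string> {
      const encoder = new TextEncoder();
      for (let nonce = 0; ; nonce++) {
        const digest = await crypto.subtle.digest("SHA-256", encoder.encode(`${seed}:${nonce}`));
        const bytes = new Uint8Array(digest);
        // Count the leading zero bits of the digest.
        let zeros = 0;
        for (const b of bytes) {
          if (b === 0) { zeros += 8; continue; }
          zeros += Math.clz32(b) - 24; // clz32 counts from bit 31; a byte only uses bits 7..0
          break;
        }
        if (zeros >= difficultyBits) {
          return `${seed}:${nonce}`; // the server can re-hash this cheaply to verify the work
        }
      }
    }

    // Hypothetical usage: solve the challenge, store the token, reload the original page.
    solveChallenge("server-provided-seed", 12).then((token) => {
      document.cookie = `pow-token=${token}; path=/`;
      location.reload();
    });

The point of such a scheme is the asymmetry: the client has to try many nonces, while the server verifies a candidate with a single hash. As noted above, in practice most scrapers never get this far because they do not execute JavaScript at all.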

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or your setup will break in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). It turns out that the search engines do not actually run JavaScript either, and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

Conclusion

I hope that we have now found something of a sweet spot, where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.

16 Feb 2026 7:55pm GMT