17 May 2026
Planet Grep
Jan De Luyck: Integrating the Ikea Starkvind with Home Assistant using deCONZ
Ikea Starkvind and Home Assistant
We recently bought an Ikea Starkvind air purifier, which supports Zigbee. I wanted to find out what I could do with it from within Home Assistant, ideally automating when it runs and when it doesn't. I also wanted to add some UI elements, like the mushroom fan card or the air purifier card, both of which rely on a fan entity being present.
Zigbee integration with deCONZ
I already had a ConBee II Zigbee USB controller in use with the deCONZ integration. Pairing the Starkvind was a matter of telling the Phoscon software (which comes with the deCONZ integration) to scan for new sensors and pushing the pairing button on the Starkvind.
Surprisingly enough, only three entities showed up in Home Assistant:
- Air Purifier PM25
- Fan Mode
- Filter Runtime
deCONZ entities exposed through Home Assistant for the Starkvind Air Purifier
When looking in the deCONZ application there were a lot more attributes:
deCONZ cluster information
The deCONZ integration uses a Python library for deCONZ, and in issue #322 I found that only these three items were requested to be added. I have since requested more, but it's uncertain if and when those will be made available.
I came across this blog post by OyWin, detailing how they used REST sensors to add their Starkvind to Home Assistant. The approach was definitely the right way to go, but I was not a fan of making so many individual REST calls (one per sensor) when it isn't needed: Home Assistant can do it in a single call per REST-API target.
deCONZ REST-API
Checking the deCONZ REST-API documentation for the Starkvind, there are a lot more attributes available, spread across two device types: ZHAAirPurifier and ZHAParticulateMatter.
The ones I wanted were:
ZHAAirPurifier
| Section | Attribute | Exposed via deCONZ integration | R/O or R/W |
|---|---|---|---|
| Config | filterlifetime | x | Read/Write |
| Config | ledindication | x | Read/Write |
| Config | locked | x | Read/Write |
| Config | mode | ✓ | Read/Write |
| Config | on | x | Read Only |
| State | deviceruntime | x | Read Only |
| State | filterruntime | ✓ | Read Only |
| State | lastupdated | x | Read Only |
| State | replacefilter | x | Read Only |
| State | speed | x | Read Only |
ZHAParticulateMatter
| Section | Attribute | Exposed via deCONZ integration | R/O or R/W |
|---|---|---|---|
| State | measured_value | ✓ | Read Only |
| State | airquality | x | Read Only |
Time to get those into Home Assistant.
Configuring the deCONZ REST-API port
In order to be able to query the deCONZ REST-API, you need to make sure a port is configured under Home Assistant → Settings → Apps → deCONZ → Configuration.
deCONZ App Network Configuration
If this port is set, you'll be able to issue HTTP queries to the URL of your Home Assistant installation on the specified port. In my case this is http://home-assistant.internal:40850/
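To check the port from the command line before wiring anything into Home Assistant, a small sketch works. The host and port below are the ones from my setup (substitute your own); that GET /api/config answers without an API key is per my reading of the deCONZ REST-API docs, so treat this as an assumption:

```python
import json
from urllib.request import urlopen

# Hostname and port from my installation; substitute your own.
base_url = "http://home-assistant.internal:40850"

# GET /api/config returns basic gateway info, reportedly without an API key.
config_url = f"{base_url}/api/config"
print(config_url)

# Uncomment to actually query the gateway:
# with urlopen(config_url) as resp:
#     info = json.load(resp)
#     print(info.get("name"), info.get("apiversion"))
```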
Finding the correct API URLs
To find the correct API endpoints, go to Home Assistant → deCONZ → Phoscon, open the hamburger menu on the left, and pick Help → API Information.
Phoscon API Information screen
In the subsequent screen, pick "Sensors" and look for "Ikea STARKVIND Air Purifier". You should find two entries in the dropdown:
Phoscon API Information for the Starkvind Air Purifier
Once you click on one of the sensors, you will get a dump of what the API returns, and on top of that window, the API endpoint URL. In my example this reads:
//home-assistant.internal:8123/api/hassio_ingress/juXMtc1g4Z85iNwXSis58q2z7Kw7XO0Lz5k2X6cBsZ0/api/792DA42905/sensors/93. The converted direct unauthenticated URL becomes http://home-assistant.internal:40850/api/792DA42905/sensors/93.
Phoscon API information for the ZHAAirPurifier entity of the Starkvind Air Purifier
792DA42905 is your own API key, and 93 is deCONZ's internal number for your sensor.
Now, this URL allows you to query the API from the outside. I did not need this, as I wanted to run the queries from inside Home Assistant. You can find the internal URL by going to Home Assistant → Settings → Devices & services, selecting the deCONZ integration, and picking the Conbee2. In the Service Info there is a "Visit" link, which shows you the internal hostname to use.
deCONZ Conbee2 Service Information
This will usually be core-deconz, so the URL becomes http://core-deconz:40850/api/<apikey>/sensors/<sensor-id>.
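Putting the pieces together, the internal URL is just string assembly. The API key and sensor id below are the example values from this post; use your own:

```python
# Assemble the internal deCONZ REST-API URL from its parts.
# API key and sensor id are the example values from this post.
api_key = "792DA42905"
sensor_id = 93
sensor_url = f"http://core-deconz:40850/api/{api_key}/sensors/{sensor_id}"
print(sensor_url)  # -> http://core-deconz:40850/api/792DA42905/sensors/93
```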
Home Assistant Configuration
Creating the REST sensors
Using the URL assembled above I added the sensor and binary_sensor entities to Home Assistant.
rest:
  - resource: http://core-deconz:40850/api/792DA42905/sensors/93
    binary_sensor:
      - name: Ikea Starkvind Led Indication
        value_template: "{{ value_json.config.ledindication }}"
        unique_id: ikea_starkvind_led_indication
      - name: Ikea Starkvind Locked
        value_template: "{{ value_json.config.locked }}"
        unique_id: ikea_starkvind_locked
      - name: Ikea Starkvind Sensor On
        value_template: "{{ value_json.config.on }}"
        unique_id: ikea_starkvind_sensor_on
      - name: Ikea Starkvind Replace Filter
        value_template: "{{ value_json.state.replacefilter }}"
        unique_id: ikea_starkvind_replace_filter
    sensor:
      - name: Ikea Starkvind Filter Runtime
        value_template: "{{ value_json.state.filterruntime }}"
        unique_id: ikea_starkvind_filter_runtime
        device_class: duration
        unit_of_measurement: min
      - name: Ikea Starkvind Device Runtime
        value_template: "{{ value_json.state.deviceruntime }}"
        unique_id: ikea_starkvind_device_runtime
        device_class: duration
        unit_of_measurement: min
      - name: Ikea Starkvind Filter Lifetime
        value_template: "{{ value_json.config.filterlifetime }}"
        unique_id: ikea_starkvind_filter_lifetime
        device_class: duration
        unit_of_measurement: min
      - name: Ikea Starkvind Mode
        value_template: "{{ value_json.config.mode }}"
        unique_id: ikea_starkvind_mode
      - name: Ikea Starkvind Fan Speed
        value_template: "{{ value_json.state.speed }}"
        unique_id: ikea_starkvind_fan_speed
        state_class: measurement
      - name: Ikea Starkvind Last Updated
        value_template: "{{ value_json.state.lastupdated + 'Z' }}"
        unique_id: ikea_starkvind_lastupdated
        device_class: timestamp
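To see how the value_template paths line up with the API response, here is an illustrative payload with the same shape as the ZHAAirPurifier sensor. The field names follow the deCONZ docs; the values are invented for this example:

```python
import json

# Illustrative payload; field names follow the deCONZ ZHAAirPurifier docs,
# the values are invented for this example.
payload = """
{
  "config": {"filterlifetime": 259200, "ledindication": true,
             "locked": false, "mode": "speed_1", "on": true},
  "state": {"deviceruntime": 12345, "filterruntime": 4321,
            "lastupdated": "2026-05-17T06:00:00.000",
            "replacefilter": false, "speed": 20}
}
"""
value_json = json.loads(payload)

# These mirror the value_template expressions in the REST sensors above:
print(value_json["config"]["ledindication"])      # config.ledindication
print(value_json["state"]["filterruntime"])       # state.filterruntime
print(value_json["state"]["lastupdated"] + "Z")   # append 'Z' to mark UTC
```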
Controlling the LED indicator
To be able to update ledindication, I added a toggle helper (input_boolean) called ikea_starkvind_ledindication, a rest_command to set it, and an automation to bind the two together:
rest_command:
  ikea_starkvind_set_ledindication:
    url: "http://core-deconz:40850/api/792DA42905/sensors/93"
    method: put
    content_type: "application/json; charset=utf-8"
    payload: '{ "config": { "ledindication": {{ states("input_boolean.ikea_starkvind_ledindication") | bool | lower }}}}'

alias: Ikea Starkvind - Sync Led Indication
description: ""
triggers:
  - trigger: state
    entity_id:
      - input_boolean.ikea_starkvind_ledindication
actions:
  - action: rest_command.ikea_starkvind_set_ledindication
    data: {}
mode: single
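To test the same call outside Home Assistant, the PUT request the rest_command sends can be reproduced directly. A sketch, with the same URL and payload shape as above (the actual network call is commented out so nothing fires by accident):

```python
import json
from urllib.request import Request, urlopen

url = "http://core-deconz:40850/api/792DA42905/sensors/93"
# The rendered payload for turning the LED off:
body = json.dumps({"config": {"ledindication": False}})
print(body)  # -> {"config": {"ledindication": false}}

req = Request(url, data=body.encode(),
              headers={"Content-Type": "application/json; charset=utf-8"},
              method="PUT")
# Uncomment to send it for real:
# with urlopen(req) as resp:
#     print(json.load(resp))
```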
Additional sensors
I also added a template sensor to calculate the remaining lifetime of the filter:
template:
  - sensor:
      - name: Ikea Starkvind Filter Lifetime Remaining
        state: "{{ states('sensor.ikea_starkvind_filter_lifetime') | int - states('sensor.ikea_starkvind_filter_runtime') | int }}"
        unique_id: ikea_starkvind_filter_lifetime_remaining
        device_class: duration
        unit_of_measurement: min
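The same arithmetic, checked outside of Jinja. The 259200-minute lifetime (180 days) is the filterlifetime value my unit reports; the runtime value is made up:

```python
# Hypothetical values: a 259200-minute (180-day) filter lifetime and an
# invented runtime, mirroring the template sensor and the cards' day math.
filter_lifetime = 259200
filter_runtime = 4320

remaining = filter_lifetime - filter_runtime    # minutes, as in the template sensor
remaining_days = round(remaining / 60 / 24, 1)  # conversion the cards use later on
print(remaining, remaining_days)  # -> 254880 177.0
```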
Creating a Fan entity
To use the premade cards I needed a fan entity. This can be created as a template, based on the previously created entities:
- fan:
    - name: "IKEA Starkvind"
      unique_id: ikea_starkvind_fan
      availability: "{{ states('select.ikea_starkvind_fan_mode') not in ['unknown', 'unavailable'] }}"
      state: "{{ states('select.ikea_starkvind_fan_mode') != 'off' }}"
      percentage: >
        {% set map = {
          'speed_1': 20, 'speed_2': 40, 'speed_3': 60,
          'speed_4': 80, 'speed_5': 100
        } %}
        {{ map.get(states('select.ikea_starkvind_fan_mode'), 0) }}
      preset_mode: >
        {% if states('select.ikea_starkvind_fan_mode') == 'auto' %}auto{% endif %}
      preset_modes:
        - auto
      speed_count: 5
      turn_on:
        action: select.select_option
        target:
          entity_id: select.ikea_starkvind_fan_mode
        data:
          option: auto
      turn_off:
        action: select.select_option
        target:
          entity_id: select.ikea_starkvind_fan_mode
        data:
          option: "off"
      set_percentage:
        action: select.select_option
        target:
          entity_id: select.ikea_starkvind_fan_mode
        data:
          option: >
            {% if percentage == 0 %} off
            {% elif percentage <= 20 %} speed_1
            {% elif percentage <= 40 %} speed_2
            {% elif percentage <= 60 %} speed_3
            {% elif percentage <= 80 %} speed_4
            {% else %} speed_5
            {% endif %}
      set_preset_mode:
        action: select.select_option
        target:
          entity_id: select.ikea_starkvind_fan_mode
        data:
          option: "{{ preset_mode }}"
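The two Jinja mappings in the fan template (mode → percentage and percentage → mode) can be sanity-checked in plain Python:

```python
# Mirrors the fan template's Jinja logic for a quick sanity check.
MODE_TO_PCT = {'speed_1': 20, 'speed_2': 40, 'speed_3': 60,
               'speed_4': 80, 'speed_5': 100}

def pct_for_mode(mode: str) -> int:
    # 'auto' and 'off' both report 0%, like the map.get(..., 0) fallback
    return MODE_TO_PCT.get(mode, 0)

def mode_for_pct(pct: int) -> str:
    # Same thresholds as the set_percentage template
    if pct == 0:
        return 'off'
    for mode, top in MODE_TO_PCT.items():
        if pct <= top:
            return mode
    return 'speed_5'

print(pct_for_mode('speed_3'))  # -> 60
print(mode_for_pct(50))         # -> speed_3
print(mode_for_pct(0))          # -> off
```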
Home Assistant Cards
I tried out a few cards to see what I liked.
Custom Purifier Card
My first try was the custom air purifier card with this configuration:
type: custom:purifier-card
entity: fan.ikea_starkvind
aqi:
  entity_id: sensor.ikea_starkvind_air_quality_measured_value
  unit: μg/m³
stats:
  - entity_id: sensor.ikea_starkvind_filter_lifetime_remaining
    value_template: "{{ (value | float(0) / 60 / 24 ) | round(1) }}"
    unit: days
    subtitle: Filter Life Remaining
shortcuts:
  - name: Speed 1
    icon: mdi:weather-night
    percentage: 20
  - name: Speed 2
    icon: mdi:circle-slice-2
    percentage: 40
  - name: Speed 3
    icon: mdi:circle-slice-4
    percentage: 60
  - name: Speed 4
    icon: mdi:circle-slice-6
    percentage: 80
  - name: Speed 5
    icon: mdi:circle-slice-8
    percentage: 100
  - name: Auto
    icon: mdi:brightness-auto
    preset_mode: auto
It ended up looking like this, which did not thrill me.
Air Purifier Card
Custom cards
I cobbled something together based on the mushroom-fan-card, the button-card, the template-entity-row and the lovelace-expander-card.
type: vertical-stack
cards:
  - type: custom:mushroom-fan-card
    entity: fan.ikea_starkvind
    icon_animation: true
    primary_info: name
    secondary_info: state
    show_percentage_control: true
    collapsible_controls: true
    show_direction_control: false
    show_oscillate_control: false
  - type: horizontal-stack
    cards:
      - type: custom:button-card
        icon: mdi:circle-outline
        entity: fan.ikea_starkvind
        show_name: false
        aspect_ratio: 1/1
        tap_action:
          action: call-service
          service: fan.set_percentage
          data:
            percentage: 0
          target:
            entity_id: fan.ikea_starkvind
        styles:
          card:
            - background-color: |-
                [[[
                  return entity.state === 'off'
                    ? 'rgba(var(--rgb-state-active-color), 0.2)'
                    : 'var(--ha-card-background)';
                ]]]
          icon:
            - color: |-
                [[[
                  return entity.state === 'off'
                    ? 'var(--state-active-color)'
                    : 'var(--secondary-text-color)';
                ]]]
      - type: custom:button-card
        icon: mdi:weather-night
        entity: fan.ikea_starkvind
        show_name: false
        aspect_ratio: 1/1
        tap_action:
          action: call-service
          service: fan.set_percentage
          data:
            percentage: 20
          target:
            entity_id: fan.ikea_starkvind
        styles:
          card:
            - background-color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 20
                    ? 'rgba(var(--rgb-state-active-color), 0.2)'
                    : 'var(--ha-card-background)';
                ]]]
          icon:
            - color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 20
                    ? 'var(--state-active-color)'
                    : 'var(--secondary-text-color)';
                ]]]
      - type: custom:button-card
        icon: mdi:circle-slice-2
        entity: fan.ikea_starkvind
        show_name: false
        aspect_ratio: 1/1
        tap_action:
          action: call-service
          service: fan.set_percentage
          data:
            percentage: 40
          target:
            entity_id: fan.ikea_starkvind
        styles:
          card:
            - background-color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 40
                    ? 'rgba(var(--rgb-state-active-color), 0.2)'
                    : 'var(--ha-card-background)';
                ]]]
          icon:
            - color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 40
                    ? 'var(--state-active-color)'
                    : 'var(--secondary-text-color)';
                ]]]
      - type: custom:button-card
        icon: mdi:circle-slice-4
        entity: fan.ikea_starkvind
        show_name: false
        aspect_ratio: 1/1
        tap_action:
          action: call-service
          service: fan.set_percentage
          data:
            percentage: 60
          target:
            entity_id: fan.ikea_starkvind
        styles:
          card:
            - background-color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 60
                    ? 'rgba(var(--rgb-state-active-color), 0.2)'
                    : 'var(--ha-card-background)';
                ]]]
          icon:
            - color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 60
                    ? 'var(--state-active-color)'
                    : 'var(--secondary-text-color)';
                ]]]
      - type: custom:button-card
        icon: mdi:circle-slice-6
        entity: fan.ikea_starkvind
        show_name: false
        aspect_ratio: 1/1
        tap_action:
          action: call-service
          service: fan.set_percentage
          data:
            percentage: 80
          target:
            entity_id: fan.ikea_starkvind
        styles:
          card:
            - background-color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 80
                    ? 'rgba(var(--rgb-state-active-color), 0.2)'
                    : 'var(--ha-card-background)';
                ]]]
          icon:
            - color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 80
                    ? 'var(--state-active-color)'
                    : 'var(--secondary-text-color)';
                ]]]
      - type: custom:button-card
        icon: mdi:circle-slice-8
        entity: fan.ikea_starkvind
        show_name: false
        aspect_ratio: 1/1
        tap_action:
          action: call-service
          service: fan.set_percentage
          data:
            percentage: 100
          target:
            entity_id: fan.ikea_starkvind
        styles:
          card:
            - background-color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 100
                    ? 'rgba(var(--rgb-state-active-color), 0.2)'
                    : 'var(--ha-card-background)';
                ]]]
          icon:
            - color: |-
                [[[
                  return entity.state === 'on' && entity.attributes.percentage === 100
                    ? 'var(--state-active-color)'
                    : 'var(--secondary-text-color)';
                ]]]
  - type: custom:expander-card
    title: Details
    cards:
      - type: entities
        entities:
          - entity: sensor.ikea_starkvind_last_updated
            name: Last Updated
          - entity: input_boolean.ikea_starkvind_ledindication
            name: Led Indicator
          - entity: sensor.ikea_starkvind_air_quality
            name: Air Quality Indicator
          - type: custom:template-entity-row
            state: >-
              {{ states('sensor.ikea_starkvind_air_quality_measured_value') }} µg/m³
            name: Air Quality
            icon: mdi:bacteria-outline
          - entity: binary_sensor.ikea_starkvind_replace_filter
            name: Replace Filter
          - type: custom:template-entity-row
            name: Filter Life Remaining
            state: |-
              {{ (states('sensor.ikea_starkvind_filter_lifetime_remaining') | float(0) / 60 / 24) | round(1) }} days
            icon: mdi:timelapse
    animation: false
    clear-children: false
This one I kept :)
Collapsed custom card
Unfolded custom card
17 May 2026 6:54am GMT
Dries Buytaert: Acquia builds Drupal funding into its partner program

Today Acquia announced something I'm really proud of. We're calling it the Acquia Fair Trade Initiative.
When an Acquia partner closes a deal, 2% of that deal flows directly to the Drupal Association, credited in the partner's name, to fund Drupal's infrastructure and long-term growth.
Imagine an Acquia partner closes a $100,000 Drupal deal with Acquia. $2,000 goes to the Drupal Association, attributed to that partner. The 2% comes from Acquia, not from partner margins, so the partner keeps their full revenue and incentives.
The donation is publicly attributed in the Acquia Partner Portal and counts toward the partner's standing in the Drupal Association's Certified Partner Program. It is recognized as financial support for the Drupal Association, separate from non-financial contributions like code, case studies, or community participation.
Most of all, I like that this program is structural. It is not a one-time gift or sponsorship campaign. It is built into the economics of Acquia's partner program, so Drupal's funding grows automatically as Acquia and its partners grow.
Too often, funding for Open Source projects depends on periodic fundraising or individual goodwill. That can work, but it rarely scales in a predictable way.
Open Source sustainability works best when incentives align. With the Fair Trade Initiative, the Drupal Association receives more predictable funding, partners receive recognition through the Drupal Association's Certified Partner Program, and Acquia invests in the long-term health of the Drupal ecosystem its business depends on. And yes, this also creates more incentive for partners to work with Acquia on Drupal projects. Drupal wins, Acquia's partners win, and Acquia wins too. That is what incentive alignment looks like.
I set a reminder for myself to report back in a year, maybe sooner. I'm curious to see what this model can become.
17 May 2026 6:54am GMT
Dieter Plaetinck: Open Source Consulting & Advisory
I've been an enthusiastic contributor in the Open Source community for over 25 years. During my career, I've worked with and on Open Source software from multiple angles: as a builder, user, seller, customer, and investor. I've seen projects grow and falter. I've seen how licensing and business decisions can either destroy or boost projects and their communities. The biggest killer of promising projects and businesses, though, is probably blind spots: challenges that were not expected or well understood.
I was the first engineering hire when Grafana Labs was founded. I was never a founder, board member, or executive, but I worked directly with its exceptional founders, executives and leaders. It's where I learned how to build Open Source software and enterprise OSS businesses the right way. I learned about productive collaboration with communities, with customers, and with colleagues regardless of which department they're in. I also learned the subtle interdependencies between GTM, engineering, and support, and what it really takes to build, launch, and sell products. It takes many revisions of products, their positioning, and licensing strategy (among others).
After 9 years I left Grafana and started exploring many cool OSS projects and people building them. It turned out I have some relevant experience that is complementary to their own expertise. So I started investing in cool projects and refining my thesis on commercial Open Source models, including Open Core, Source Available, and Fair Source. (see Fair Source: the next best model for commercial open source?)
Cool Open Source software - and the people who build it - deserves support and funding.
- For funding, sometimes the answer is Venture Capital. Sometimes it's staying independent and bootstrapping, or using donations or endowments (e.g. OSS Pledge, Open Source Endowment, etc)
- For support, sometimes you just need a free friendly chat for advice. Sometimes you need a consultant, or an advisor. I'm happy to help in any way I can.
I don't claim to be the world's expert, but the few startups that worked with me were glad they did, so I decided to launch a consulting business. If this sounds interesting, check out the Consulting/advisory page for more details.
17 May 2026 6:54am GMT
Planet Debian
Russ Allbery: Review: Unwinding Anxiety
Review: Unwinding Anxiety, by Judson Brewer
| Publisher: | Avery |
| Copyright: | 2021 |
| ISBN: | 0-593-33045-5 |
| Format: | Kindle |
| Pages: | 268 |
Unwinding Anxiety is a non-fiction self-help book about how to reduce anxiety. The author is a board-certified psychiatrist specializing in addiction and substance abuse, who has subsequently done clinical and research (and commercial, more on that later) work in anxiety. His previous book, The Craving Mind, was a pop science treatment of addiction research. This book is more deliberately structured as a self-help guide.
(The cover will assure you that he has an M.D. and a Ph.D. I don't include honorifics and degrees in author listings as a small protest against the weird social rules about which degrees count and which don't.)
There are a lot of self-help books out there about anxiety. There are a lot fewer that say something relatively original. I think this is one of the latter, but I certainly have not done a survey of the subgenre, and it's possible the ideas here are only new to me. Brewer makes three basic claims in this book, all of which I found personally useful:
-
Anxiety can be usefully analyzed as a habit. The rumination loop and other related anxiety behaviors such as excessive analysis, reassurance-seeking, and negative anticipation take the form of deeply ingrained habits triggered by stimuli.
-
Raw willpower is not a useful way to break habits in general and anxiety habits in particular. In order to displace the habit, you have to retrain the part of your brain that runs habits on autopilot. Attempting to override it with willful effort is exhausting and likely to fail.
-
Habit loops in general, and anxiety loops in particular, can be defused and replaced using mindfulness techniques.
This is not the way Brewer lays out the book. He goes to some effort to lead the reader slowly through three techniques for handling anxiety (for which he uses the metaphor of "gears," like for a bicycle or car) by introducing them one at a time and encouraging the reader to become thoroughly familiar with each one before moving on to the next. Since this is a book review, I'm going to give you the whole argument at once so that you know where this book is going. This may be less helpful in practice; if you're trying to use this technique on your own anxiety, you may want to read the book instead and not jump ahead.
Brewer's three gears are:
-
Identify your habit loops and recognize when they're happening. (This part felt the most similar to traditional cognitive behavioral therapy to me.)
-
Focus on how those habit loops make you feel. Rather than trying to force the habit loop to stop, let it happen but pay very close attention to the outcome and its effects on you.
-
Find and focus on a different reaction that provides better rewards than the anxiety habit loop. Brewer suggests curiosity.
For me, the point where I thought "okay, you have my attention" is when Brewer described the way many people, particularly people without anxiety, tell people with anxiety to "just stop thinking about it" or "just do the thing you're anxious about anyway and you'll see it will be fine" and then described in detail why he believes that doesn't work. This is one of the few discussions of anxiety I've read where the author goes out of his way to stress that you cannot simply think your way out of anxiety and that repeatedly trying to do so and failing is exhausting and demoralizing.
Everyone is different and I know some people find cognitive behavioral therapy very helpful, but I find the constant effort to challenge cognitive distortions more draining and demoralizing than useful. His second gear, of not directly confronting the habit loop but instead watching its effect and thinking about its outcome, feels so much more approachable to me. Assuming, of course, it works.
Brewer's approach is essentially just mindfulness, although he mostly avoids the (to me at least) somewhat off-putting typical introduction to mindfulness via religious practice or general well-being and instead ties it to a theorized model of how habits work in the human brain. His contention is that habits, including anxiety, exist because at some point they provided a reward that was sufficiently compelling to make the habit-following part of your brain seek that reward. You were getting some benefit (a sense of control, a sense of being prepared, temporary reassurance, etc.) out of the anxiety reaction, which is why the anxiety habit formed in the first place. Once that habit is in place, it can continue without the reward. (Although in my experience there is probably still some short-term reward.)
Rather than trying to force yourself to stop following the habit, Brewer instead suggests letting the habit happen but then focusing (via mindfulness) on how following the habit makes you feel, whether it improves your sense of well-being or worsens it, and whether other actions produce different feelings. The goal, in other words, is to undermine the assumption of reward and to challenge any short-term reward with the long-term discomfort that made you want to stop being anxious.
This avoids using your conscious brain to exert direct willpower, which is exhausting and usually unsuccessful since the habit-following part of your brain is stronger (for various evolutionary psychology reasons he explains and that I found at least partly credible). Instead, you are using its strengths of observation and classification. You pay close attention to the ways in which the habit loop makes you feel bad, which in theory provides feedback to the habit-following part of your brain that can dislodge the habit. If the habit is recognized as no longer rewarding, it will weaken.
Brewer's background is in addiction treatment, so he is predisposed to see addiction in everything and one should probably be a bit cautious about his enthusiasm. He claims a great deal of success with this approach in clinical settings, mostly with addiction but also with anxiety, but this is always hard to verify. (Few doctors who write self-help books rigorously document their failures.) He apparently also has a company that produces various phone apps that assist with this technique. I'm rather cynical about anyone who talks about products their company has produced in self-help books of this type, and I'm also rather cynical about anyone who calls himself "Dr. Jud," but the book doesn't seem to be a sales pitch and there's no direct information in it about how to get the apps.
For me, the first two parts of the book were the most useful and the conception of anxiety reactions as habits made a surprising amount of intuitive sense. I thought the third part of the book, where he tries to describe a better in-the-moment reaction that you can try to build into a more beneficial habit, to be the weakest. It's mostly stock mindfulness advice that I've seen in other places, and you will be entirely unsurprised to learn that Brewer meditates and has studied meditation. I think it's clear that, for him, a feeling of curiosity works as an anxiety replacement; I'm not sure that's universal and I'm not sure it works for me.
That core idea that anxiety reactions are a type of addictive habit that have outlived their useful rewards but continue because habits are hard to change felt both useful and at least a little bit true, though. Your mileage may, of course, vary, but I've been trying out various ideas from this book since I first started reading it, and I think it's helping. If any of this clicks with you and you're also prone to anxiety, it might be worth a read.
One warning, though: Brewer's previous work on addiction includes binge eating, and while it's not a primary focus, he uses several weight loss and disordered eating examples and has a very traditional medical attitude towards weight. I'm somewhat dubious of the addiction model of weight gain in general, but more to the point, it's rather off-putting in a book supposedly about anxiety. It's something I was able to skim over, but be aware going in if you're likely to find this obnoxious.
I do think this book is a case of an addiction researcher seeing everything through the lens of addiction, and I'm a little dubious this is the right model for everyone's anxiety. But this is one of the good reasons why there are a lot of books about anxiety: Different approaches suit different people. This one made more sense to me than most; maybe you are similar.
I can't really recommend or not recommend a book like this, since I think so much will depend on whether you are one of the people for whom this specific explanation will click, but I'm glad that I read it and I think it's good to know that this model of anxiety exists.
Rating: 8 out of 10
17 May 2026 2:52am GMT
15 May 2026
Planet Debian
Antoine Beaupré: The Four Horsemen of the LLM Apocalypse
I have been battling Large Language Models (LLMs) for the past couple of weeks and have struggled to think about what it means and how to deal with the fallout.
Because the fight has come from many fronts, I've come to articulate this in terms of the Four Horsemen of the Apocalypse.
Soundtrack: Metallica's The Four Horsemen, preferably downloaded from Napster around 2000, but now I guess you get it on YouTube.
War: bot armies
Let's start with War. We've been battling bot armies for control of our GitLab server for a while. Bots crawl the virtually infinite endpoints of our Git repositories (as opposed to downloading an archive or a shallow clone), including our fork of Firefox, Tor Browser, a massive repository.
At first, we tried various methods: robots.txt, blocking user agents, and finally blocking entire networks. I wrote asncounter. It worked for a while.
But now, blocking entire networks doesn't work: they come back some other way, typically through shady proxy networks, which is kind of ironic considering we're essentially running the largest proxy network of the world.
Out of desperation, we've forced users to use cookies when visiting our site. We haven't deployed Anubis yet, as we worry that bots have broken Anubis anyway and that it does not really defend against a well-funded attacker, something Pretix already warned about in 2025.
(We have a whole discussion regarding those tools here.)
But even that, predictably, has failed. I suspect what we consider bots are now really agents. They run full web browsers, JavaScript included, so a feeble cookie is no match for the massive bot armies.
Side note on LLM "order of battle"
We often underestimate the size of that army. The cloud was huge even before LLMs, serving about two thirds of the web. Even larger swaths of clients like government and corporate databases have all moved to the cloud, in shared, but private infrastructure with massive spare capacity that is readily available to anyone who pays.
LLMs have made the problem worse by dramatically expanding the capacity of the "cloud". We now have data centers that defy imagination with millions of cores, petabytes of memory, exabytes of storage.
I thought that 25 gigabit residential internet in Switzerland could bring balance, but this is nothing compared to the scale of those data centers.
Those companies can launch thousands, if not millions of fully functional web browsers at our servers. Computing power or bandwidth are not a limitation for them, our primitive infrastructure is. No one but hyperscalers can deal with this kind of load, and I suspect that they are also struggling, as even Google is deploying extreme mechanisms in reCAPTCHA.
This is the largest attack on the internet since the Morris worm, but while Robert Tappan Morris went to jail on a felony, LLM companies are celebrated as innovators and will soon be too big to fail.
Which brings us to the second horseman, Famine.
Famine: shortages
All that computing power doesn't come out of thin air: it needs massive amounts of hardware, power, and cooling.
Earlier this year, I heard from a colleague that their Dell supplier refused to even provide a quote before August. Dell!
In February, Western Digital's hard drive production for 2026 was already sold out. Hard drives essentially doubled in price within a year, and some have now tripled. A server quote we had in November has now quadrupled, going from 10 thousand to FORTY thousand dollars for a single server.
But regular folks are facing real-life shortages as well, as city-size data centers are being built at breakneck speed, stealing fresh water and energy from human beings to feed the war machine.
We've been scared of losing our jobs, but it seems that apocalypse has yet to fully materialize. Regardless, for engineers, the market feels tighter than it did a couple of years ago, and everyone feels on edge, expecting they will simply have to learn to operate LLMs to keep their jobs.
Which brings us, of course, to Death.
Death: security and copyright
Our third horseman is one I did not expect a couple of months ago. Back at FOSDEM, curl's maintainer Daniel Stenberg famously complained about the poor quality of LLM-generated reports but then, a few months later, everyone is scrambling to deal with floods of good reports.
In the past two weeks, this culminated in a significant number of critical security issues across multiple projects. Chained together, remote code execution vulnerabilities in Nginx and Apache and two local privilege escalations in the Linux kernel (dirtyfrag and fragnesia) essentially gave anyone root access to any unpatched server on the web.
As I write this, another vulnerability dropped, which gives read access to any file to a local user, compromising TLS and SSH private keys.
All those vulnerabilities were released without any significant coordination while people scrambled to mitigate them.
Many people, including Linus Torvalds, are now considering issues discovered through LLMs to be essentially public. This puts some debates about disclosure processes in perspective, to say the least.
But this is not merely the death of the traditional coordinated disclosure process, the C programming language, or the Linux kernel: remember that those bots are trained on a large corpus of copyrighted material. Facebook has trained their models on pirated books and Nvidia has done deals with Anna's Archive to secure access to large swaths of copyrighted material. The US Congress seems to think LLM outputs are not copyrightable, like any other machine outputs.
With many people now vibe coding their way out of learning or remembering how computers work, is this the Death of Copyright?
And that, of course, brings us to the final horseman: Pestilence.
Pestilence: slop
There is a growing meme that programming is essentially over as we know it. That you can simply vibe-code applications from scratch and it's pretty good.
Maybe that's true.
So far, most of my attempts at resolving any complex problem with an LLM have failed in bizarre ways. Some worked surprisingly well. Maybe, of course, I am holding it wrong.
I personally don't believe LLMs will ever be good enough to produce and maintain software at scale. They're surprisingly good at finding security flaws right now. But what I see is also a lot of Bullshit, with a capital B. It's not lying: it does not "know" anything, so it can't lie. It's misleadingly cohesive and deliberate, but it lacks meaning, intent, will.
I have not been confronted with much slop, apart from the lobster Jesus or the yellow man atrocities, and particularly not in my work. But I see what it is doing to my profession: beyond vibe-coding, people are now token-maxxing, and land-grabbing their colleagues.
I don't like what LLMs do to our communities, or the fabric of software we live with.
Software does not evolve in a void. It is a team effort, be it free software or a corporate product. Generations of humans have carefully built the scaffolding of technology required for modern networks and software to operate, in a convoluted contraption that no single human fully understands anymore.
The idea of simply giving up on that understanding entirely and delegating it to an unproven model is not only chilling, it feels just plain stupid. Not stupid as in Skynet, stupid as in "I can't get inside the data center because the authentication system is down". Except we're in a "the power plant doesn't reboot" or "their LLM found an 0day in our slop" kind of stupid.
The fifth horseman
Researching for this article, I looked up the four horsemen and found out the original list seems to have been:
- Famine
- War
- Death
- Conquest (??)
I was surprised. I grew up thinking about the horsemen being Famine, War, Pestilence, and Death. So I went back to my original source which actually claims the horsemen are:
Time has taken its toll on you, the lines that crack your face.
Famine, your body, it has torn through, withered in every place.
Pestilence for what you've had to endure, and what you have put others through
Death, deliverance for you, for sure, now there's nothing you can do
So I guess that makes no sense either; fair enough, I shouldn't rely on Metallica for theological references. Especially since that song was originally called Mechanix and was "about having sex at a gas station".
Anyways.
The point is, there are actually five horsemen, and the fifth one is, in my opinion, Conquest.
Those companies (and not "AI", mind you) are taking over the world. I sense a strong connection with the "post-truth" world imposed on us by fascists like Trump and Putin. It's not an accident, it's a power grab, part of the Californian Ideology3. Just like Airbnb broke housing, Uber destroyed transportation, and Amazon is taking over retail and server hosting, LLM companies are essentially trying to take over, if not everything, at least Cognition as a whole.
But the capitalization of those companies (OpenAI and Nvidia in particular) is so far beyond reason that their inevitable collapse will likely lead to a global financial crisis of biblical proportions.
Because they will inevitably fail, like the previous bubbles they are built on. And when they fail, I hope it zips all the way back through the blockchain scam, the ad surveillance system, and the dot-com bubble, and then gives me back my internet.
The Tower of Babel
While I'm off in the woods hallucinating (ha!) on biblical allegories, I feel there's another sign that the apocalypse is coming.
The Tower of Babel myth says that humans tried to create a big tower up to heaven and become god. God confounds their speech and scatters the human race. End of utopia.
This is what is happening to our human translators now. LLMs being, after all, Language Models, they are excellent at translation work. So much so that the only translators not yet replaced by LLMs are interpreters, who translate vocally in real time. But interpreters are worried about their jobs as well.
This concretely means we will lose the human capacity, as a civilization, to translate for one another. It is still an open question whether the remaining revision work will be enough for translators to avoid deskilling, but other research has shown that LLM use leads to cognitive decline, impacts critical thinking, and that, generally, deskilling is a common outcome.
Ultimately, I think this is where LLMs bring us. Towards collapse.
So this is a call to arms. Fight back!
Poison bots. Build local real-world communities.
Go low tech. Moore's law is dead, make use of it.
Patch your shit. Go weird.
Refuse slop. Train your brain.
The horsemen will collapse, but let's not go down with them.
This article was written without the use of a large language model and should not be used to train one.
- I prefer "LLM" to Artificial Intelligence, as I don't consider models to have "Intelligence" which goes far beyond the analytical traits we train models for. Intelligence requires embodiment and social interaction; machines lack the innate human skills of empathy, feeling and care, which explains a lot of the evils behind the current trends.↩
- It should be noted that Morris also happened to be one of the founders of Y Combinator, where he is in good company with other techno-fascists like Peter Thiel, Sam Altman, and so on. Crime, after all, pays.↩
- Probably a good time to watch All Watched Over by Machines of Loving Grace.↩
15 May 2026 9:25pm GMT
Bits from Debian: New Debian Developers and Maintainers (March and April 2026)

The following contributors got their Debian Developer accounts in the last two months:
- Filip Strömbäck (fstromback)
- Arthur Diniz (arthurbd)
- Manuel Traut (manut)
- Xiyue Deng (manphiz)
- kpcyrd (kpcyrd)
The following contributors were added as Debian Maintainers in the last two months:
- Chris Talbot
- Gabriel Filion
- Mate Kukri
Congratulations!
15 May 2026 2:00pm GMT
05 May 2026
Planet Lisp
ECL News: ECL 26.5.5 release
We are announcing a bugfix ECL release that addresses a few issues that slipped through testing of the recent one.
Addressed issues:
- bugfix: MAKE-PACKAGE destructively modified defining form's cons cells of the package local nicknames, breaking package literals in bytecmp (#839)
- bugfix: the first environment is now always page-aligned by using the same allocation mechanism as all subsequent envs (#828)
- bugfix: allow loading concatenated fasc files (#842)
- bugfix: defclass does not redefine existing classes at compile time with forward-referenced classes in the bytecodes compiler (#843)
This release is available for download in the form of a source code archive (we do not ship prebuilt binaries):
Happy Hacking,
The ECL Developers
05 May 2026 12:00pm GMT
Gábor Melis: DRef Leaves Home
Version 0.5 of DRef, the definition reifier, is now available. It has moved to its own repository, completing its separation from PAX, where it was originally developed.

This was a long time coming. Twelve years ago today, PAX was born. From the start, PAX used the concept of locatives to refer to definitions without first-class objects. For example, to generate documentation for the *MY-VAR* variable, one could use the VARIABLE locative as in (*MY-VAR* VARIABLE). PAX needed to be able to tell whether such a definition exists, as well as access its docstring and source location.
Over time, this mechanism evolved into a portable, extensible introspection library independent of PAX. I began separating the two projects two years ago and named the new library DRef, though they continued to share a repository. I have now removed the remaining dependencies so that DRef can live on its own.
05 May 2026 12:00am GMT
01 May 2026
Planet Lisp
Joe Marshall: Echoes of the Lisp Listener
The Lisp Machine Listener had an electric close parenthesis: when the user typed the close parenthesis that completed the top-level form, the form would be sent to the REPL right away, with no need to press Enter. Here's how to get this behavior with SLY:
(defun my-sly-mrepl-electric-close-paren ()
  "Insert ')' and auto-send ONLY if we are closing a top-level Lisp form."
  (interactive)
  (let ((state (syntax-ppss)))
    (insert ")")
    ;; Safety checks:
    ;; 1. We were at depth 1 (so we are now at depth 0)
    ;; 2. We aren't in a string or comment
    ;; 3. The input actually starts with a paren (it's a form, not a sentence)
    (when (and (= (car state) 1)
               (not (nth 3 state))
               (not (nth 4 state))
               (string-match-p "^\\s-*("
                               (buffer-substring-no-properties (sly-mrepl--mark) (point))))
      (sly-mrepl-return))))
Another cool hack is to get the REPL to do double duty as a command line to the LLM chatbot. When you type RET in the REPL, it will check if the input is a complete lisp form. If so, it will send the form to the REPL as normal. If not, it will send the input to the chatbot. Here's how to do this:
(defun my-sly-mrepl-electric-return ()
  "Send to Lisp if it's a form/symbol, or wrap in (chat ...) if it's a sentence."
  (interactive)
  (let* ((beg (marker-position (sly-mrepl--mark)))
         (end (point-max))
         (input (buffer-substring-no-properties beg end))
         (trimmed (string-trim input)))
    (cond
     ;; If it's empty, just do a normal return
     ((string-blank-p trimmed)
      (sly-mrepl-return))
     ;; If it starts with a paren, quote, or hash, it's definitely a Lisp form
     ((string-match-p "^\\s-*[(#'\"]" trimmed)
      (sly-mrepl-return))
     ;; If it's a single word (no spaces), treat it as a symbol/form (e.g., *package*)
     ((not (string-match-p "\\s-" trimmed))
      (sly-mrepl-return))
     ;; Otherwise, it's a sentence. Wrap it and fire.
     (t
      (delete-region beg end)
      (insert (format "(chat %S)" trimmed))
      (sly-mrepl-return)))))
Install as follows:
;; Apply to SLY MREPL with a safety check for the mode map
(with-eval-after-load 'sly-mrepl
  (define-key sly-mrepl-mode-map (kbd "RET") 'my-sly-mrepl-electric-return)
  (define-key sly-mrepl-mode-map (kbd ")") 'my-sly-mrepl-electric-close-paren))
01 May 2026 5:29pm GMT
25 Apr 2026
FOSDEM 2026
All FOSDEM 2026 videos are online
All video recordings from FOSDEM 2026 that are worth publishing have been processed and released. Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026. While all released videos have been reviewed by a human, it remains possible that one or more issues fell through the cracks. If you notice any problem with a video you care about, please let us know as soon as possible so we can look into it before the video-processing infrastructure is shut down for this edition. To report any…
25 Apr 2026 10:00pm GMT
29 Jan 2026
FOSDEM 2026
Join the FOSDEM Treasure Hunt!
Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…
29 Jan 2026 11:00pm GMT
26 Jan 2026
FOSDEM 2026
Call for volunteers
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…
26 Jan 2026 11:00pm GMT