26 Apr 2025
Planet Debian
John Goerzen: NNCPNET Can Optionally Exchange Internet Email
A few days ago, I announced NNCPNET, the email network based atop NNCP. NNCPNET lets anyone run a real mail server on a network that supports all sorts of topologies for transport, from Internet to USB drives. And verification is done at the NNCP protocol level, so a whole host of Internet email bolt-ons (SPF, DMARC, DKIM, etc.) are unnecessary.
Shortly after announcing NNCPNET, I added an Internet bridge. This lets you get your own DOMAIN.nncpnet.org domain, and from there route email to and from the Internet using a gateway node. Simple, effective, and a way to get real email to and from your laptop or Raspberry Pi without having to have a static IP, SPF, DMARC, DKIM, etc.
It's a volunteer-run, free service. Give it a try!
26 Apr 2025 1:01am GMT
25 Apr 2025
Planet Debian
Simon Josefsson: GitLab Runner with Rootless Privilege-less Podman on riscv64
I host my own GitLab CI/CD runners, and find that having coverage of the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice; the P550 can be purchased online. You also need a (mini-)ATX chassis, a power supply (~500W is more than sufficient), a PCIe-to-M.2 converter and an NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything in place, connect ATX power, and connect the cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box; the front-panel power button will then start your machine. There is a P550 user manual available.
Below I will guide you through installing the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configuring it to use Podman in rootless mode. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64, I'm waiting for you! I wouldn't recommend using this machine for anything sensitive: there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any riscv64 hardware that has been proven able to run a libre OS; all of it appears to require special patches and/or non-mainline kernels.
- Login on console using username 'ubuntu' and password 'ubuntu'. You will be asked to change the password, so do that.
- Start a terminal, gain root with
sudo -i
and change the hostname:
echo jas-p550-01 > /etc/hostname
- Connect ethernet and run:
apt-get update && apt-get dist-upgrade -u
- If your system doesn't have valid MAC addresses (they show as '8c:00:00:00:00:00' if you run 'ip a'), you can fix this to avoid collisions if you install multiple P550s on the same network. Connect the Debug USB-C connector on the back to one of the host's USB-A slots. Use minicom (Ctrl-A X to exit) to talk to it.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
- For reference, if you wish to interact with the MCU you may do that via OpenOCD and telnet, like the following (as root on the P550). You need to have the Debug USB-C connected to a USB-A host port.
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
- Reboot the machine and log in remotely from your laptop. Gain root, set up SSH public-key authentication, and disable SSH password logins.
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
- With an NVMe device in the PCIe slot, create an LVM partition where the GitLab runner will live:
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr
Now with a reasonable setup ready, let's install the GitLab Runner. The following is adapted from gitlab-runner's official installation documentation. The normal installation flow doesn't work because they don't publish riscv64 apt repositories, so you will have to perform upgrades manually.
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb
Remember the NVMe device? Let's not forget to use it, to avoid wear and tear on the internal MMC root disk. Do this now, before any files appear in /home/gitlab-runner, or you will have to move them manually.
gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner
Next, install gitlab-runner and configure it. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project's Settings -> CI/CD -> Runners -> New project runner. I used the tag 'riscv64' and the hostname as the runner description.
gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable
We install Podman and configure gitlab-runner to use it, running as a non-root user.
apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner
You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this?
# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user start podman.socket
loginctl enable-linger gitlab-runner
systemctl status --user podman.socket
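One possible alternative to the SSH round-trip above is machinectl, which allocates a proper PAM/logind session for the user. This is only a sketch I have not verified on the P550 (machinectl comes from the systemd-container package):
apt-get install systemd-container
machinectl shell gitlab-runner@.host /bin/bash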
We modify /etc/gitlab-runner/config.toml as follows; replace 997 with the user id shown by systemctl status above. See the feature flags documentation for more details.
[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
...
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"
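If the user id isn't obvious from the systemctl output above, it can also be read directly:
id -u gitlab-runner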
Note that, unlike the documentation, I do not add the 'privileged = true' parameter here. I will come back to this later.
Restart the system to confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag, like the following, works properly.
dump-env-details-riscv64:
stage: build
image: riscv64/debian:testing
tags: [ riscv64 ]
script:
- set
Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:
journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service
To stop the graphical environment and disable some unnecessary services, you can use:
systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord
At this point, things were working fine and I was running many successful builds. Now starts the fun part with operational aspects!
I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. You can use the Debian 'aardvark-dns' binary instead.
wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.debian
# My addition: as written above, nothing would remain at the original path,
# so link the Debian binary into place for podman to find:
ln -s aardvark-dns.debian /usr/lib/podman/aardvark-dns
My setup uses podman in rootless mode without passing the --privileged parameter or any --cap-add parameters to add non-default capabilities. This is sufficient for most builds. However, if you try to create a container using buildah from within a job, you may see errors like this:
Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1
According to the GitLab runner security considerations, you should not enable the 'privileged = true' parameter, and the alternative appears to be running Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, as I suppose running as root would too.
[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
[runners.docker]
privileged = true
Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this is running as a non-root user on the machine, so I think it is an improvement compared to using --privileged and also compared to running podman as root.
[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"
privileged = false
cap_add = ["SYS_ADMIN"]
devices = ["/dev/fuse"]
Still, I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing Rootless Podman without the privileged flag that suggests --isolation=chroot, but I have yet to make this work. Suggestions for improvement are welcome.
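My reading of that suggestion, as an untested sketch: select buildah's chroot isolation inside the job, so that fuse-overlayfs and /dev/fuse are not needed (BUILDAH_ISOLATION is a documented buildah environment variable; the image name is illustrative):
# inside the CI job's script section
export BUILDAH_ISOLATION=chroot
buildah build -t testimage .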
Happy Riscv64 Building!
25 Apr 2025 6:30pm GMT
Ian Wienand: Avoiding layer shift on Ender V3 KE after pause
With (at least) the V1.1.0.15 firmware on the Ender V3 KE 3D printer, the PAUSE macro will cause the print head to run too far on the Y axis, which causes a small layer shift when the print resumes. I guess the idea is to expose the build plate as much as possible by moving the head as far to the side and back as possible, but the overrun and consequent belt slip unfortunately make it mostly useless; the main use of this is probably switching filaments for two-colour prints.
Luckily you can fairly easily enable root access on the control pad from the settings menu. After doing this you can ssh to its IP address with the default password Creality2023.
From there you can modify the /usr/data/printer_data/config/gcode_macro.cfg file (vi is available) to change the details of the PAUSE macro. Find the section [gcode_macro PAUSE] and modify {% set y_park = 255 %} to a more reasonable value like 150. Save the file and reboot the pad so the printing daemons restart.
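For reference, the relevant excerpt then looks like this (only the y_park value changes; the rest of the macro stays as shipped):
[gcode_macro PAUSE]
...
{% set y_park = 150 %}
...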
On PAUSE this then moves the head to the far left, about half-way down, which works fine for filament changes. Hopefully a future firmware version will fix this; I will update this post if I find it does.
cf. Ender 3 V3 KE shifting layers after pause
25 Apr 2025 11:30am GMT
Bits from Debian: Debian Project Leader election 2025 is over, Andreas Tille re-elected!
The voting period and tally of votes for the Debian Project Leader election have just concluded, and the winner is Andreas Tille, who has been elected for the second time. Congratulations!
Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method.
More information about the result is available in the Debian Project Leader Elections 2025 page.
Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting.
The new term for the project leader started on April 21st and will expire on April 20th 2026.
25 Apr 2025 10:05am GMT
24 Apr 2025
Planet Debian
Dirk Eddelbuettel: RQuantLib 0.4.26 on CRAN: Small Updates
A new minor release 0.4.26 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too.
QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN.
This release of RQuantLib brings updated Windows build support taking advantage of updated Rtools, thanks to a PR by Tomas Kalibera. We also updated expected results for three of the 'schedule' tests (in a way that is dependent on the upstream library version) as the just-released QuantLib 1.38 differs slightly.
Changes in RQuantLib version 0.4.26 (2025-04-24)
Use system QuantLib (if found by pkg-config) on Windows too (Tomas Kalibera in #192)
Accommodate same test changes for schedules in QuantLib 1.38
Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
24 Apr 2025 10:27pm GMT
Jonathan McDowell: Local Voice Assistant Step 1: An ATOM Echo voice satellite
Back when I set up my home automation I ended up with one piece that used an external service: Amazon Alexa. I'd rather not have done this, but voice control is extremely convenient, both for us, and guests. Since then Home Assistant has done a lot of work in developing the capability of a local voice assistant - 2023 was their Year of Voice. I've had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact a lot of the setup instructions were just "Download our prebuilt components". While I admire the efforts to get Home Assistant fully packaged for Debian I accept that's a tricky proposition, and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have "voice satellites" in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.
This is the start of a write-up of that. I'll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let's start with some requirements:
- All local processing; no call-outs to external services
- Ability to have multiple voice satellites in the house
- A desire to do wake word detection on the satellites, to avoid lots of network audio traffic all the time
- As clean an install on a Debian stable based system as possible
- Binaries built locally
- No need for a GPU
My house server is an AMD Ryzen 7 5700G, so my expectation was that I'd have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I'm still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I'll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I'm missing.
Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don't care about what's under the hood, there are a bunch of more user friendly details on Home Assistant's site itself, and they have pre-built images you can just deploy.
My first step was sorting out a "voice satellite". This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I'd seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk I hadn't got around to setting up.
Here, I'm skipping a deep dive into exactly what's going on under the hood, even though we're compiling locally. This is a constrained embedded device, and while I'm familiar with the ESP32 IDF build system, I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to "Hey Jarvis" rather than the default "Okay Nabu", and that was a good reason to bother doing a local build. We'll get into actually building a voice satellite on Debian in later posts.
I started with the default upstream assistant config and tweaked it a little for my setup:
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml 2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml 2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
substitutions:
- name: m5stack-atom-echo
+ name: study-atom-echo
friendly_name: M5Stack Atom Echo
- micro_wake_word_model: okay_nabu # alexa, hey_jarvis, hey_mycroft are also supported
+ micro_wake_word_model: hey_jarvis # alexa, hey_jarvis, hey_mycroft are also supported
esphome:
name: ${name}
@@ -16,15 +16,26 @@
version: 4.4.8
platform_version: 5.4.0
+# Enable logging
logger:
+
+# Enable Home Assistant API
api:
+ encryption:
+ key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
ota:
- platform: esphome
- id: ota_esphome
+ password: "itsnotarealthing"
wifi:
+ ssid: "My Wifi Goes Here"
+ password: "AndThePasswordGoesHere"
+
+ # Enable fallback hotspot (captive portal) in case wifi connection fails
ap:
+ ssid: "Study-Atom-Echo Fallback Hotspot"
+ password: "ThisIsRandom"
captive_portal:
(I note that the current upstream config has moved on a bit since I first did this, but I double-checked that the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below due to that.)
It turns out to be fairly easy to set up ESPHome in a venv and get it to build + flash the image for you:
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ pip install esphome==2024.12.4
Collecting esphome==2024.12.4
Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
…
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM: [= ] 10.6% (used 34632 bytes from 327680 bytes)
Flash: [======== ] 79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.
And then you can watch it boot (this is mine already configured up in Home Assistant):
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed : 40MHz
I (40) boot.esp32: SPI Mode : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label Usage Type ST Offset Length
I (66) boot: 0 otadata OTA data 01 00 00009000 00002000
I (73) boot: 1 phy_init RF data 01 01 0000b000 00001000
I (81) boot: 2 app0 OTA app 00 10 00010000 001c0000
I (88) boot: 3 app1 OTA app 00 11 001d0000 001c0000
I (96) boot: 4 nvs WiFi data 01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name: study-atom-echo
I (705) cpu_start: App version: 2024.12.4
I (710) cpu_start: Compile time: Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256: 1db4989a56c6c930...
I (722) cpu_start: ESP-IDF: 4.4.8
I (727) cpu_start: Min chip rev: v0.0
I (732) cpu_start: Max chip rev: v3.99
I (737) cpu_start: Chip rev: v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0
[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]: Color mode: RGB
[D][template.switch:046]: Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]: State: ON
[D][light:051]: Brightness: 60%
[D][light:059]: Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]: Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]:
[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set
[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE
[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]:
[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6
[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32
[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6
[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6
[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760
[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760
[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440
[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled
[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled
[C][wifi:061]: Starting WiFi...
[C][wifi:062]: Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06
[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]:
[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1
[D][esp-idf:000][wifi]:
[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1
[D][esp-idf:000][wifi]:
[W][component:157]: Component wifi set Warning flag: scanning for networks
…
[I][wifi:617]: WiFi Connected!
…
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
…
[C][logger:185]: Logger:
[C][logger:186]: Level: DEBUG
[C][logger:188]: Log Baud Rate: 115200
[C][logger:189]: Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]: Pin: 27
[C][esp32_rmt_led_strip:189]: Channel: 0
[C][esp32_rmt_led_strip:214]: RGB Order: GRB
[C][esp32_rmt_led_strip:215]: Max refresh rate: 0
[C][esp32_rmt_led_strip:216]: Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]: Update Interval: 60.0s
[C][template.select:069]: Optimistic: YES
[C][template.select:070]: Initial Option: On device
[C][template.select:071]: Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]: Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]: Default Transition Length: 0.0s
[C][light:095]: Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]: Restore Mode: restore defaults to ON
[C][template.switch:057]: Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]: Restore Mode: always OFF
[C][template.switch:057]: Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]: Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]: Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]: Address: study-atom-echo.local:3232
[C][esphome.ota:075]: Version: 2
[C][esphome.ota:078]: Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]: Boot considered successful after 60 seconds
[C][safe_mode:021]: Invoke after 10 boot attempts
[C][safe_mode:023]: Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]: Address: study-atom-echo.local:6053
[C][api:143]: Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]: models:
[C][micro_wake_word:015]: - Wake Word: Hey Jarvis
[C][micro_wake_word:016]: Probability cutoff: 0.970
[C][micro_wake_word:017]: Sliding window size: 5
[C][micro_wake_word:021]: - VAD Model
[C][micro_wake_word:022]: Probability cutoff: 0.500
[C][micro_wake_word:023]: Sliding window size: 5
[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4
[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD
That's enough to get a voice satellite that can be configured up in Home Assistant; you'll need the ESPHome Integration added, and then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously generate your own; I used dd if=/dev/urandom bs=32 count=1 | base64 to generate mine).
If you're like me, a compulsive VLANer and firewaller even within your own network, then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it'll send audio from that port to Home Assistant, then receive audio back on the same port).
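As a rough sketch of what that can look like with nftables (rule syntax only; it assumes an existing inet filter table with a forward chain, 192.168.39.6 is the Home Assistant host seen in the log above, and 192.168.39.40 is a made-up address for the Echo):
# Home Assistant -> Echo: ESPHome API on TCP 6053
nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.40 tcp dport 6053 accept
# Audio stream on UDP 6055, both directions
nft add rule inet filter forward ip saddr 192.168.39.40 udp sport 6055 accept
nft add rule inet filter forward ip daddr 192.168.39.40 udp dport 6055 accept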
At this point you can now shout "Hey Jarvis, what time is it?" at the Echo, and the white light will start flashing blue (indicating it's heard the wake word). Which means we're ready to teach Home Assistant how to do something with the incoming audio.
24 Apr 2025 6:34pm GMT
23 Apr 2025
Planet Debian
Dirk Eddelbuettel: qlcal 0.0.15 on CRAN: Calendar Updates
The fifteenth release of the qlcal package arrived at CRAN today, following the QuantLib 1.38 release this morning.
qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.
This releases synchronizes qlcal with the QuantLib release 1.38.
Changes in version 0.0.15 (2025-04-23)
Synchronized with QuantLib 1.38 released today
Calendar updates for China, Hongkong, Thailand
Minor continuous integration update
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
23 Apr 2025 6:12pm GMT
Thomas Lange: FAI 6.4 and new ISO images available
The new FAI release 6.4 comes with some nice new features.
It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.
The package_config configurations now support arbitrary boolean expressions with FAI classes like this:
PACKAGES install UBUNTU && XORG && ! MINT
If you use the command ifclass in customization scripts, you can now also use these expressions.
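As a minimal sketch of what that could look like in a customization script, assuming ifclass accepts the whole expression as a single quoted argument (I have not verified the exact FAI 6.4 syntax):
#!/bin/bash
if ifclass 'UBUNTU && XORG && ! MINT'; then
    echo "Ubuntu with Xorg, but not Mint"
fi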
The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are saved across reboots.
For the installation of Rocky Linux and AlmaLinux in a UEFI environment, some configuration files were added.
New ISO images [2] are available, but it may take some time until the FAIme service [3] supports customized Linux Mint images.
- [1]: https://fai-project.org/fai-cd/faicd64-linuxmint-only_6.4.iso
- [2]: https://fai-project.org/fai-cd/
- [3]: https://fai-project.org/FAIme
23 Apr 2025 1:21pm GMT
Steinar H. Gunderson: Recommended VCL
In line with this bug, and after losing an hour of sleep, here's some VCL that I can readily recommend if you happen to run Varnish:
sub vcl_recv {
    # ...
    if (req.http.user-agent ~ "Scrapy") {
        return (synth(200, "FUCK YOU FUCK YOU FUCK YOU"));
    }
    # ...
}
But hey, we "need to respect the freedom of Scrapy users", that comes before actually not, like, destroying the Internet with AI bots.
23 Apr 2025 11:00am GMT
Michael Prokop: Lessons learned from running an open source project for 20 years @ GLT25
Time flies by so quickly; it's been more than 20 years since I started the Grml project.
I'm giving a (German) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26, at the Grazer Linuxtage (Graz/Austria). Would be great to see you there!
23 Apr 2025 6:11am GMT
Russell Coker: Last Post About the Yoga Gen3
Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. It is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to "code 0284 TCG-compliant functionality-related error", which suggests a motherboard problem. So I bought a new motherboard.
The system still crashes with the new motherboard. It seems to only crash when on battery, which indicates that a power issue might be causing the crashes. I configured the BIOS to disable the TPM, which avoided the TCG messages and tunes on boot, but it still crashes.
An additional problem is that the Yoga series is designed so that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don't retract, which means that they will damage the screen more when the lid is closed (the screen was already damaged by the keys when I bought it).
I think that spending more money on trying to fix this would be a waste. So I'll use it as a test machine, and I might give it to a relative who needs a portable computer that will be used on mains power only.
For the moment I'm back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don't notice any difference from the Yoga Gen 3.
Now I'm considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there's only one on eBay Australia, for $1200 ONO.
- [1] https://etbe.coker.com.au/2024/11/02/more-about-yoga-gen3/
- [2] https://etbe.coker.com.au/2022/12/08/thinkpad-x1-carbon-gen5/
23 Apr 2025 5:11am GMT
Dirk Eddelbuettel: RInside 0.2.19 on CRAN: Mostly Maintenance
A new release 0.2.19 of RInside arrived on CRAN and in Debian today. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.
This release fixes a minor bug that got tickled (after a decade and a half of RInside) by environment variables (which we parse at compile time and encode in a C/C++ header file as constants) built using double quotes. CRAN currently needs that on one or two platforms, and RInside was erroring. This has been addressed. In the two years since the last release we also received two kind PRs updating the Qt examples to Qt6. And as always we also updated a few other things around the package.
The list of changes since the last release:
Changes in RInside version 0.2.19 (2025-04-22)
The qt example now supports Qt6 (Joris Goosen in #54 closing #53)
CMake support was refined for more recent versions (Joris Goosen in #55)
The sandboxed-server example now states more clearly that RINSIDE_CALLBACKS needs to be defined
More routine updates to package and continuous integration
Some now-obsolete checks for C++11 have been removed
When parsing environment variables, use of double quotes is now supported
My CRANberries also provide a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
23 Apr 2025 12:40am GMT
22 Apr 2025
Planet Debian
Melissa Wen: 2025 FOSDEM: Don't let your motivation go, save time with kworkflow
2025 was my first year at FOSDEM, and I can say it was an incredible experience where I met many colleagues from Igalia who live around the world, and also many friends from the Linux display stack who are part of my daily work and contributions to DRM/KMS. In addition, I met new faces and recognized others with whom I had interacted on some online forums, and we had good, long conversations.
During FOSDEM 2025 I had the opportunity to present about kworkflow in the kernel devroom. Kworkflow is a set of tools that help kernel developers with their routine tasks and it is the tool I use for my development tasks. In short, every contribution I make to the Linux kernel is assisted by kworkflow.
The goal of my presentation was to spread the word about kworkflow. I aimed to show how the suite consolidates good practices and recommendations of the kernel workflow in short commands. These commands are easily configurable and memorized for your current work setup, or for your multiple setups.
For me, Kworkflow is a tool that accommodates the needs of different agents in the Linux kernel community. Active developers and maintainers are the main target audience for kworkflow, but it is also inviting for users and user-space developers who just want to report a problem and validate a solution without needing to know every detail of the kernel development workflow.
Something I didn't emphasize during the presentation but would like to correct here is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. To be honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.
So, the video and slide deck of my FOSDEM presentation are available for download here.
And, as usual, in this blog post you will find the script of this presentation and a more detailed explanation of the demo presented there.
Kworkflow at FOSDEM 2025: Speaker Notes and Demo
Hi, I'm Melissa, a GPU kernel driver developer at Igalia, and today I'll be giving a very inclusive talk about not letting your motivation go, by saving time with kworkflow.
So, you're a kernel developer, or you want to be a kernel developer, or you don't want to be a kernel developer. But you're all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.
And that's a given change for a given distribution, or for a given device, or for a given subsystem…
Look at this diagram and try to figure out the number of subsystems and related work trees you can handle in the kernel.
So, whether you are a kernel developer or not, at some point you may come across this type of situation:
There is a userspace developer who wants to report a kernel issue and says:
- Oh, there is a problem in your driver that can only be reproduced by running this specific distribution. And the kernel developer asks:
- Oh, have you checked if this issue is still present in the latest kernel version of this branch?
But the userspace developer has never compiled and installed a custom kernel before. So they have to read a lot of tutorials and kernel documentation to create a kernel compilation and deployment script. Finally, the reporter managed to compile and deploy a custom kernel and reports:
- Sorry for the delay, this is the first time I have installed a custom kernel. I am not sure if I did it right, but the issue is still present in the kernel of the branch you pointed out.
And then the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they end up creating yet another script, effectively the same one the reporter already created.
What's the problem with this situation? The problem is that you keep creating new scripts!
Every time you change distribution, architecture, hardware, or project - even in the same company, the development setup may change when you switch to a different project - you create another script for your new kernel development workflow!
You know, you have a lot of babies, you have a collection of "my precious scripts", like Sméagol (Lord of the Rings) with the precious ring.
Instead of creating and accumulating scripts, save yourself time with kworkflow. Here is a typical script that many of you may have: a Raspberry Pi 4 script containing everything you need to memorize to compile and deploy a kernel on your Raspberry Pi 4.
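For illustration, here is a hedged reconstruction of the kind of one-off script being described; the cross-compiler, paths and hostname are placeholders of mine, not from the talk:
#!/bin/bash
# ad-hoc Raspberry Pi 4 kernel build-and-deploy script
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules dtbs
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=./mods modules_install
scp arch/arm64/boot/Image root@raspberrypi.local:/boot/firmware/kernel-custom.img
rsync -a ./mods/lib/modules/ root@raspberrypi.local:/lib/modules/
ssh root@raspberrypi.local reboot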
With kworkflow, you only need to memorize two commands, and those commands are not specific to Raspberry Pi. They are the same commands to different architecture, kernel configuration, target device.
What is kworkflow?
Kworkflow is a collection of tools and software combined to:
- Optimize Linux kernel development workflow.
- Reduce time spent on repetitive tasks, since we are spending our lives compiling kernels.
- Standardize best practices.
- Ensure reliable data exchange across the kernel workflow. For example: two people describe the same setup but are not seeing the same thing; kworkflow can ensure both are actually running the same kernel, modules and enabled options.
I don't know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.
What are the main features of kworkflow?
There are many, but these are the most important for me:
- Build & deploy custom kernels across devices & distros.
- Handle cross-compilation seamlessly.
- Manage multiple architecture, settings and target devices in the same work tree.
- Organize kernel configuration files.
- Facilitate remote debugging & code inspection.
- Standardize Linux kernel patch submission guidelines. You don't need to double-check the documentation, nor does Greg need to tell you that you are not following the Linux kernel guidelines.
- Upcoming: Interface to bookmark, apply and "reviewed-by" patches from mailing lists (lore.kernel.org).
This is the list of commands you can run with kworkflow. The first subset is to configure your tool for various situations you may face in your daily tasks.
# Manage kw and kw configurations
kw init - Initialize kw config file
kw self-update (u) - Update kw
kw config (g) - Manage kw configuration
The second subset is to build and deploy custom kernels.
# Build & Deploy custom kernels
kw kernel-config-manager (k) - Manage kernel .config files
kw build (b) - Build kernel
kw deploy (d) - Deploy kernel image (local/remote)
kw bd - Build and deploy kernel
We have some tools to manage and interact with target machines.
# Manage and interact with target machines
kw ssh (s) - SSH support
kw remote (r) - Manage machines available via ssh
kw vm - QEMU support
To inspect and debug a kernel.
# Inspect and debug
kw device - Show basic hardware information
kw explore (e) - Explore string patterns in the work tree and git logs
kw debug - Linux kernel debug utilities
kw drm - Set of commands to work with DRM drivers
To automate best practices for patch submission, like code style, maintainers, and the correct list of recipients and mailing lists for a change, ensuring we send the patch to those who are interested in it.
# Automatize best practices for patch submission
kw codestyle (c) - Check code style
kw maintainers (m) - Get maintainers/mailing list
kw send-patch - Send patches via email
And the last one, the upcoming patch hub.
# Upcoming
kw patch-hub - Interact with patches (lore.kernel.org)
How can you save time with Kworkflow?
So how can you save time building and deploying a custom kernel?
First, you need a .config file.
- Without kworkflow: You may be manually extracting and managing .config files from different targets, saving them with different suffixes to tie each kernel to its target device or distribution (or any descriptive suffix to help identify which is which), or even copying and pasting from somewhere.
- With kworkflow: you can use the kernel-config-manager command, or simply kw k, to store, describe and retrieve a specific .config file very easily, according to your current needs.
Then you want to build the kernel:
- Without kworkflow: You are probably now memorizing a combination of commands and options.
- With kworkflow: you just need kw b (kw build) to build the kernel with the correct settings for cross-compilation, compilation warnings, cflags, etc. It also shows some information about the kernel, like the number of modules.
Finally, to deploy the kernel in a target machine.
- Without kworkflow: You might be doing things like: SSH connecting to the remote machine, copying and removing files according to distributions and architecture, and manually updating the bootloader for the target distribution.
- With kworkflow: you just need kw d, which does a lot of things for you, like: deploying the kernel, preparing the target machine for the new installation, listing available kernels and uninstalling them, creating a tarball, rebooting the machine after deploying the kernel, etc.
You can also save time on debugging kernels locally or remotely.
- Without kworkflow: you do ssh, manual setup and trace enablement, and copy & paste logs.
- With kworkflow: more straightforward access to debug utilities: events, trace, dmesg.
You can save time on managing multiple kernel images in the same work tree.
- Without kworkflow: you may be cloning the same repository multiple times so you don't lose compiled files when changing kernel configuration or compilation options, and manually managing build and deployment scripts.
- With kworkflow: you can use kw env to isolate multiple contexts in the same worktree as environments, so you can keep different configurations in the same worktree and switch between them easily without losing anything from the last time you worked in a specific context.
Finally, you can save time when submitting kernel patches. In kworkflow, you can find everything you need to wrap your changes in patch format and submit them to the right list of recipients, those who can review, comment on, and accept your changes.
This is a demo that the lead developer of the kw patch-hub feature sent me. With this feature, you will be able to check out a series from a specific mailing list, bookmark those patches in the kernel tree for validation, and, when you are satisfied with the proposed changes, automatically submit a Reviewed-by for the whole series to the mailing list.
Demo
Now, a demo of how to use kw environments to deal with different devices, architectures and distributions in the same work tree, without losing compiled files, build and deploy settings, the .config file, remote access configuration and other settings specific to the three devices that I have.
Setup
- Three devices:
  - laptop (Debian | x86 | Intel | local)
  - SteamDeck (SteamOS | x86 | AMD | remote)
  - RaspberryPi 4 (Raspbian | arm64 | Broadcom | remote)
- Goal: To validate a change on DRM/VKMS using a single kernel tree.
- Kworkflow commands:
- kw env
- kw d
- kw bd
- kw device
- kw debug
- kw drm
Demo script
In the same terminal and worktree.
First target device: Laptop (debian|x86|intel|local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the environment of local machine (laptop) to use: loading pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: RaspberryPi 4 (raspbian|arm64|broadcomm|remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system to not load graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos|x86|amd|remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # deploy and install custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after build and deploy the kernel change
Q&A
Most of the questions raised at the end of the presentation were actually suggestions and requests for new kworkflow features.
The first participant, who is also a kernel maintainer, asked about two features: (1) automating the retrieval of patches from patchwork (or lore) and triggering the process of building, deploying and validating them using the existing workflow, and (2) bisecting support. They are both very interesting features. The first one fits well in the patch-hub subproject, which is under development, and I actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.
Another request was to use kexec and avoid rebooting the kernel for testing. Reviewing my presentation, I realized I wasn't very clear that kworkflow doesn't support kexec. As I replied, what it does is install the modules, which you can load/unload for validation; for built-in parts, you need to reboot into the new kernel.
Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about supporting alternative ways of booting when the custom kernel ends up broken but you only have one kernel image on the device. Kworkflow doesn't manage this yet, but I agree it is a very useful feature for embedded devices. On the Raspberry Pi 4, kworkflow mitigates the issue by preserving the distro kernel image and using the config.txt file to select a custom kernel for booting. For ADB there is no support either, and as I don't currently see KW users working on Android, I don't think we will have this support any time soon, unless we find new volunteers and grow the pool of contributors.
The last two questions were about the status of the b4 integration, which is under development, and about other debugging features that the tool doesn't support yet.
Finally, when Andrea and I were swapping turns on the stage, he suggested adding support for virtme-ng to kworkflow, so I opened an issue to track this feature request in the project's GitHub.
With all these questions and requests, I could see the general need for a tool that integrates the variety of kernel developer workflows, as kworkflow proposes. There are also still many cases for kworkflow to cover.
Despite the high demand, this is a completely volunteer-run project, and it is unlikely that we will be able to meet all these needs with our limited resources. We will keep doing our best, in the hope of growing the pool of users and contributors too.
22 Apr 2025 7:30pm GMT
Joey Hess: offgrid electric car
Eight months ago I came up my rocky driveway in an electric car, with the back full of solar panel mounting rails. I didn't know how I'd manage to keep it charged. I got the car earlier than planned, with my offgrid solar upgrade only beginning. There's no nearby EV charger, and winter was coming, bringing less solar power every day. Still, it was the right time to take the leap into offgrid EV life.
My existing 1 kilowatt solar array could charge the car only 5 miles on a good day. Here's my first try at charging the car offgrid:
It was not worth charging the car that way: the house battery tended to get drained in the process, and adding cycles to that battery is not desirable. So that was only a proof of concept; I knew I'd need to upgrade.
My goal with the upgrade was to charge the car directly from the sun, even when it was cloudy, using the house battery only to skate over brief darker periods (like a thunderstorm). By mid October, I had enough solar installed to do that (5 kilowatts).
Using this, in 2 days I charged the car up from 57% to 82%, and took off on a celebratory road trip to Niagara Falls, where I charged the car with hydro power from a dam my grandfather had engineered.
When I got home, it was November, and the days were getting ever shorter. My solar upgrade was only 1/3rd complete and could charge the car 30-some miles per day, but only on a good day, and the weather was getting worse. I came back with a low state of charge (both car and me), and needed to get back to full in time for my Thanksgiving trip at the end of the month. I decided to limit my trips to town.
This kind of medium term planning about car travel was new to me. But not too unusual for offgrid living. You look at the weather forecast and make some rough plans, and get to feel connected to the natural world a bit more.
December is the real test for offgrid solar, and honestly this was a bit rough, with a road trip planned for the end of the month. I did the usual holiday stuff but otherwise holed up at home a bit more than I usually would. Charging was limited, and the cold made it less efficient.
Still, I was busy installing more solar panels, and by winter solstice, was back to charging 30 miles on a good day.
Of course, from there on out things improved. In January and February I was able to charge up easily enough for my usual trips despite the cold. By March the car was often getting full before I needed to go anywhere, and I was doing long round trips without bothering to fast charge along the way, coming home low, knowing even cloudy days would let it charge up enough.
That brings me up to today. The car is 80% full and heading up toward 100% for a long trip on Friday. Despite the sky being milky white today with no visible sun, there's plenty of power to absorb, and the car charger turned on at 11 am with the house battery already full.
My solar upgrade is only 2/3rds complete, and I have not yet installed my inverter upgrade either, so the car can currently only charge at 9 amps despite much more solar power often being available. So I'm looking forward to how next December goes with my full planned solar array and faster charging.
But first, a summer where I expect the car will mostly be charged up and ready to go at all times, and the only car expense will be fast charging on road trips!
By the way, the code I've written to automate offgrid charging that runs only when there's enough solar power is here.
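The idea boils down to a simple threshold loop. A minimal sketch of it, with hypothetical solar-watts and charger-ctl helpers standing in for the real interfaces (neither is from my actual code):
#!/bin/sh
# Sketch: enable the EV charger only when the solar array has spare power.
THRESHOLD=2000 # watts of surplus solar required before charging
while true; do
    watts=$(solar-watts) # hypothetical: current surplus solar production
    if [ "$watts" -ge "$THRESHOLD" ]; then
        charger-ctl on # hypothetical: switch the EVSE on
    else
        charger-ctl off
    fi
    sleep 300 # re-check every 5 minutes
done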
And here are the charging graphs for the other months. All told, it's charged 475 kWh offgrid, enough to drive more than 1500 miles.
22 Apr 2025 4:45pm GMT
21 Apr 2025
Planet Debian
Gunnar Wolf: Want your title? Here, have some XML!
As it seems ChatGPT would phrase it… Sweet Mother of God!
I received a mail from my University's Scholar Administrative division informing me that my doctoral degree has been granted and issued (yayyyyyy! 👨🎓), and that before the corresponding documents are printed, I should review that all of the information is correct.
Attached to the mail, I found they sent me a very friendly and welcoming XML file, which stated it followed the schema at https://www.siged.sep.gob.mx/titulos/schema.xsd… Wait! There is nothing to be found at that address! Well, never mind, I can make sense of an XML document, right?
Of course, who needs an XSD schema? Everybody can parse through the data in an XML document, right? It took me close to five seconds to spot a minor mistake (in the start and finish dates of my previous degree), about which I mailed the relevant address…
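(Had the XSD actually been published, checking the file would have been a one-liner with libxml2's xmllint; filenames here are hypothetical:)
$ xmllint --noout --schema schema.xsd titulo.xml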
But… what happens if I try to understand the world as seen by the 9.8 out of 10 people getting a title from UNAM, across all of its disciplines (scientific, engineering, humanities…)? Some people will have no clue what to do with an XML file. Fortunately, the mail has a link to a very useful tutorial (roughly translated by myself):
The attached file has an XML extension, so in order to visualize it, you must open it with a text editor such as Notepad or Sublime Text. In case you have any questions on how to open the file, please refer to the following guide: https://www.dgae.unam.mx/guia_abrir_xml.html
Seriously! Asking people getting a title in just about any area of knowledge to… install Sublime Text to validate the content of an XML file (one that includes the oh-so-very-readable signature of some university bureaucrat).
Of course, for many years Mexican people have been getting XML files by mail (for any declared monetary exchange, i.e. buying goods or offering services), but those are always sent together with a rendering of the XML as a personalized PDF. And yes - the PDF is there only to give the human receiving the file an easier time understanding it. Who thought a bare XML was a good idea? 😠
21 Apr 2025 6:33pm GMT
Louis-Philippe Véronneau: One last Bookworm for the road — report from the Montreal 2025 BSP
Hello, hello, hello!
This report for the Bug Squashing Party we held in Montreal on March 28-29th is very late... but better late than never? We're now at our fifth BSP in a row, which is both nice and somewhat terrifying.
Have I really been around for five Debian releases already? Geez...
This year, around 13 different people showed up, including some brand new folks! All in all, we ended up working on 77 bugs, 61 of which have since been closed.
This is somewhat skewed by the large number of Lintian bugs I closed by merging and releasing the very many patches submitted by Maytham Alsudany (hello Maytham!), but that was still work :D
For our past few events, we have been renting a space at Ateliers de la transition socio-écologique. This building used to be a nunnery (thus the huge cross on the top floor), but has since been transformed into a multi-faceted project.
BSPs are great and this one was no exception. You should try to join an upcoming event or to organise one if you can. It is loads of fun and you will be helping the Debian project release its next stable version sooner!
As always, thanks to Debian for granting us a budget for the food and to rent the venue.
Pictures
Here are a bunch of pictures of the BSP, mixed in with some other pictures I took at this venue during a previous event.
21 Apr 2025 5:00am GMT