23 Dec 2025

Planet Grep

Lionel Dricot: Prepare for That Stupid World

Prepare for That Stupid World

You probably heard about the Wall Street Journal story where they had a snack-vending machine run by a chatbot created by Anthropic.

At first glance, it is funny and it looks like journalists doing their job criticising the AI industry. If you are curious, the video is there (requires JS).

But what appears to be journalism is, in fact, pure advertising, for both WSJ and Anthropic. Look at how the WSJ journalists are presented as "world class", how unsubtle the Anthropic guy is when telling them they are the best, and how the journalist blushes at it. If you take the story at face value, you are falling for the trap, which is simple: "AI is not really good yet, but it is funny, so we must improve it."

The first thing that blew my mind was how stupid the whole idea is. Think for one second. One full second. Why would you ever want to add a chatbot to a snack vending machine? The video states it clearly: the vending machine must still be stocked by humans. Customers must still order and take their snacks by themselves. The AI has no value at all.

The automated snack vending machine has been a solved problem for nearly a century. Why would you want to make your vending machine more expensive, more error-prone, more fragile and less efficient for your customers?

What this video is really doing is normalising the fact that "even if it is completely stupid, AI will be everywhere, get used to it!"

The Anthropic guy himself doesn't seem to believe his own lies, to the point of making me uncomfortable. Toward the end, he even tries to warn us: "Claude AI could run your business but you don't want to come one day and see you have been locked out." To which the journalist adds, "Or has ordered 100 PlayStations."

And then he gives up:

"Well, the best you can do is probably prepare for that world."

Still from the video where Anthropic's employee says "probably prepare for that world"

None of the world-class journalists seemed to care. They are probably too badly paid for that. I was astonished to see how proud they were, having spent literally hours chatting with a bot just to get a free coke, even queuing for the privilege of having a free coke. A coke that costs a few minutes of minimum-wage work.

So the whole thing is advertising a world where chatbots will be everywhere and where world-class workers will stand in long queues just to get a free soda.

And the best advice about it is that you should probably prepare for that world.

I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

23 Dec 2025 4:42am GMT

Frederic Descamps: Deploying on OCI with the starter kit – part 6 (GenAI)

In the previous articles [1], [2], [3], [4], [5], we saw how to easily and quickly deploy an application server and a database to OCI. We also noticed that we have multiple programming languages to choose from. In this article, we will see how to use OCI GenAI Service (some are also available with the […]

23 Dec 2025 4:42am GMT

Frank Goossens: Cycling with a smile on the winter solstice

On my little bike ride today, while riding through Borgharen, a walker I will never know gave me a generous smile. Barely a second of connection, and then, shortly afterwards, the sun broke through. Winter solstice on the bike was memorable!


23 Dec 2025 4:42am GMT

22 Dec 2025

Planet Debian

Jonathan McDowell: NanoKVM: I like it

I bought a NanoKVM. I'd heard some of the stories about how terrible it was beforehand, and some I didn't learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let's cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I'm coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from "reputable" vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd's concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You're giving it USB + HDMI access to a host on your network, if you're worried about the microphone then you're concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
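
Such a forward could look something like this (a sketch only; the NanoKVM address and the Pi login are placeholders, not values from this post):

ssh -L 8000:<nanokvm-address>:80 <user>@<raspberry-pi>
# then browse to http://localhost:8000/ and pick MJPEG as the video mode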

IPv6 is enabled in the kernel. My test setup doesn't have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That's recent, but the GitHub releases page lists 2.3.0 as the latest.

Out of the box it's listening on TCP port 80. SSH is not running, but there's a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you'd expect from port 80, is HTTP, but there's a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org services in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).
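
Pointing the device at a single local resolver and making that stick across reboots could therefore be as simple as this (a sketch; the 192.168.0.53 address is a placeholder for your own resolver):

# persists across reboots (the file gets restored from /boot on boot)
echo "nameserver 192.168.0.53" > /boot/resolv.conf
# apply it immediately as well
cp /boot/resolv.conf /etc/resolv.conf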

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I've not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there's an iptables setup (with nftables underneath) that's not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it's not going to try and connect out somewhere it shouldn't.
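
A starting point for that default-deny outbound policy might look something like the following (a sketch only, not the ruleset shipped on the device; it assumes you still want DNS and NTP out, and uses the iptables front end that is already present):

# add the accepts first, so an interactive session isn't cut off mid-way
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT   # DNS
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT  # NTP
# then flip the default policy to drop everything else outbound
iptables -P OUTPUT DROP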

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we're now up to 5.10.247, so it obviously hasn't been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it's not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I'm a little surprised at aircrack (the device has no WiFi and even though I know there's a variant that does, it's not a standard debug tool the way tcpdump is), but there's a copy of GNU Chess in there too, so it's obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I'd have to open up the device to get to UART0. I've not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I'd expect. 1080p HDMI capture was fine. I'd have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There's also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There's also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn't try that. There's plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
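
Getting the image onto the device can be done over SSH once that has been toggled on; something like this should do (the filename, username and address are placeholders):

scp debian-trixie-netinst.iso <user>@<nanokvm>:/data/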

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there are no masquerading rules set up, so this doesn't give the target host access to the "management" LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.
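
If you did want the target host to be able to reach out through the NanoKVM, the usual recipe would be something like this on the device (a sketch; eth0 is an assumed name for the management-side interface, which I haven't verified):

# enable routing between the USB network and the management LAN
echo 1 > /proc/sys/net/ipv4/ip_forward
# hide the target host behind the NanoKVM's own address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE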

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I'd been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I'm happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn't do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I've seen says I've got my money's worth from it.

22 Dec 2025 5:38pm GMT

Hellen Chemtai: Overcoming Challenges in OpenQA Images Testing: My Internship Journey

Hello there 👋. Today will be an in-depth review of my work with the Debian OpenQA images testing team. I will highlight the struggles I have had so far during my Outreachy internship.

The OpenQA images testing team uses OpenQA to automatically install images, e.g. GNOME images. The images are then tested using tests written in Perl. My current tasks include the speech install and capturing all audio. I am also installing the Live GNOME image to Windows using BalenaEtcher and then testing it. A set of similar tasks will also be collaborated on. While working on tasks, I have to go through the guides. I also learn how Perl works so that I can edit and create tests. For every change made, I have to re-run the job in developer mode. I have to create needles that have matches and click coordinates. Here are some of the instances where I have been stuck:

  1. During installation, my job would not process a second HDD I had added. Roland Clobus, one of my mentors from the team, gave me a variable to work with. The solution was adding "NUMDISKS=2" as part of the command (see the sketch after this list).
  2. While working on a file, one of the needles would only work after file edits. Afterwards it would fail to "assert_and_click". What kept bugging me was why it passed the first time and then failed afterwards. The solution was adding a "wait_still_screen" to the code. This ensures any screen changes have loaded before the clicking happens.
  3. I was stuck on finding the keys that would be needed for a context menu. I added "button => 'right'" in the "assert_and_click" code.
  4. Windows 11 installation was constantly failing. Roland pointed out he was working on it, so I had to use Windows 10.
  5. The Windows 10 virtual machine does not connect to the internet because of update restrictions. I had to switch to a Linux virtual machine for a download job.
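
For the NUMDISKS case in point 1, the re-run might be triggered with something like this (a hypothetical sketch: the job id is made up, and I am assuming the job is cloned with openqa-clone-job against the openqa.debian.net instance listed below; extra KEY=VALUE settings are simply appended to the command):

openqa-clone-job --from https://openqa.debian.net 12345 NUMDISKS=2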

When I get stuck, I sometimes seek guidance from the mentors, but I still look for solutions in the documentation. Here is some of the documentation that has helped me get through these challenges.

  1. Installation and test creation guides - https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/tree/debian/documentation . These guides help with installing and with creating tests.
  2. OpenQA official documentation - https://open.qa/docs/ . This documentation is very comprehensive. I used it recently to read about PUBLISH_HDD_n to save the updated version of an HDD_n I am using.
  3. OpenQA test API documentation - https://open.qa/api/testapi/ . This documentation shows me which parameters to use. I have used it recently to find out how to right-click with the mouse and how to send special characters.
  4. OpenQA variables file in GitLab - https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/blob/debian/VARIABLES.md . This has explanations of the most commonly used variables.
  5. OpenQA repository in GitLab - https://salsa.debian.org/qa/openqa/openqa-tests-debian . I go through the Perl tests, understand how they work, and then integrate my tests in a similar manner so that they look uniform.
  6. OpenQA tests - https://openqa.debian.net/tests . I use these tests to find machine settings. I also find test sequences and the assets I would need to create similar tests. I used them recently to look at how graphical login was being implemented, then shutdown.

The list above is the documentation that is supposed to be used for these tests and for finding solutions. If I don't find anything within these, I then ask Roland for help. I also try to go through the autoinst documentation linked from the GitLab README.md file: https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/blob/debian/README.md . It is also comprehensive, but very technical.

In general, I run into challenges, but there is always a way to solve them through the documentation provided. The mentors are also very helpful whenever we hit problems. I have gained team contribution skills, upgraded my git skills, and learned Perl and how to test using OpenQA. I am still polishing how to make my needles better. My progress so far is good. We learn one day at a time.

22 Dec 2025 2:01pm GMT

Emmanuel Kasper: Configuring a mail transfer agent to interact with the Debian bug tracker

Email interface of the Debian bug tracker

The main interface of the Debian bug tracker, at http://bugs.debian.org, is e-mail, and modifications are made to existing bugs by sending an email to an address like 873518@bugs.Debian.org.

The web interface allows you to browse bugs, but any addition to a bug itself will require an email client.

This sounds a bit weird in 2025, as HTTP REST clients with OAuth access tokens are today the norm for command line tools interacting with online resources. However, we should remember that the Debian project goes back to 1993 and that the bug tracker software, debbugs, was released in 1994. REST itself was first introduced in 2000, six years later.

In any case, using an email client to create or modify bug reports is not a bad idea per se:

  • the internet mail protocol, SMTP, is a well-known and standardized protocol defined in an IETF RFC.
  • no need for account creation and authentication: you just need an email address to interact. There is a risk of spam, but in my experience this has been very low. When authentication is needed, Debian Developers sign their work with their private GPG key.
  • you can use the bug tracker using the interface of your choice: webmail, graphical mail clients like Thunderbird or Evolution, text clients like Mutt or Pine, or command line tools like bts.

A system-wide minimal Mail Transfer Agent to send mail

We can configure bts as an SMTP client, with username and password. In SMTP client mode, we would need to enter the SMTP settings from our mail service provider.

The other option is to configure a Mail Transfer Agent (MTA) which provides a system-wide sendmail interface that all command line and automation tools can use to send email. For instance, reportbug and git send-email are able to use the sendmail interface. Why a sendmail interface? Because sendmail used to be the default MTA on Unix back in the day, so many programs that send mail expect something that looks like sendmail locally.

A popular, maintained and packaged minimal MTA is msmtp, which is what we are going to use.

msmtp installation and configuration

Installation is just an apt away:

# apt install msmtp msmtp-mta
# msmtp --version
msmtp version 1.8.23

You can follow this blog post to configure msmtp, including saving your mail account credentials in the Gnome keyring.
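
For reference, a minimal ~/.msmtprc could look something like this (a sketch with placeholder host and account values, not settings from that post; the passwordeval line assumes the password was stored in the keyring with matching attributes):

# msmtp insists on strict permissions: chmod 600 ~/.msmtprc
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log

account        mailbox
host           smtp.example.org
port           587
from           user@example.org
user           user@example.org
# fetch the password from the keyring instead of storing it in clear text
passwordeval   "secret-tool lookup host smtp.example.org service smtp"

account default : mailbox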

Once installed, you can verify that msmtp-mta created a sendmail symlink.

$ ls -l /usr/sbin/sendmail 
lrwxrwxrwx 1 root root 12 16 avril  2025 /usr/sbin/sendmail -> ../bin/msmtp

bts, git-send-email and reportbug will pipe their output to /usr/sbin/sendmail and msmtp will send the email in the background.

Testing with a simple mail client

Debian comes out of the box with a primitive mail client, bsd-mailx, that you can use to test your MTA setup. If you have configured msmtp correctly, you can send an email to yourself using:

$ echo "hello world" | mail -s "my mail subject" user@domain.org

Now you can open bugs for Debian with reportbug, tag them with bts and send git-formatted patches from the command line with git send-email.
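
For instance, adjusting the bug number mentioned at the beginning of this post from the command line could look like this (illustrative commands only; the severity and tag are just examples):

bts severity 873518 normal
bts tags 873518 + patch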

22 Dec 2025 9:09am GMT

18 Dec 2025

Planet Lisp

Eugene Zaikonnikov: Lisp job opening in Bergen, Norway

As a heads-up, my employer now has an opening for a Lisp programmer in the Bergen area. Due to the hands-on nature of developing the distributed hardware product, the position is 100% on-prem.

18 Dec 2025 12:00am GMT

11 Dec 2025

Planet Lisp

Scott L. Burson: FSet v2.1.0 released: Seq improvements

I have just released FSet v2.1.0 (also on GitHub).

This release is mostly to add some performance and functionality improvements for seqs. Briefly:

See the above links for the full release notes.

UPDATE: there's already a v2.1.1; I had forgotten to export the new function char-seq?.

11 Dec 2025 4:01am GMT

09 Dec 2025

FOSDEM 2026

/dev/random and lightning talks

The room formerly known as "Lightning Talks" is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things!

  • /dev/random: 15 minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks
  • New Lightning Talks: a highly condensed batch of 5 minute quick talks in the main auditorium on various FOSS-related subjects!

Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…

09 Dec 2025 11:00pm GMT

04 Dec 2025

Planet Lisp

Tim Bradshaw: Literals and constants in Common Lisp

Or, constantp is not enough.

Because I do a lot of things with Štar, and for other reasons, I spend a fair amount of time writing various compile-time optimizers for things which have the semantics of function calls. You can think of iterator optimizers in Štar as being a bit like compiler macros: the aim is to take a function call form and to turn it, in good cases, into something quicker [1]. One important way of doing this is to be able to detect things which are known at compile-time: constants and literals, for instance.

One of the things this has made clear to me is that, like John Peel, constantp is not enough. Here's an example.

(in-row-major-array a :simple t :element-type 'fixnum) is a function call whose values Štar can use to tell it how to iterate (via row-major-aref) over an array. When used in a for form, its optimizer would like to be able to expand into something involving (declare (type (simple-array fixnum *) ...), so that the details of the array are known to the compiler, which can then generate fast code for row-major-aref. This makes a great deal of difference to performance: array access to simple arrays of known element types is usually much faster than to general arrays.

In order to do this it needs to know two things:

  • that the relevant arguments (:element-type here) are compile-time constants;
  • and what their values are.

You might say, well, that's what constantp is for [2]. It's not: constantp tells you only the first of these, and you need both.

Consider this code, in a file to be compiled:

(defconstant et 'fixnum)

(defun ... ...
  (for ((e (in-array a :element-type et)))
    ...)
  ...)

Now, constantp will tell you that et is indeed a compile-time constant. But it won't tell you its value, and in particular nothing says it needs to be bound at compile-time at all: (symbol-value 'et) may well be an error at compile-time.

constantp is not enough [3]! Instead you need a function that tells you 'yes, this thing is a compile-time constant, and its value is …'. This is what literal does [4]: it conservatively answers the question, and tells you the value if so. In particular, an expression like (literal '(quote fixnum)) will return fixnum, the value, and t to say yes, it is a compile-time constant. It can't do this for things defined with defconstant, and it may miss other cases, but when it says something is a compile-time constant, it is. In particular it works for actual literals (hence its name), and for forms whose macroexpansion is a literal.

That is enough in practice.


  1. Štar's iterator optimizers are not compiler macros, because the code they write is inserted in various places in the iteration construct, but they're doing a similar job: turning a construct involving many function calls into one requiring fewer or no function calls.

  2. And you may ask yourself, "How do I work this?" / And you may ask yourself, "Where is that large automobile?" / And you may tell yourself, "This is not my beautiful house" / And you may tell yourself, "This is not my beautiful wife"

  3. Here's something that started as a mail message which tries to explain this in some more detail. In the case of variables, defconstant is required to tell constantp that a variable is a constant at compile-time but is not required (and should not be required) to evaluate the initform, let alone actually establish a binding at that time. In SBCL it does both (SBCL doesn't really have a compilation environment). In LW, say, it at least does not establish a binding, because LW does have a compilation environment. That means that in LW compiling a file has fewer compile-time side-effects than it does in SBCL. Outside of variables, it's easily possible that a compiler might be smart enough to know that, given (defun c (n) (+ n 15)), then (constantp '(c 1) <compilation environment>) is true. But you can't evaluate (c 1) at compile-time at all. constantp tells you that you don't need to bind variables to prevent multiple evaluation; it doesn't, and can't, tell you what their values will be.

  4. Part of the org.tfeb.star/utilities package.

04 Dec 2025 4:23pm GMT

15 Nov 2025

FOSDEM 2026

FOSDEM 2026 Accepted Stands

With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibar ERP CRM + Odoo Community Association (OCA) Dronecode Foundation + The Zephyr Project Eclipse Foundation F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework…

15 Nov 2025 11:00pm GMT

13 Nov 2025

FOSDEM 2026

FOSDEM 2026 Main Track Deadline Reminder

Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than it usually is (16th November, that's in a couple of days!), so don't be caught out. For full details on submission information, look at the original call for participation.

13 Nov 2025 11:00pm GMT