25 Dec 2025
Planet Grep
Frederic Descamps: Deploying on OCI with the starter kit – part 8 (using MySQL REST Service)
The starter kit deploys a MySQL HeatWave DB System on OCI and enables the MySQL REST Service automatically. The REST Service enables us to provide access to data without requiring SQL. It also provides access to some Gen AI functionalities available in MySQL HeatWave. Adding data to MRS using Visual Studio Code: To be able […]
25 Dec 2025 1:47am GMT
Dries Buytaert: Christmas lights, powered by Drupal
Drupal-blue LEDs, controllable through a REST API and a Drupal website. Photo by Phil Norton.
It's Christmas Eve, and Phil Norton is controlling his Christmas lights with Drupal. You can visit his site, pick a color, and across the room, a strip of LEDs changes to match. That feels extra magical on Christmas Eve.
I like how straightforward his implementation is. A Drupal form stores the color value using the State API, a REST endpoint exposes that data as JSON, and MicroPython running on a Pimoroni Plasma board polls the endpoint and updates the LEDs.
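The polling half really is that simple. Here's a minimal sketch of the idea in ordinary Python (not Phil's actual code: the endpoint URL, the JSON shape, and the set_led_colour() helper are all placeholders; on the Plasma board the LED call would go through its MicroPython library instead):

import time
import requests  # MicroPython builds typically ship a urequests equivalent

ENDPOINT = "https://example.com/api/lights/colour"  # hypothetical Drupal REST endpoint

def set_led_colour(r, g, b):
    # Placeholder for driving the addressable LED strip.
    print(f"LEDs set to rgb({r}, {g}, {b})")

while True:
    colour = requests.get(ENDPOINT).json()  # e.g. {"r": 0, "g": 118, "b": 194}
    set_led_colour(colour["r"], colour["g"], colour["b"])
    time.sleep(5)  # poll the endpoint every few seconds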
I've gone down the electronics rabbit hole myself with my solar-powered website and basement temperature monitor, both using Drupal as the backend. I didn't do an electronics project in 2025, but this makes me want to do another one in 2026.
I also didn't realize you could buy light strips where each LED can be controlled individually. That alone makes me want to up my Christmas game next year.
But addressable LEDs are useful for more than holiday decorations. You could show how many people are on your site, light up a build as it moves through your CI/CD pipeline, flash on failed logins, or visualize the number of warnings in your Drupal logs. This quickly stops being a holiday decoration and starts looking like a tax-deductible business expense.
Beyond the fun factor, Phil's tutorial does real teaching. It uses Drupal features many of us barely think about anymore: the State API, REST resources, flood protection, even the built-in HTML color field. It's not just a clever demo, but also a solid tutorial.
The Drupal community gets stronger when people share work this clearly and generously. If you've been curious about IoT, this is a great entry point.
Merry Christmas to those celebrating. Go build something that blinks. May your deployments be smooth and your Drupal-powered Christmas lights shine bright.
25 Dec 2025 1:47am GMT
Amedee Van Gasse: sort -u vs sort | uniq: a tiny Linux fork in the road
I recently fell into one of those algorithmic rabbit holes that only the internet can provide. The spark was a YouTube Short by @TechWithHazem: a rapid-fire terminal demo showing a neat little text-processing trick built entirely out of classic Linux tools. No frameworks, no dependencies, just pipes, filters, and decades of accumulated wisdom compressed into under two minutes.
That's the modern paradox of Unix & Linux culture: tools older than many of us are being rediscovered through vertical videos and autoplay feeds. A generation raised on Shorts and Reels is bumping into sort, uniq, and friends, often for the first time, and asking very reasonable questions like: wait, why are there two ways to do this?
So let's talk about one of those deceptively small choices.
The question
What's better?
sort -u
or
sort | uniq
At first glance, they seem equivalent. Both give you sorted, unique lines of text. Both appear in scripts, blog posts, and Stack Overflow answers. Both are "correct".
But Linux has opinions, and those opinions are usually encoded in flags.
The short answer
sort -u is almost always better.
The longer answer is where the interesting bits live.
What actually happens
sort -u tells sort to do two things at once:
- sort the input
- suppress duplicate lines
That's one program, one job, one set of buffers, and one round of temporary files. Fewer processes, less data sloshing around, and fewer opportunities for your CPU to sigh quietly.
By contrast, sort | uniq is a two-step relay race. sort does the sorting, then hands everything to uniq, which removes duplicates - but only if they're adjacent. That adjacency requirement is why the sort is mandatory in the first place.
This pipeline works because Linux tools compose beautifully. But composition has a cost: an extra process, an extra pipe, and extra I/O.
On small inputs, you'll never notice. On large ones, sort -u usually wins on performance and simplicity.
Clarity matters too
There's also a human factor.
When you see sort -u, the intent is explicit: "I want sorted, unique output."
When you see sort | uniq, you have to recall a historical detail: uniq only removes adjacent duplicates.
That knowledge is common among Linux people, but it's not obvious. sort -u encodes the idea directly into the command.
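To make the adjacency detail concrete, here's a tiny Python analogy (made-up input; in a shell you'd obviously just run the commands):

from itertools import groupby

lines = ["apple", "banana", "apple", "apple", "cherry", "banana"]

sort_u    = sorted(set(lines))                      # like: sort -u
uniq_only = [k for k, _ in groupby(lines)]          # like: uniq alone (adjacent duplicates only)
sort_uniq = [k for k, _ in groupby(sorted(lines))]  # like: sort | uniq

print(sort_u)     # ['apple', 'banana', 'cherry']
print(uniq_only)  # ['apple', 'banana', 'apple', 'cherry', 'banana'] -- non-adjacent duplicates survive
print(sort_uniq)  # ['apple', 'banana', 'cherry']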
When uniq still earns its keep
All that said, uniq is not obsolete. It just has a narrower, sharper purpose.
Use sort | uniq when you want things that sort -u cannot do, such as:
- counting duplicates (uniq -c)
- showing only duplicated lines (uniq -d)
- showing only lines that occur once (uniq -u)
In those cases, uniq isn't redundant - it's the point.
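As a rough illustration of what those three switches report, here's the same kind of Python analogy using collections.Counter (again with made-up input; the real tools remain the idiomatic way to do this in a pipeline):

from collections import Counter

lines = ["apple", "banana", "apple", "cherry", "banana"]
counts = Counter(lines)

print(counts.most_common())                       # like: sort | uniq -c (count per line)
print([w for w, n in counts.items() if n > 1])    # like: sort | uniq -d (only duplicated lines)
print([w for w, n in counts.items() if n == 1])   # like: sort | uniq -u (only lines occurring once)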
A small philosophical note
This is one of those Linux moments that looks trivial but teaches a bigger lesson. Linux tools evolve. Sometimes functionality migrates inward, from pipelines into flags, because common patterns deserve first-class support.
sort -u is not "less Linuxy" than sort | uniq. It's Linux noticing a habit and formalizing it.
The shell still lets you build LEGO castles out of pipes. It just also hands you pre-molded bricks when the shape is obvious.
The takeaway
If you just want unique, sorted lines:
sort -u
If you want insight about duplication:
sort | uniq …
Same ecosystem, different intentions.
And yes, it's mildly delightful that a 1'30" YouTube Short can still provoke a discussion about tools designed in the 1970s. The terminal endures. The format changes. The ideas keep resurfacing - sorted, deduplicated, and ready for reuse.
25 Dec 2025 1:47am GMT
24 Dec 2025
Planet Debian
Daniel Lange: Getting scanning to work with Gimp on Trixie

Trixie ships Gimp 3.0.4, and the 3.x series has become incompatible with XSane, the common frontend for scanners on Linux.
Hence the maintainer, Jörg Frings-Fürst, has temporarily disabled the Gimp integration in response to Debian bug #1088080.
There seems to be no tracking bug for getting the functionality back, but people have been commenting on Debian bug #993293 as that is ... loosely related.
There are two options to get the scanning functionality back in Trixie until this is properly resolved by an updated XSane in Debian (e.g. via trixie-backports):
Lee Yingtong Li (RunasSudo) has created a Python script that calls XSane as a CLI application and published it at https://yingtongli.me/git/gimp-xsanecli/. This worked OK-ish for me but required me to go and find the scan in /tmp/ a number of times. It is a good stop-gap script if you need to scan from Gimp $now and are looking for a quick solution.
Upstream has completed the necessary steps to get XSane working as a Gimp 3.x plugin at https://gitlab.com/sane-project/frontend/xsane. Unfortunately compiling this is a bit involved but I made a version that can be dropped into /usr/local/bin or $HOME/bin and works alongside Gimp and the system-installed XSane.
So:
- sudo apt install gimp xsane
- Download xsane-1.0.0-fit-003 (752kB, AMD64 executable for Trixie) and place it in /usr/local/bin (as root)
- sha256sum /usr/local/bin/xsane-1.0.0-fit-003
# result needs to be af04c1a83c41cd2e48e82d04b6017ee0b29d555390ca706e4603378b401e91b2
- sudo chmod +x /usr/local/bin/xsane-1.0.0-fit-003
- Link the executable into the Gimp plugin directory as the user running Gimp:
mkdir -p $HOME/.config/GIMP/3.0/plug-ins/xsane/
ln -s /usr/local/bin/xsane-1.0.0-fit-003 $HOME/.config/GIMP/3.0/plug-ins/xsane/
- Restart Gimp
- Scan from Gimp via File → Create → Acquire → XSane
The source code for the xsane executable above is available under GPL-2 at https://gitlab.com/sane-project/frontend/xsane/-/tree/c5ac0d921606309169067041931e3b0c73436f00. This points to the last upstream commit from 27 September 2025 at the time of writing this blog article.
24 Dec 2025 9:00am GMT
23 Dec 2025
Planet Debian
Jonathan Dowland: Remarkable

My Remarkable tablet, displaying my 2025 planner.
During my PhD, on a sunny summer's day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that, due to the bright sunlight, I couldn't see a damn thing.
In 2021 I decided to take the plunge and buy the Remarkable 2, which was being heavily advertised at the time. Over the next four or so years, I made good use of it: to read papers; to read drafts of my own papers and chapters; to read a small number of technical books; as a daily planner; and to take meeting notes for work, the PhD and, later, personal matters.
I didn't buy the Remarkable stylus or folio cover, instead opting for an (at the time, slightly cheaper) LAMY AL-star EMR and a fantastic fabric sleeve cover from Emmerson Gray.
I installed a hack which let me use the Lamy's button to activate an eraser and also added a bunch of other tweaks. I wouldn't recommend that specific hack anymore as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser).
Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper, but that experience comes with inky fingers, dried-up nibs, and a growing pile of paper notebooks. The Remarkable is very nearly as good without those drawbacks.
Cons: lower contrast than black on white paper, and no built-in illumination. It needs good light to read. Almost the opposite problem to the iPad! I've tried a limited number of external clip-on lights, but nothing is frictionless to use.
The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable's size (just as it is for computer display sizes; really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that's laborious.
The newer model, the Remarkable Paper Pro, might address both of those issues: it's bigger, it has illumination, and it has also added colour, which would be nice to have. It's also a lot more expensive.
I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.
23 Dec 2025 10:58am GMT
Daniel Kahn Gillmor: AI and Secure Messaging Don't Mix

Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.
The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.
In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.
If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.
But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.
Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?
What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?
My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)
But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?
Don't we owe it to each other to engage with actual human attention?
23 Dec 2025 5:00am GMT
18 Dec 2025
Planet Lisp
Eugene Zaikonnikov: Lisp job opening in Bergen, Norway
As a heads-up, my employer now has an opening for a Lisp programmer in the Bergen area. Due to the hands-on nature of developing the distributed hardware product, the position is 100% on-prem.
18 Dec 2025 12:00am GMT
11 Dec 2025
Planet Lisp
Scott L. Burson: FSet v2.1.0 released: Seq improvements
I have just released FSet v2.1.0 (also on GitHub).
This release is mostly to add some performance and functionality improvements for seqs. Briefly:
- Access to and updating of elements at the beginning or end of a long seq is now faster.
- I have finally gotten around to implementing search and mismatch on seqs. NOTE: this may require changes to your package definitions; see below.
- Seqs containing only characters are now treated specially, making them a viable replacement for CL strings in many cases.
- In an FSet 2 context, the seq constructor macros now permit specification of a default.
- There are changes to some convert methods.
- There are a couple more FSet 2 API changes, involving image.
See the above links for the full release notes.
UPDATE: there's already a v2.1.1; I had forgotten to export the new function char-seq?.
11 Dec 2025 4:01am GMT
09 Dec 2025
FOSDEM 2026
/dev/random and lightning talks
The room formerly known as "Lightning Talks" is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things! /dev/random: 15-minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks. New Lightning Talks: a highly condensed batch of 5-minute quick talks in the main auditorium on various FOSS-related subjects! Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…
09 Dec 2025 11:00pm GMT
04 Dec 2025
Planet Lisp
Tim Bradshaw: Literals and constants in Common Lisp
Or, constantp is not enough.
Because I do a lot of things with Štar, and for other reasons, I spend a fair amount of time writing various compile-time optimizers for things which have the semantics of function calls. You can think of iterator optimizers in Štar as being a bit like compiler macros: the aim is to take a function call form and to turn it, in good cases, into something quicker [1]. One important way of doing this is to be able to detect things which are known at compile-time: constants and literals, for instance.
One of the things this has made clear to me is that, like John Peel, constantp is not enough. Here's an example.
(in-row-major-array a :simple t :element-type 'fixnum) is a function call whose values Štar can use to tell it how to iterate (via row-major-aref) over an array. When used in a for form, its optimizer would like to be able to expand into something involving (declare (type (simple-array fixnum *) ...), so that the details of the array are known to the compiler, which can then generate fast code for row-major-aref. This makes a great deal of difference to performance: array access to simple arrays of known element types is usually much faster than to general arrays.
In order to do this it needs to know two things:
- that the values of the simple and element-type keyword arguments are compile-time constants;
- what their values are.
You might say, well, that's what constantp is for [2]. It's not: constantp tells you only the first of these, and you need both.
Consider this code, in a file to be compiled:
(defconstant et 'fixnum)
(defun ... ...
  (for ((e (in-array a :element-type et)))
    ...)
  ...)
Now, constantp will tell you that et is indeed a compile-time constant. But it won't tell you its value, and in particular nothing says it needs to be bound at compile-time at all: (symbol-value 'et) may well be an error at compile-time.
constantp is not enough [3]! Instead you need a function that tells you 'yes, this thing is a compile-time constant, and its value is …'. This is what literal does [4]: it conservatively answers the question, and tells you the value if so. In particular, an expression like (literal '(quote fixnum)) will return fixnum, the value, and t to say yes, it is a compile-time constant. It can't do this for things defined with defconstant, and it may miss other cases, but when it says something is a compile-time constant, it is. In particular it works for actual literals (hence its name), and for forms whose macroexpansion is a literal.
That is enough in practice.
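If it helps to see the contract outside Lisp, here is a rough Python analogy of what literal promises: a conservative verdict plus the value when the verdict is yes. (This is just an illustration of the two-value idea, not the Štar implementation; ast.literal_eval plays the role of the conservative evaluator.)

import ast

def literal_value(source):
    # Return (value, True) if source is a literal we can evaluate right now,
    # otherwise (None, False). Answering False is always safe.
    try:
        return ast.literal_eval(source), True
    except (ValueError, SyntaxError):
        return None, False

print(literal_value("'fixnum'"))       # ('fixnum', True)
print(literal_value("some_variable"))  # (None, False): can't know the value, so decline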
1. Štar's iterator optimizers are not compiler macros, because the code they write is inserted in various places in the iteration construct, but they're doing a similar job: turning a construct involving many function calls into one requiring fewer or no function calls. ↩
2. And you may ask yourself, "How do I work this?" / And you may ask yourself, "Where is that large automobile?" / And you may tell yourself, "This is not my beautiful house" / And you may tell yourself, "This is not my beautiful wife" ↩
3. Here's something that started as a mail message which tries to explain this in some more detail. In the case of variables, defconstant is required to tell constantp that a variable is a constant at compile-time, but is not required (and should not be required) to evaluate the initform, let alone actually establish a binding at that time. In SBCL it does both (SBCL doesn't really have a compilation environment). In LW, say, it at least does not establish a binding, because LW does have a compilation environment. That means that in LW compiling a file has fewer compile-time side-effects than it does in SBCL. Outside of variables, it's easily possible that a compiler might be smart enough to know that, given (defun c (n) (+ n 15)), then (constantp '(c 1) <compilation environment>) is true. But you can't evaluate (c 1) at compile-time at all. constantp tells you that you don't need to bind variables to prevent multiple evaluation; it doesn't, and can't, tell you what their values will be. ↩
4. Part of the org.tfeb.star/utilities package. ↩
04 Dec 2025 4:23pm GMT
15 Nov 2025
FOSDEM 2026
FOSDEM 2026 Accepted Stands
With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibar ERP CRM + Odoo Community Association (OCA) Dronecode Foundation + The Zephyr Project Eclipse Foundation F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework…
15 Nov 2025 11:00pm GMT
13 Nov 2025
FOSDEM 2026
FOSDEM 2026 Main Track Deadline Reminder
Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than it usually is (16th November, that's in a couple of days!), so don't be caught out. For full details on submissions, look at the original call for participation.
13 Nov 2025 11:00pm GMT