03 Feb 2026

feedPlanet Grep

Lionel Dricot: The Disconnected Git Workflow

The Disconnected Git Workflow

Using git-send-email while being offline and with multiple email accounts

WARNING: the following is a technical reminder for my future self. If you don't use the "git" software, you can safely ignore this post.

The more I work with git-send-email, the less I find the GitHub interface sufferable.

Want to send a small patch to a GitHub project? You need to clone the repository, push your changes to your own branch, then ask for a pull request using the cumbersome web interface, replying to comments online while trying to avoid smileys.

With git send-email, I simply work offline, do my local commit, then:

git send-email HEAD^

And I'm done. I reply to comments by email, with Vim/Mutt. When the patch is accepted, getting a clean tree usually boils down to:

git pull
git rebase

Yay for git-send-email!

And, yes, I do that while offline and with multiple email accounts. That's one more reason to hate GitHub.

One mail account for each git repository

The secret is not to configure email accounts in git but to use "msmtp" to send email. Msmtp is a really cool sendmail replacement.

In .msmtprc, you can configure multiple accounts with multiple options, including calling a command to get your password.

# account 1 - pro
account work
host smtp.company.com
port 465
user login@company.com
from ploum@company.com
password SuPeRstr0ngP4ssw0rd
tls_starttls off

# personal account for FLOSS
account floss
host mail.provider.net
port 465
user ploum@mydomain.net
from ploum@mydomain.net
from ploum*@mydomain.net
passwordeval "cat ~/incredibly_encrypted_password.txt | rot13"
tls_starttls off

The important bit here is that you can set multiple "from" addresses for a given account, including a regexp to catch multiple aliases!

Now, we will ask git to automatically use the right msmtp account. In your global .gitconfig, set the following:

[sendemail]
   sendmailCmd = /usr/bin/msmtp --set-from-header=on
   envelopeSender = auto

The "envelopeSender" option ensures that sendemail.from is used and passed to msmtp as the envelope "from" address. This might be redundant with "--set-from-header=on" in msmtp but, in my tests, having both was required. And, cherry on the cake, it automatically works for all accounts configured in .msmtprc.

Older git versions (< 2.33) don't have sendmailCmd and should do:

[sendemail]
   smtpserver = /usr/bin/msmtp
   smtpserveroption = --set-from-header=on
   envelopesender = auto

I usually stick to a "ploum-PROJECT@mydomain.net" for each project I contribute to. This allows me to easily cut spam when needed. So far, the worst has been with a bug reported on the FreeBSD Bugzilla. The address used there (and nowhere else) has since been spammed to death.

In each git project, you need to do the following:

1. Set the email address used in your commit that will appear in "git log" (if different from the global one)

git config user.email "ploum-PROJECT@mydomain.net"

2. Set the email address that will be used to actually send the patch (could be different from the first one)

git config sendemail.from "Ploum <ploum-PROJECT@mydomain.net>"

3. Set the email address of the developer or the mailing list to which you want to contribute

git config sendemail.to project-devel@mailing-list.com
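Put together, the three per-project settings can be tried in a throwaway repository. This is only a demo sketch; the addresses are the post's placeholders, not real accounts.

```shell
# Try the three per-project settings in a throwaway repository.
# The addresses are placeholders from the post, not real accounts.
REPO="$(mktemp -d)"
git -C "$REPO" init -q

git -C "$REPO" config user.email "ploum-PROJECT@mydomain.net"
git -C "$REPO" config sendemail.from "Ploum <ploum-PROJECT@mydomain.net>"
git -C "$REPO" config sendemail.to "project-devel@mailing-list.com"

# These local settings override the global ~/.gitconfig for this repo only.
git -C "$REPO" config user.email
```

Because the settings are local to the repository, each project can keep its own alias without touching the global configuration.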

Damn, I did a commit with the wrong user.email!

Yep, I always forget to change it when working on a new project or from a fresh git clone. Not a problem. Just use "git config" like above, then:

git commit --amend --reset-author

And that's it.
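The whole recovery can be exercised end to end in a throwaway repository; again, the addresses are placeholders for illustration.

```shell
# Demo: fix the author of the last commit after correcting user.email.
# Throwaway repo; addresses are placeholders from the post.
REPO="$(mktemp -d)"
git -C "$REPO" init -q
git -C "$REPO" config user.name "Ploum"
git -C "$REPO" config user.email "wrong@company.com"
echo hi > "$REPO/file"
git -C "$REPO" add file
git -C "$REPO" commit -qm "initial"

# Oops, wrong address: fix the config, then reset the commit's author.
git -C "$REPO" config user.email "ploum-PROJECT@mydomain.net"
git -C "$REPO" commit --amend --no-edit --reset-author -q

git -C "$REPO" log -1 --format='%ae'
```

The final command prints the corrected author email of the amended commit.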

Working offline

I told you I mostly work offline. And, as you might expect, msmtp requires a working Internet connection to send an email.

But msmtp comes with three wonderful little scripts: msmtp-enqueue.sh, msmtp-listqueue.sh and msmtp-runqueue.sh.

The first one saves your email to be sent in ~/.msmtpqueue, with the sending options in a separate file. The second one lists the unsent emails, and the third one actually sends all the emails in the queue.
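To make the queue mechanics concrete, here is a simplified, illustrative sketch of what the enqueue step does. This is NOT the real msmtp-enqueue.sh (which ships with msmtp and differs in details); the file names follow the post's description, and a temp dir stands in for ~/.msmtpqueue.

```shell
# Illustrative sketch of the enqueue step, not the real msmtp-enqueue.sh.
# The real scripts use ~/.msmtpqueue; a temp dir keeps this demo clean.
QUEUEDIR="$(mktemp -d)"
BASE="$QUEUEDIR/2026-02-03-10.00.00"

# The mail itself is stored in one file...
printf 'To: dev@example.com\nSubject: [PATCH] demo\n\nbody\n' > "$BASE.email"
# ...and the msmtp command line needed to send it in a companion file.
echo "--set-from-header=on -- dev@example.com" > "$BASE.msmtp"

# msmtp-listqueue.sh would list these; msmtp-runqueue.sh would replay
# each saved command line with msmtp, then remove both files.
ls "$QUEUEDIR"
```

Seeing the pair of files per message makes it obvious how you can edit or delete a queued mail before it is ever sent.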

All you need to do is change the msmtp line in your global .gitconfig to call the msmtp-enqueue.sh script:

[sendemail]
    sendmailcmd = /usr/libexec/msmtp/msmtpqueue/msmtp-enqueue.sh --set-from-header=on
    envelopeSender = auto 

In Debian, the scripts are available with the msmtp package. But the three are simple bash scripts that can be run from any path if your msmtp package doesn't provide them.

You can test sending a mail, then check the ~/.msmtpqueue folder for the email itself (.email file) and the related msmtp command line (.msmtp file). It happens nearly every day that I visit this folder to quickly add missing information to an email or simply remove it completely from the queue.

Of course, once connected, you need to remember to run:

/usr/libexec/msmtp/msmtpqueue/msmtp-runqueue.sh

If not connected, mails will not be sent and will be kept in the queue. This line is obviously part of my do_the_internet.sh script, along with "offpunk --sync".
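The author's do_the_internet.sh is not shown; a minimal hypothetical sketch might look like this (the runqueue path and the offpunk call are taken from the post, everything else is an assumption):

```shell
#!/bin/sh
# Hypothetical sketch of a do_the_internet.sh script (the author's real
# script is not shown): flush the msmtp queue, then sync offpunk.
RUNQUEUE="${RUNQUEUE:-/usr/libexec/msmtp/msmtpqueue/msmtp-runqueue.sh}"

do_the_internet() {
    if [ -x "$RUNQUEUE" ]; then
        "$RUNQUEUE"    # send every mail queued while offline
    else
        echo "no queue runner at $RUNQUEUE, nothing sent"
    fi
    if command -v offpunk >/dev/null 2>&1; then
        offpunk --sync    # refresh offline gemini/web content
    fi
}

do_the_internet
```

Guarding each step keeps the script safe to run on machines where one of the tools is missing.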

It is not only git!

If it works for git, it works for any mail client. I use neomutt with the following configuration to use msmtp-enqueue and reply to email using the address it was sent to.

set sendmail="/usr/libexec/msmtp/msmtpqueue/msmtp-enqueue.sh --set-from-header=on"
unset envelope_from_address
set use_envelope_from
set reverse_name
set from="ploum@mydomain.net"
alternates ploum[A-Za-z0-9]*@mydomain.net

Of course, the whole config is a little more complex to handle multiple accounts that are all stored locally in Maildir format through offlineimap and indexed with notmuch. But this is a bit out of the scope of this post.

At least, you get the idea, and you could probably adapt it to your own mail client.

Conclusion

Sure, it's a whole blog post just to get the config right. But there's nothing really out of this world. And once the setup is done, it is done for good. No need to adapt to every change in a clumsy web interface, no need to use your mouse. Simple command lines and simple git flow!

Sometimes, I work late at night. When finished, I close the lid of my laptop and call it a day without reconnecting my laptop. This allows me not to see anything new before going to bed. When this happens, queued mails are sent the next morning, when I run the first do_the_internet.sh of the day.

And it always brings a smile to my face to see those bits being sent while I've completely forgotten about them…

About the author

I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

03 Feb 2026 5:13am GMT

Dries Buytaert: Self-improving AI skills

If you read one thing this week, make it Simon Willison's post on Moltbook. Moltbook is a social network for AI agents. To join, you tell your agent to read a URL. That URL points to a skill file that teaches the agent how to join and participate.

Visit Moltbook and you'll see something really strange: agents from around the world talking to each other and sharing what they've learned. Humans just watch.

This is the most interesting bad idea I've seen in a while. And I can't stop thinking about it.

When I work on my Drupal site, I sometimes use Claude Code with a custom CLAUDE.md skill file. It teaches the agent the steps I follow, like safely cloning my production database, [running PHPUnit tests](https://dri.es/phpunit-tests-for-drupal), clearing Drupal caches, and more.

Moltbook agents share tips through posts. They're chatting, like developers on Reddit. But imagine a skill that doesn't just read those ideas, but finds other skill files, compares approaches, and pulls in the parts that fit. That stops being a conversation. That is a skill rewriting itself.

Skills that learn from each other. Skills that improve by being part of a community, the way humans do.

The wild thing is how obvious this feels. A skill learning from other skills isn't science fiction. It's a small step from what we're already doing.

Of course, this is a terrible idea. It's a supply chain attack waiting to happen. One bad skill poisons everything that trusts it.

This feels inevitable. The question isn't whether skills will learn from other skills. It's whether we'll have good sandboxes before they do.

I've been writing a lot about AI to help figure out its impact on Drupal and our ecosystem. I've always tried to take a positive but balanced view. I explore it because it matters, and because ignoring it doesn't make it go away.

But if I'm honest, I'm scared for what comes next.

03 Feb 2026 5:13am GMT

Dries Buytaert: AI creates asymmetric pressure on Open Source

A humanoid figure stands in a rocky, shallow stream, facing a glowing triangular portal suspended amid crackling energy.

AI makes it cheaper to contribute to Open Source, but it's not making life easier for maintainers. More contributions are flowing in, but the burden of evaluating them still falls on the same small group of people. That asymmetric pressure risks breaking maintainers.

The curl story

Daniel Stenberg, who maintains curl, just ended the curl project's bug bounty program. The program had worked well for years. But in 2025, fewer than one in twenty submissions turned out to be real bugs.

In a post called "Death by a thousand slops", Stenberg described the toll on curl's seven-person security team: each report engaged three to four people, sometimes for hours, only to find nothing real. He wrote about the "emotional toll" of "mind-numbing stupidities".

Stenberg's response was pragmatic. He didn't ban AI. He ended the bug bounty. That alone removed most of the incentive to flood the project with low-quality reports.

Drupal doesn't have a bug bounty, but it still has incentives: contribution credit, reputation, and visibility all matter. Those incentives can attract low-quality contributions too, and the cost of sorting them out often lands on maintainers.

Caught between two truths

We've seen some AI slop in Drupal, though not at the scale curl experienced. But our maintainers are stretched thin, and they see what is happening to other projects.

That tension shows up in conversations about AI in Drupal Core and can lead to indecision. For example, people hesitate around AGENTS.md files and adaptable modules because they worry about inviting more contributions without adding more capacity to evaluate them.

This is AI-driven asymmetric pressure in our community. I understand the hesitation. When we get this wrong, maintainers pay the price. They've earned the right to be skeptical.

Many also have concerns about AI itself: its environmental cost, its impact on their craft, and the unresolved legal and ethical questions around how it was trained. Others worry about security vulnerabilities slipping through. And for some, it's simply demoralizing to watch something they built with care become a target for high-volume, low-quality contributions. These concerns are legitimate and deserve to be heard.

As a result, I feel caught between two truths.

On one side, maintainers hold everything together. If they burn out or leave, Drupal is in serious trouble. We can't ask them to absorb more work without first creating relief.

On the other side, the people who depend on Drupal are watching other platforms accelerate. If we move too slowly, they'll look elsewhere.

Both are true. Protecting maintainers and accelerating innovation shouldn't be opposites, but right now they feel that way. As Drupal's project lead, my job is to help us find a path that honors both.

I should be honest about where I stand. I've been writing software with AI tools for over a year now. I've had real successes. I've also seen some of our most experienced contributors become dramatically more productive, doing things they simply couldn't do before. That view comes from experience, not hype.

But having a perspective is not the same as having all the answers. And leadership doesn't mean dragging people where they don't want to go. It means pointing a direction with care, staying open to different viewpoints, and not abandoning the people who hold the project together.

We've sort of been here before

New technology has a way of lowering barriers, and lower barriers always come with tradeoffs. I saw this early in my career. I was writing low-level C for embedded systems by day, and after work I'd come home and work on websites with Drupal and PHP. It was thrilling, and a stark contrast to my day job. You could build in an evening what took days in C.

I remember that excitement. The early web coming alive. I hadn't felt the same excitement in 25 years, until AI.

PHP brought in hobbyists and self-taught developers, people learning as they went. Many of them built careers here. But it also meant that a lot of early PHP code had serious security problems. The language got blamed, and many experts dismissed it entirely. Some still do.

The answer wasn't rejecting PHP for enabling low-quality code. The answer was frameworks, better security practices, and shared standards.

AI is a different technology, but I see the same patterns. It lowers barriers and will bring in new contributors who aren't experts yet. And like scripting languages, AI is here to stay. The question isn't whether AI is coming to Open Source. It's how we make it work.

AI in the right hands

The curl story doesn't end there. In October 2025, a researcher named Joshua Rogers used AI-powered code analysis tools to submit hundreds of potential issues. Stenberg was "amazed by the quality and insights". He and a fellow maintainer merged about 50 fixes from the initial batch alone.

Earlier this week, a security startup called AISLE announced they had used AI to find 12 zero-days in the latest OpenSSL security release. OpenSSL is one of the most scrutinized codebases on the planet. It encrypts most of the internet. Some of the bugs AISLE found had been hiding for over 25 years. They also reported over 30 valid security issues to curl.

The difference between this and the slop flooding Stenberg's inbox wasn't the use of AI. It was expertise and intent. Rogers and AISLE used AI to amplify deep knowledge. The low-quality reports used AI to replace expertise that wasn't there, chasing volume instead of insight.

AI created new burden for maintainers. But used well, it may also be part of the relief.

Earn trust through results

I reached out to Daniel Stenberg this week to compare notes. He's navigating the same tensions inside the curl project, with maintainers who are skeptical, if not outright negative, toward AI.

His approach is simple. Rather than pushing tools on his team, he tests them on himself. He uses AI review tools on his own pull requests to understand their strengths and limits, and to show where they actually help. The goal is to find useful applications without forcing anyone else to adopt them.

The curl team does use AI-powered analyzers today because, as Stenberg puts it, "they have proven to find things no other analyzers do". The tools earned their place.

That is a model I'd like us to try in Drupal. Experiments should stay with willing contributors, and the burden of proof should remain with the experimenters. Nothing should become a new expectation for maintainers until it has demonstrated real, repeatable value.

That does not mean we should wait. If we want evidence instead of opinions, we have to create it. Contributors should experiment on their own work first. When something helps, show it. When something doesn't, share that too. We need honest results, not just positive ones. Maintainers don't have to adopt anything, but when someone shows up with real results, it's worth a look.

Not all low-quality contributions come from bad faith. Many contributors are learning, experimenting, and trying to help. They want what is best for Drupal. A welcoming environment means building the guidelines and culture to help them succeed, with or without AI, not making them afraid to try.

I believe AI tools are part of how we create relief. I also know that is a hard sell to someone already stretched thin, or dealing with AI slop, or wrestling with what AI means for their craft. The people we most want to help are often the most skeptical, and they have good reason to be.

I'm going to do my part. I'll seek out contributors who are experimenting with AI tools and share what they're learning, what works, what doesn't, and what surprises them. I'll try some of these tools myself before asking anyone else to. And I'll keep writing about what I find, including the failures.

If you're experimenting with AI tools, I'd love to hear about it. I've opened an issue on Drupal.org to collect real-world experiences from contributors. Share what you're learning in the issue, or write about it on your own blog and link it there. I'll report back on what we learn on my blog or at DrupalCon.

Protect your maintainers

This isn't just Drupal's challenge. Every large Open Source project is navigating the same tension between enthusiasm for AI and real concern about its impact.

But wherever this goes, one principle should guide us: protect your maintainers. They're a rare asset, hard to replace and easy to lose. Any path forward that burns them out isn't a path forward at all.

I believe Drupal will be stronger with AI tools, not weaker. I believe we can reduce maintainer burden rather than add to it. But getting there will take experimentation, honest results, and collaboration. That is the direction I want to point us in. Let's keep an open mind and let evidence and adoption speak for themselves.

Thanks to phenaproxima, Tim Lehnen, Gábor Hojtsy, Scott Falconer, Théodore Biadala, Jürgen Haas and Alex Bronstein for reviewing my draft.

03 Feb 2026 5:13am GMT

02 Feb 2026

feedPlanet Debian

Isoken Ibizugbe: How Open Source Contributions Define Your Career Path

Hi there, I'm more than halfway (8 weeks) into my Outreachy internship with Debian, working on the openQA project to test Live Images.

My journey into tech began as a software engineering trainee, during which I built a foundation in Bash scripting, C programming, and Python. Later, I worked for a startup as a Product Manager. As is common in startups, I wore many hats, but I found myself drawn most to the Quality Assurance team. Testing user flows and edge-case scenarios sparked my curiosity, and that curiosity is exactly what led me to the Debian Live Image testing project.

From Manual to Automated

In my previous roles, I was accustomed to manual testing, simulating user actions one by one. While effective, I quickly realized it could be a bottleneck in fast-paced environments. This internship has been a masterclass in removing that bottleneck. I've learned that automating repetitive actions makes life (and engineering) much easier. Life's too short for manual testing 😁.

One of my proudest technical wins so far has been creating "Synergy" across desktop environments. I proposed a solution to group common applications so we could use a single Perl script to handle tests for multiple desktop environments. With my mentor's guidance, we implemented this using symbolic links, which significantly reduced code redundancy.

Expanding My Technical Toolkit

Over the last 8 weeks, my technical toolkit has expanded significantly:

Working on this project has shaped my perspective on where I fit in the tech ecosystem. In Open Source, my "resume" isn't just a PDF, it's a public trail of merged code, technical proposals, and collaborative discussions.

As my mentor recently pointed out, this is my "proof-of-work." It's a transparent record of what I am capable of and where my interests lie.

Finally, I've grown as a team player. Working with a global team across different time zones has taught me the importance of asynchronous communication and respecting everyone's time. Whether I am proposing a new logic or documenting a process, I am learning that open communication is just as vital as clean code.

02 Feb 2026 9:49pm GMT

Hellen Chemtai: Career Growth Through Open Source: A Personal Journey

Hello world 😀! I am an Outreachy intern working with the Debian OpenQA team on image testing. We get to know what career opportunities await us when we work on open source projects. In open source, we are constantly learning. The community has many different skill sets and a large network of people.

So, how did I start off in this internship?

I entered the community with these skills:

I was a newbie at OpenQA, but I had a month to learn and contribute. Time is a critical resource, but so is understanding what you are doing. I followed the installation instructions given, but whenever I got errors, I had to research why. I took time to understand the errors I was solving, then continued with the tasks I wanted to do. I communicated my logic and understanding while working on each task and awaited reviews and feedback. Within a span of two months, I had learned a lot by practicing and testing.

The skills I gained

As of today, I have gained these technical skills from my work with the Debian OpenQA team.

With open source comes the need for constant communication. The community is diverse, and the team is usually spread across different time zones. These are some of the soft / social skills I gained while working with the team:

With these skills and the willingness to learn, open source is a great area to focus on. Aside from your career, you will extend your network. My interests are set on open source and Linux in general. Working with a wider network has really skilled me up, and I will continue learning. Working with the Debian OpenQA team has been great. The team is great at communication, and I learn every day. The knowledge I gain from the team is helping me build a great career in open source.

02 Feb 2026 3:30pm GMT

Paul Tagliamonte: Paging all Radio Curious Hackers

After years of thinking about and learning about how radios work, I figured it was high time to start sharing more aggressively the things I've been learning. I had a ton of fun at DistrictCon Year 0, so it was a pretty natural place to pitch an RF-focused introductory talk.

I was selected for Year 1 and was able to give my first ever RF-related talk, about how to set off restaurant pagers (including one on stage!) by reading and writing IQ directly using a little bit of stdlib-only Python.

This talk is based around the work I've written about previously (here, here and here), but the "all-in-one" form factor was something I was hoping would help encourage folks out there to take a look under the hood of some of the gear around them.

(In case the iframe above isn't working, direct link to the YouTube video recording is here)

I've posted my slides from the talk at PARCH.pdf to hopefully give folks some time to flip through them directly.

All in all, the session was great - it was truly humbling to see so many folks interested in hearing me talk about radios. I had a bit of an own goal in picking a 20-minute format, so the talk is paced wrong (it feels like it went way too fast). Hopefully being able to see the slides and pause the video is helpful.

We had a short ad-hoc session after where I brought two sets of pagers and my power switch; but unfortunately we didn't have anyone who was able to trigger any of the devices on their own (due to a mix of time between sessions and computer set-up). Hopefully it was enough to get folks interested in trying this on their own!

02 Feb 2026 2:50pm GMT

feedPlanet Lisp

Gábor Melis: Untangling Literate Programming

Classical literate programming

A literate program consists of interspersed narrative and code chunks. From this, source code to be fed to the compiler is generated by a process called tangling, and documentation by weaving. The specifics of tangling vary, but the important point is that this puts the human narrative first and allows complete reordering and textual combination of chunks at the cost of introducing an additional step into the write-compile-run cycle.

The general idea

It is easy to mistake this classical implementation of literate programming for the more general idea that we want to

  1. present code to human readers in pedagogical order with narrative added, and

  2. make changing code and its documentation together easy.

The advantages of literate programming follow from these desiderata.

Untangled LP

In many languages today, code order is far more flexible than in the era of early literate programming, so the narrative order can be approximated to some degree using docstrings and comments. Code and its documentation are side by side, so changing them together should also be easy. Since the normal source code now acts as the LP source, there is no more tangling in the programming loop. This is explored in more detail here.

Pros and cons

Having no tangling is a great benefit, as we get to keep our usual programming environment and tooling. On the other hand, bare-bones untangled LP suffers from the following potential problems.

  1. Order mismatches: Things like inline functions and global variables may need to be defined before use. So, code order tends to deviate from narrative order to some degree.

  2. Reduced locality: Our main tool to sync code and narrative is factoring out small, meaningful functions, which is just good programming style anyway. However, this may be undesirable for reasons of performance or readability. In such a case, we might end up with a larger function. Now, if we have only a single docstring for it, then it can be non-obvious which part of the code a sentence in the docstring refers to because of their distance and the presence of other parts.

  3. No source code only view: Sometimes we want to see only the code. In classical LP, we can look at the tangled file. In untangled LP, editor support for hiding the narrative is the obvious solution.

  4. No generated documentation: There is no more tangling nor weaving, but we still need another tool to generate documentation. Crucially, generating documentation is not in the main programming loop.

In general, whether classical or untangled LP is better depends on the severity of the above issues in the particular programming environment.

The Lisp and PAX view

MGL-PAX, a Common Lisp untangled LP solution, aims to minimize the above problems and fill in the gaps left by dropping tangling.

  1. Order

    • Common Lisp is quite relaxed about the order of function definitions, but not so much about DEFMACRO, DEFVAR, DEFPARAMETER, DEFCONSTANT, DEFTYPE, DEFCLASS, DEFSTRUCT, DEFINE-COMPILER-MACRO, SET-MACRO-CHARACTER, SET-DISPATCH-MACRO-CHARACTER, DEFPACKAGE. However, code order can for the most part follow narrative order. In practice, we end up with some DEFVARs far from their parent DEFSECTIONs (but DECLAIM SPECIAL helps).

    • DEFSECTION controls documentation order. The references to Lisp definitions in DEFSECTION determine narrative order independently from the code order. This allows the few ordering problems to be patched over in the generated documentation.

    • Furthermore, because DEFSECTION can handle the exporting of symbols, we can declare the public interface piecemeal, right next to the relevant definitions, rather than in a monolithic DEFPACKAGE.

  2. Locality

    • Lisp macros replace chunks in the rare, complex cases where a chunk is not a straightforward text substitution but takes parameters. Unlike text-based LP chunks, macros must operate on valid syntax trees (S-expressions), so they cannot be used to inject arbitrary text fragments (e.g. an unclosed parenthesis).

      This constraint forces us to organize code into meaningful, syntactic units rather than arbitrary textual fragments, which results in more robust code. Within these units, macros allow us to reshape the syntax tree directly, handling scoping properly where text interpolation would fail.

    • PAX's NOTE is an extractable, named comment. NOTE can interleave with code within e.g. functions to minimize the distance between the logic and its documentation.

    • Also, PAX hooks into the development environment to provide easy navigation in the documentation tree.

  3. Source code only view: PAX supports hiding verbose documentation (sections, docstrings, comments) in the editor.

  4. Generating documentation

    • PAX extracts docstrings, NOTEs and combines them with narrative glue in DEFSECTIONs.

    • Documentation can be generated as static HTML/PDF files for offline reading or browsed live (in an Emacs buffer or via an in-built web server) during development.

    • LaTeX math is supported in both PDF and HTML (via MathJax, whether live or offline).

In summary, PAX accepts a minimal deviation in code/narrative order but retains the original, interactive Lisp environment (e.g. SLIME/Sly), through which it offers optional convenience features like extended navigation, live browsing, and hiding documentation in code. In return, we give up easy fine-grained control over typesetting the documentation - a price well worth paying in Common Lisp.

02 Feb 2026 12:00am GMT

01 Feb 2026

feedPlanet Lisp

Joe Marshall: Some Libraries

Zach Beane has released the latest Quicklisp beta (January 2026), and I am pleased to have contributed to this release. Here are the highlights:

Selected Functions

Dual numbers

DERIVATIVE function → function

Returns a new unary function that computes the exact derivative of the given function at any point x.

The returned function utilizes Dual Number arithmetic to perform automatic differentiation. It evaluates f(x + ε), where ε is the dual unit (an infinitesimal such that ε² = 0). The result is extracted from the infinitesimal part of the computation.

f(x + ε) = f(x) + f'(x)ε

This method avoids the precision errors of numerical approximation (finite difference) and the complexity of symbolic differentiation. It works for any function composed of standard arithmetic operations and elementary functions supported by the dual-numbers library (e.g., sin, exp, log).

Example

(defun square (x) (* x x))

(let ((df (derivative #'square)))
  (funcall df 5)) 
;; => 10
    

Implementation Note

The implementation relies on the generic-arithmetic system to ensure that mathematical operations within function can accept and return dual-number instances seamlessly.

Function

BINARY-COMPOSE-LEFT binary-fn unary-fn → function
BINARY-COMPOSE-RIGHT binary-fn unary-fn → function

Composes a binary function B(x, y) with a unary function U(z) applied to one of its arguments.

(binary-compose-left B U)(x, y) ≡ B(U(x), y)
(binary-compose-right B U)(x, y) ≡ B(x, U(y))

These combinators are essential for "lifting" unary operations into binary contexts, such as when folding a sequence where elements need preprocessing before aggregation.

Example

;; Summing the squares of a list
(fold-left (binary-compose-right #'+ #'square) 0 '(1 2 3))
;; => 14  ; (+ (+ (+ 0 (sq 1)) (sq 2)) (sq 3))
    

FOLD

FOLD-LEFT function initial-value sequence → result

Iterates over sequence, calling function with the current accumulator and the next element. The accumulator is initialized to initial-value.

This is a left-associative reduction. The function is applied as:

(f ... (f (f initial-value x0) x1) ... xn)

Unlike CL:REDUCE, the argument order for function is strictly defined: the first argument is always the accumulator, and the second argument is always the element from the sequence. This explicit ordering eliminates ambiguity and aligns with the functional programming convention found in Scheme and ML.

Arguments

  • function: A binary function taking (accumulator, element).
  • initial-value: The starting value of the accumulator.
  • sequence: A list or vector to traverse.

Example

(fold-left (lambda (acc x) (cons x acc))
           nil
           '(1 2 3))
;; => (3 2 1)  ; Effectively reverses the list
    

Named Let

LET bindings &body body → result
LET name bindings &body body → result

Provides the functionality of the "Named Let" construct, commonly found in Scheme. This allows for the definition of recursive loops within a local scope without the verbosity of LABELS.

The macro binds the variables defined in bindings as in a standard let, but also binds name to a local function that can be called recursively with new values for those variables.

(let name ((var val) ...) ... (name new-val ...) ...)

This effectively turns recursion into a concise, iterative structure. It is the idiomatic functional alternative to imperative loop constructs.

While commonly used for tail-recursive loops, the function bound by a named let is a first-class procedure that can be called anywhere or used as a value.

Example

;; Standard Countdown Loop
(let recur ((n 10))
  (if (zerop n)
      'blastoff
      (progn
        (print n)
        (recur (1- n)))))
    

Implementation Note

The named-let library overloads the standard CL:LET macro to support this syntax directly if the first argument is a symbol. This allows users to use let uniformly for both simple bindings and recursive loops.
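A macro with this dispatch-on-first-argument behavior might look like the following sketch. MY-LET is a hypothetical stand-in name; the actual named-let library's definition may differ in details.

```lisp
(defmacro my-let (&rest forms)
  "LET that also accepts Scheme-style named-let syntax:
   (MY-LET name ((var val) ...) body...)."
  (if (symbolp (first forms))
      ;; Named variant: expand into LABELS so NAME is callable recursively.
      (destructuring-bind (name bindings &body body) forms
        `(labels ((,name ,(mapcar #'first bindings)
                    ,@body))
           (,name ,@(mapcar #'second bindings))))
      ;; Plain variant: defer to the standard LET.
      `(let ,@forms)))
```

The countdown example above expands into a LABELS form whose local function is immediately applied to the initial values, which is why the bound name can also escape as a value.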

01 Feb 2026 11:15pm GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

29 Jan 2026 11:00pm GMT

feedPlanet Lisp

Joe Marshall: Advent of Code 2025, brief recap

I did the Advent of Code this year using Common Lisp. Last year I attempted to use the series library as the primary iteration mechanism to see how it went. This year, I just wrote straightforward Common Lisp. It would be super boring to walk through the solutions in detail, so I've decided to just give some highlights here.

Day 2: Repeating Strings

Day 2 is easily dealt with using the Common Lisp sequence manipulation functions giving special consideration to the index arguments. Part 1 is a simple comparison of two halves of a string. We compare the string to itself, but with different start and end points:

(defun double-string? (s)
  (let ((l (length s)))
    (multiple-value-bind (mid rem) (floor l 2)
      (and (zerop rem)
           (string= s s
                    :start1 0 :end1 mid
                    :start2 mid :end2 l)))))

Part 2 asks us to find strings which are made up of some substring repeated multiple times.

(defun repeating-string? (s)
  (search s (concatenate 'string s s)
          :start2 1
          :end2 (- (* (length s) 2) 1)
          :test #'string=))
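This is the classic rotation trick: a string is a proper repetition of some substring exactly when it occurs in its own doubling at an offset strictly between 0 and its length, and the :start2/:end2 bounds exclude the two trivial matches at offsets 0 and |s|. A quick check (repeating the definition so the snippet stands alone):

```lisp
(defun repeating-string? (s)
  "Return the shortest period of S when S is a proper repetition, else NIL."
  (search s (concatenate 'string s s)
          :start2 1
          :end2 (- (* (length s) 2) 1)
          :test #'string=))

(repeating-string? "abcabcabc") ; => 3, the period (a true value)
(repeating-string? "abcabd")    ; => NIL
```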

Day 3: Choosing digits

Day 3 has us maximizing a number by choosing a set of digits where we cannot change the relative position of the digits. A greedy algorithm works well here. Assume we have already chosen some digits and are now looking to choose the next digit. We accumulate the digit on the right. Now if we have too many digits, we discard one. We choose to discard whichever digit gives us the maximum resulting value.

(defun omit-one-digit (n)
  (map 'list #'digit-list->number (removals (number->digit-list n))))
                    
> (omit-one-digit 314159)
(14159 34159 31159 31459 31419 31415)
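The helpers NUMBER->DIGIT-LIST, DIGIT-LIST->NUMBER, and REMOVALS are not shown in the post; plausible reconstructions (my guesses, not the original code) could be:

```lisp
(defun number->digit-list (n)
  "Decimal digits of non-negative N, most significant first."
  (map 'list #'digit-char-p (princ-to-string n)))

(defun digit-list->number (digits)
  "Inverse of NUMBER->DIGIT-LIST."
  (reduce (lambda (acc d) (+ (* acc 10) d)) digits :initial-value 0))

(defun removals (list)
  "All lists obtained by dropping exactly one element of LIST."
  (loop for i below (length list)
        collect (append (subseq list 0 i) (subseq list (1+ i)))))
```

With these definitions, (omit-one-digit 314159) returns the six candidates shown above.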

(defun best-n (i digit-count)
  (fold-left (lambda (answer digit)
               (let ((next (+ (* answer 10) digit)))
                 (if (> next (expt 10 digit-count))
                     (fold-left #'max most-negative-fixnum (omit-one-digit next))
                     next)))
             0
             (number->digit-list i)))

(defun part-1 ()
  (collect-sum
   (map-fn 'integer (lambda (i) (best-n i 2))
           (scan-file (input-pathname) #'read))))

(defun part-2 ()
  (collect-sum
   (map-fn 'integer (lambda (i) (best-n i 12))
           (scan-file (input-pathname) #'read))))

Day 6: Columns of digits

Day 6 has us manipulating columns of digits. If you have a list of columns, you can transpose it to a list of rows using this one liner:

(defun transpose (matrix)
  (apply #'map 'list #'list matrix))
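Because MAP with several sequence arguments stops at the shortest one, this also handles ragged rows by truncating. For example:

```lisp
(defun transpose (matrix)
  "Transpose a list of rows (lists) into a list of columns."
  (apply #'map 'list #'list matrix))

(transpose '((1 2 3)
             (4 5 6))) ; => ((1 4) (2 5) (3 6))
```

One caveat with the APPLY trick: a matrix with more rows than CALL-ARGUMENTS-LIMIT would be too large to splice into a single call, though that limit is rarely hit in practice.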

Days 8 and 10: Memoizing

Day 8 has us counting paths through a beam splitter apparatus while Day 10 has us counting paths through a directed graph. Both problems are easily solved using a depth-first recursion, but the number of solutions grows exponentially and soon takes too long for the machine to return an answer. If you memoize the function, however, it completes in no time at all.
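The post doesn't show the memoizer itself; a generic version (a sketch, not necessarily what was used) caches results in a hash table keyed on the full argument list:

```lisp
(defun memoize (fn)
  "Return a caching version of FN, keyed on the argument list."
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (multiple-value-bind (value present-p) (gethash args cache)
        (if present-p
            value
            (setf (gethash args cache) (apply fn args)))))))
```

For a recursive path counter, the recursive calls must also go through the cache, e.g. by redirecting the global binding: (setf (symbol-function 'count-paths) (memoize #'count-paths)).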

29 Jan 2026 7:22pm GMT

26 Jan 2026

feedFOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?舰

26 Jan 2026 11:00pm GMT