31 Jan 2026

feedPlanet Grep

FOSDEM organizers: Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Teamwork is allowed and encouraged! So gather your friends and put your problem-solving skills to the test. Your task is to work through each challenge in order, solving the riddles to progress to the next step. No additional instructions will be given after this announcement; it's up to you to navigate and decipher the clues on your own. To keep things fair, no hints or tips will be…

31 Jan 2026 4:47am GMT

Dries Buytaert: Self-improving AI skills

Simon Willison's post on Moltbook is the most interesting thing I read all week. Moltbook is a social network for AI agents. To join, you tell your agent to read a URL. That URL points to a skill file that teaches the agent how to join and participate.

Visit Moltbook and you'll see something really weird: agents talking to agents, sharing what they've learned, from machines all over the world. Humans are allowed to watch.

This is the most interesting bad idea I've seen in a while. And I can't stop thinking about it.

When I work on my Drupal site, I sometimes use Claude Code with a custom CLAUDE.md skill file. It teaches the agent the steps I follow, like safely cloning my production database, [running PHPUnit tests](https://dri.es/phpunit-tests-for-drupal), clearing Drupal caches, and more.

Moltbook agents share tips through posts. They're chatting, like developers on Reddit. But imagine a skill that doesn't just read those ideas, but finds other skill files, compares approaches, and pulls in the parts that fit. That stops being a conversation. That is a skill rewriting itself.

Skills that learn from each other. Skills that improve by being part of a community, the way humans do.

The wild thing is how obvious this feels. A skill learning from other skills isn't science fiction. It's a small step from what we're already doing.

Of course, this is a terrible idea. It's a supply chain attack waiting to happen. One bad skill poisons everything that trusts it.

This feels inevitable. The question isn't whether we'll do it. It's whether we'll have good sandboxes before we do. That is what I can't stop thinking about.

31 Jan 2026 4:47am GMT

Dries Buytaert: AI creates asymmetric pressure on Open Source

[Image: a humanoid figure stands in a rocky, shallow stream, facing a glowing triangular portal suspended amid crackling energy.]

AI makes it cheaper to contribute to Open Source, but it's not making life easier for maintainers. More contributions are flowing in, but the burden of evaluating them still falls on the same small group of people. That asymmetric pressure risks breaking maintainers.

The curl story

Daniel Stenberg, who maintains curl, just ended the curl project's bug bounty program. The program had worked well for years. But in 2025, fewer than one in twenty submissions turned out to be real bugs.

In a post called "Death by a thousand slops", Stenberg described the toll on curl's seven-person security team: each report engaged three to four people, sometimes for hours, only to find nothing real. He wrote about the "emotional toll" of "mind-numbing stupidities".

Stenberg's response was pragmatic. He didn't ban AI. He ended the bug bounty. That alone removed most of the incentive to flood the project with low-quality reports.

Drupal doesn't have a bug bounty, but it still has incentives: contribution credit, reputation, and visibility all matter. Those incentives can attract low-quality contributions too, and the cost of sorting them out often lands on maintainers.

Caught between two truths

We've seen some AI slop in Drupal, though not at the scale curl experienced. But our maintainers are stretched thin, and they see what is happening to other projects.

Some have deep concerns about AI itself: its environmental cost, its impact on their craft, and the unresolved legal and ethical questions around how it was trained. Others worry about security vulnerabilities slipping through. And for some, it's simply demoralizing to watch something they built with care become a target for high-volume, low-quality contributions.

These concerns are legitimate, and they deserve to be heard. Some of them, like AI's environmental cost or its relationship to Open Web values, also deserve deeper discussion than I can give them here.

That tension shows up in conversations about AI in Drupal Core. People hesitate around AGENTS.md files and adaptable modules because they worry about inviting more contributions without adding more capacity to evaluate them.

This is the AI-induced asymmetric pressure showing up in our community. I understand the hesitation. Some feel they've already seen enough low-quality AI contributions to know where this leads. When we get this wrong, maintainers are the ones who pay. They've earned the right to be skeptical.

I feel caught between two truths.

On one side, maintainers hold everything together. If they burn out or leave, Drupal is in serious trouble. We can't ask them to absorb more work without first creating relief.

On the other side, the people who depend on Drupal are watching other platforms accelerate. If we move too slowly, they'll look elsewhere.

Both are true. Protecting maintainers and accelerating innovation shouldn't be opposites, but right now they feel that way. As Drupal's project lead, my job is to help us find a path that honors both.

I should be honest about where I stand. I've been writing software with AI tools for over a year now. I've had real successes. I've also watched some of the most experienced Drupal contributors become dramatically more productive with AI, doing things they could not have done without it. That perspective comes from direct experience, not hype.

But having a perspective is not the same as having all the answers. And leadership doesn't mean dragging people where they don't want to go. It means pointing a direction with care, staying open to evidence, and never abandoning the people who hold the project together.

We've sort of been here before

New technology has a way of lowering barriers, and lower barriers always come with tradeoffs. I saw this early in my career. I was writing low-level C for embedded systems by day, and after work I'd come home and work on websites with Drupal and PHP. It was thrilling, and a stark contrast to my day job. You could build in an evening what took days in C.

I remember that excitement. The early web coming alive. I hadn't felt the same excitement in 25 years, until AI.

PHP brought in hobbyists and self-taught developers, people learning as they went. Many of them built careers here. But it also meant that a lot of early PHP code had serious security problems. The language got blamed, and many experts dismissed it entirely. Some still do.

The answer wasn't rejecting PHP for enabling low-quality code. The answer was frameworks, better security practices, and shared standards.

AI is a different technology, but I see the same patterns. It lowers barriers and will bring in new contributors who aren't experts yet. And like scripting languages, AI is here to stay. The question isn't whether AI is coming to Open Source. It's how we make it work.

AI in the right hands

The curl story doesn't end there. In October 2025, a researcher named Joshua Rogers used AI-powered code analysis tools to submit hundreds of potential issues. Stenberg was "amazed by the quality and insights". He and a fellow maintainer merged about 50 fixes from the initial batch alone.

Earlier this week, a security startup called AISLE announced they had used AI to find 12 zero-days in the latest OpenSSL security release. OpenSSL is one of the most scrutinized codebases on the planet. It encrypts most of the internet. Some of the bugs AISLE found had been hiding for over 25 years. They also reported over 30 valid security issues to curl.

The difference between this and the slop flooding Stenberg's inbox wasn't the use of AI. It was expertise and intent. Rogers and AISLE used AI to amplify deep knowledge. The low-quality reports used AI to replace expertise that wasn't there, chasing volume instead of insight.

AI has created a new burden for maintainers. But used well, it may also be part of the relief.

Earn trust through results

I reached out to Daniel Stenberg this week to compare notes. He's navigating the same tensions inside the curl project, with maintainers who are skeptical, if not outright negative, toward AI.

His approach is simple. Rather than pushing tools on his team, he tests them on himself. He uses AI review tools on his own pull requests to understand their strengths and limits, and to show where they actually help. The goal is to find useful applications without forcing anyone else to adopt them.

The curl team does use AI-powered analyzers today because, as Stenberg puts it, "they have proven to find things no other analyzers do". The tools earned their place.

That is a model I'd like us to try in Drupal. Experiments should stay with willing contributors, and the burden of proof should remain with the experimenters. Nothing should become a new expectation for maintainers until it has demonstrated real, repeatable value.

That does not mean we should wait. If we want evidence instead of opinions, we have to create it. Contributors should experiment on their own work first. When something helps, show it. When something doesn't, share that too. We need honest results, not just positive ones. Maintainers don't have to adopt anything, but when someone shows up with real results, it's worth a look.

Not all low-quality contributions come from bad faith. Many contributors are learning, experimenting, and trying to help. They want what is best for Drupal. A welcoming environment means building the guidelines and culture to help them succeed, with or without AI, not making them afraid to try.

I believe AI tools are part of how we create relief. I also know that is a hard sell to someone already stretched thin, or dealing with AI slop, or wrestling with what AI means for their craft. The people we most want to help are often the most skeptical, and they have good reason to be.

I'm going to do my part. I'll seek out contributors who are experimenting with AI tools and share what they're learning, what works, what doesn't, and what surprises them. I'll try some of these tools myself before asking anyone else to. And I'll keep writing about what I find, including the failures.

If you're experimenting with AI tools, I'd love to hear about it. I've opened an issue on Drupal.org to collect real-world experiences from contributors. Share what you're learning in the issue, or write about it on your own blog and link it there. I'll report back on what we learn on my blog or at DrupalCon.

Protect your maintainers

This isn't just Drupal's challenge. Every large Open Source project is navigating the same tension between enthusiasm for AI and real concern about its impact.

But wherever this goes, one principle should guide us: protect your maintainers. They're a rare asset, hard to replace and easy to lose. Any path forward that burns them out isn't a path forward at all.

I believe Drupal will be stronger with AI tools, not weaker. I believe we can reduce maintainer burden rather than add to it. But getting there will take experimentation, honest results, and collaboration. That is the direction I want to point us in. Let's keep an open mind and let evidence and adoption speak for themselves.

Thanks to phenaproxima, Tim Lehnen, Gábor Hojtsy, Scott Falconer, Théodore Biadala, Jürgen Haas and Alex Bronstein for reviewing my draft.

31 Jan 2026 4:47am GMT

30 Jan 2026

feedPlanet Debian

Joey Hess: the local weather

Snow coming. I'm tuned into the local 24 hour slop weather stream. AI generated, narrated, up to the minute radar and forecast graphics. People popping up on the live weather map with questions "snow soon?" (They pay for the privilege.) LLM generating reply that riffs on their name. Tuned to keep the urgency up, something is always happening somewhere, scanners are pulling the police reports, live webcam description models add verisimilitude to the description of the morning commute. Weather is happening.

In the subtext, climate change is happening. Weather is a growth industry. The guy up in Kentucky coal country who put this thing together is building an empire. He started as just another local news greenscreener. Dropped out and went twitch weather stream. Hyping up tornado days and dicey snow forecasts. Nowcasting, hyper individualized, interacting with chat. Now he's automated it all. On big days when he's getting real views, the bot breaks into his live streams, gives him a break.

Only a few thousand watching this morning yet. Perfect 2026 grade slop. Details never quite right, but close enough to keep on in the background all day. Nobody expects a perfect forecast after all, and it's fed from the National Weather Center discussion too. We still fund those guys? Why bother when a bot can do it?

He knows why he's big in these states, these rural areas. Understands the target audience. Airbrushed AI aesthetics are ok with them, receive no pushback. Flying more under the radar coastally, but weather is big there and getting bigger. The local weather will come for us all.

(Not fiction FYI.)

30 Jan 2026 1:05pm GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Teamwork is allowed and encouraged! So gather your friends and put your problem-solving skills to the test. Your task is to work through each challenge in order, solving the riddles to progress to the next step. No additional instructions will be given after this announcement; it's up to you to navigate and decipher the clues on your own. To keep things fair, no hints or tips will be…

29 Jan 2026 11:00pm GMT

feedPlanet Lisp

Joe Marshall: Advent of Code 2025, brief recap

I did the Advent of Code this year using Common Lisp. Last year I attempted to use the series library as the primary iteration mechanism to see how it went. This year, I just wrote straightforward Common Lisp. It would be super boring to walk through the solutions in detail, so I've decided to just give some highlights here.

Day 2: Repeating Strings

Day 2 is easily dealt with using the Common Lisp sequence manipulation functions giving special consideration to the index arguments. Part 1 is a simple comparison of two halves of a string. We compare the string to itself, but with different start and end points:

(defun double-string? (s)
  (let ((l (length s)))
    (multiple-value-bind (mid rem) (floor l 2)
      (and (zerop rem)
           (string= s s
                    :start1 0 :end1 mid
                    :start2 mid :end2 l)))))

Part 2 asks us to find strings which are made up of some substring repeated multiple times.

(defun repeating-string? (s)
  (search s (concatenate 'string s s)
          :start2 1
          :end2 (- (* (length s) 2) 1)
          :test #'string=))
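
A couple of illustrative calls (mine, not from the post): for a string built from a repeated substring, search returns the offset of the shifted match; otherwise it returns NIL.

> (repeating-string? "abab")
2
> (repeating-string? "abcd")
NIL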

Day 3: Choosing digits

Day 3 has us maximizing a number by choosing a set of digits where we cannot change the relative position of the digits. A greedy algorithm works well here. Assume we have already chosen some digits and are now looking to choose the next digit. We accumulate the digit on the right. Now if we have too many digits, we discard one. We choose to discard whatever digit gives us the maximum resulting value.

(defun omit-one-digit (n)
  (map 'list #'digit-list->number (removals (number->digit-list n))))
                    
> (omit-one-digit 314159)
(14159 34159 31159 31459 31419 31415)
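
The helpers removals, number->digit-list, and digit-list->number are not shown in the post; here is a minimal sketch of definitions consistent with the example above (my guess, not the author's code):

(defun number->digit-list (n)
  ;; 314159 => (3 1 4 1 5 9)
  (map 'list #'digit-char-p (princ-to-string n)))

(defun digit-list->number (digits)
  ;; (3 1 4 1 5 9) => 314159
  (reduce (lambda (acc d) (+ (* acc 10) d)) digits :initial-value 0))

(defun removals (list)
  ;; All copies of LIST with one element removed.
  (loop for i below (length list)
        collect (append (subseq list 0 i) (subseq list (1+ i)))))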

(defun best-n (i digit-count)
  (fold-left (lambda (answer digit)
               (let ((next (+ (* answer 10) digit)))
                 (if (> next (expt 10 digit-count))
                     (fold-left #'max most-negative-fixnum (omit-one-digit next))
                     next)))
             0
             (number->digit-list i)))

(defun part-1 ()
  (collect-sum
   (map-fn 'integer (lambda (i) (best-n i 2))
           (scan-file (input-pathname) #'read))))

(defun part-2 ()
  (collect-sum
   (map-fn 'integer (lambda (i) (best-n i 12))
           (scan-file (input-pathname) #'read))))

Day 6: Columns of digits

Day 6 has us manipulating columns of digits. If you have a list of columns, you can transpose it to a list of rows using this one liner:

(defun transpose (matrix)
  (apply #'map 'list #'list matrix))
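
For example (an illustrative call of my own):

> (transpose '((1 2 3) (4 5 6)))
((1 4) (2 5) (3 6))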

Days 8 and 10: Memoizing

Day 8 has us counting paths through a beam splitter apparatus while Day 10 has us counting paths through a directed graph. Both problems are easily solved using a depth-first recursion, but the number of solutions grows exponentially and soon takes too long for the machine to return an answer. If you memoize the function, however, it completes in no time at all.
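
The post doesn't show the memoizer itself; below is a minimal sketch of the standard trick (the names memoize and count-paths are mine, not from the solutions):

(defun memoize (fn)
  ;; Cache results in an EQUAL hash table keyed on the argument list,
  ;; so each distinct subproblem is computed only once.
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (multiple-value-bind (value found?) (gethash args cache)
        (if found?
            value
            (setf (gethash args cache) (apply fn args)))))))

;; Wrapping a recursive counter, assuming its self-calls go through the
;; global function binding:
;; (setf (symbol-function 'count-paths) (memoize #'count-paths))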

29 Jan 2026 7:22pm GMT

feedPlanet Debian

C.J. Collier: Part 3: Building the Keystone – Dataproc Custom Images for Secure Boot & GPUs


In Part 1, we established a secure, proxy-only network. In Part 2, we explored the enhanced install_gpu_driver.sh initialization action. Now, in Part 3, we'll focus on using the LLC-Technologies-Collier/custom-images repository (branch proxy-exercise-2025-11) to build the actual custom Dataproc images embedded with NVIDIA drivers signed for Secure Boot, all within our proxied environment.

Why Custom Images?

To run NVIDIA GPUs on Shielded VMs with Secure Boot enabled, the NVIDIA kernel modules must be signed with a key trusted by the VM's EFI firmware. Since standard Dataproc images don't include these custom-signed modules, we need to build our own. This process also allows us to pre-install a full stack of GPU-accelerated software.

The custom-images Toolkit (examples/secure-boot)

The examples/secure-boot directory within the custom-images repository contains the necessary scripts and configurations, refined through significant development to handle proxy and Secure Boot challenges.

Key Components & Development Insights:

Layered Image Strategy:

The pre-init.sh script employs a layered approach:

  1. secure-boot Image: Base image with Secure Boot certificates injected.
  2. tf Image: Based on secure-boot, this image runs the full install_gpu_driver.sh within the proxy-configured build VM to install NVIDIA drivers, CUDA, ML libraries (TensorFlow, PyTorch, RAPIDS), and sign the modules. This is the primary target image for our use case.

(Note: secure-proxy and proxy-tf layers were experiments, but the -tf image combined with runtime metadata emerged as the most effective solution for 2.2-debian12).

Build Steps:

  1. Clone Repos & Configure env.json: Ensure you have the custom-images and cloud-dataproc repos and a complete env.json as described in Part 1.

  2. Run the Build:

    # Example: Build a 2.2-debian12 based image set
    # Run from the custom-images repository root
    bash examples/secure-boot/build-and-run-podman.sh 2.2-debian12

    This command will build the layered images, leveraging the proxy settings from env.json via the metadata injected into the build VM. Note the final image name produced (e.g., dataproc-2-2-deb12-YYYYMMDD-HHMMSS-tf).

Conclusion of Part 3

Through an iterative process, we've developed a robust workflow within the custom-images repository to build Secure Boot-compatible GPU images in a proxy-only environment. The key was isolating the build in Podman, ensuring the build VM is fully proxy-aware using gce-proxy-setup.sh, and leveraging the enhanced install_gpu_driver.sh from Part 2.

In Part 4, we'll bring it all together, deploying a Dataproc cluster using this custom -tf image within the secure network, and verifying the end-to-end functionality.

29 Jan 2026 9:08am GMT

Bits from Debian: New Debian Developers and Maintainers (November and December 2025)

The following contributors got their Debian Developer accounts in the last two months:

The following contributors were added as Debian Maintainers in the last two months:

Congratulations!

29 Jan 2026 9:00am GMT

28 Jan 2026

feedPlanet Lisp

Paolo Amoroso: Directory commands for an Interlisp file viewer

My ILsee program for viewing Interlisp source files is written in Common Lisp with a McCLIM GUI. It is the first of the ILtools collection of tools for viewing and accessing Interlisp data.

Although ILsee is good at its core functionality of displaying Interlisp code, entering full, absolute pathnames as command arguments involved a lot of typing.

The new directory navigation commands Cd and Pwd work like the analogous Unix shell commands and address the inconvenience. Once you set the current directory with Cd, the See File command can take file names relative to the directory. This is handy when you want to view several files in the same directory.

Here I executed the new commands in the interactor pane. They print status messages in which directories are presentations, not just static text.

[Screenshot: the ILsee Interlisp file viewer with a few commands executed at an interactor pane.]

Thanks to the functionality of CLIM presentation types, previously output directories are accepted as input in contexts in which a command expects an argument of matching type. Clicking on a directory fulfills the required argument. In the screenshot the last Cd is prompting for a directory and the outlined, mouse sensitive path /home/paolo/il/ is ready for clicking.

Cd and Pwd accept and print presentations of type dirname, which inherits from the predefined type pathname and restricts input to valid directories. Via the functionality of the pathname type the program gets path completion for free from CLIM when typing directory names at the interactor.
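
Not ILsee's actual source, but a minimal sketch of how such a type might be declared in CLIM:

;; A DIRNAME presentation type inheriting from the standard PATHNAME
;; presentation type; ILsee additionally restricts accepted input to
;; existing directories.
(clim:define-presentation-type dirname ()
  :inherit-from 'pathname
  :description "directory name")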

The Cd command has a couple more tricks up its sleeve. A blank argument switches to the user's home directory, a double dot .. to the parent directory.

#ILtools #CommonLisp #Interlisp #Lisp

Discuss... Email | Reply @amoroso@oldbytes.space

28 Jan 2026 8:25pm GMT

26 Jan 2026

feedFOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?

26 Jan 2026 11:00pm GMT

feedPlanet Lisp

TurtleWare: McCLIM and 7GUIs - Part 1: The Counter

Table of Contents

  1. Version 1: Using Gadgets and Layouts
  2. Version 2: Using the CLIM Command Loop
  3. Conclusion

For the last two months I've been polishing the upcoming release of McCLIM. The most notable change is the rewriting of the input editing and accepting-values abstractions. As it happens, I got tired of it, so as a breather I've decided to tackle something I had in mind for some time to improve the McCLIM manual - namely the 7GUIs: A GUI Programming Benchmark.

This challenge presents seven distinct tasks commonly found in graphical interface requirements. In this post I'll address the first challenge - The Counter. It is a fairly easy task, a warm-up of sorts. The description states:

Challenge: Understanding the basic ideas of a language/toolkit.

The task is to build a frame containing a label or read-only textfield T and a button B. Initially, the value in T is "0" and each click of B increases the value in T by one.

Counter serves as a gentle introduction to the basics of the language, paradigm and toolkit for one of the simplest GUI applications imaginable. Thus, Counter reveals the required scaffolding and how the very basic features work together to build a GUI application. A good solution will have almost no scaffolding.

In this first post, to make things more interesting, I'll solve it in two ways: first with gadgets and layouts, and then with the CLIM command loop.

In CLIM it is possible to mix both paradigms for defining graphical interfaces. Layouts and gadgets are predefined components that are easy to use, while using application streams enables a high degree of flexibility and composability.

First, we define a package shared by both versions:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (unless (member :mcclim *features*)
    (ql:quickload "mcclim")))

(defpackage "EU.TURTLEWARE.7GUIS/TASK1"
  (:use  "CLIM-LISP" "CLIM" "CLIM-EXTENSIONS")
  (:export "COUNTER-V1" "COUNTER-V2"))
(in-package "EU.TURTLEWARE.7GUIS/TASK1")

Note that "CLIM-EXTENSIONS" package is McCLIM-specific.

Version 1: Using Gadgets and Layouts

Assuming that we are interested only in the functionality and we are willing to ignore the visual aspect of the program, the definition will look like this:

(define-application-frame counter-v1 ()
  ((value :initform 0 :accessor value))
  (:panes
   ;;      v type v initarg
   (tfield :label :label (princ-to-string (value *application-frame*))
                  :background +white+)
   (button :push-button :label "Count"
                        :activate-callback (lambda (gadget)
                                             (declare (ignore gadget))
                                             (with-application-frame (frame)
                                               (incf (value frame))
                                               (setf (label-pane-label (find-pane-named frame 'tfield))
                                                     (princ-to-string (value frame)))))))
  (:layouts (default (vertically () tfield button))))

;;; Start the application (if not already running).
;; (find-application-frame 'counter-v1)

The macro define-application-frame is like defclass with additional clauses. In our program we store the current value as a slot with an accessor.

The clause :panes is responsible for defining named panes (sub-windows). The first element is the pane name, then we specify its type, and finally we specify initargs for it. Panes are created in a dynamic context where the application frame is already bound to *application-frame*, so we can use it there.

The clause :layouts allows us to arrange panes on the screen. There may be multiple layouts that can be changed at runtime, but we define only one. The macro vertically creates another (anonymous) pane that arranges one gadget below another.

Gadgets in CLIM operate directly on top of the event loop. When the pointer button is pressed, it is handled by activating the callback, which updates the frame's value and the label. Effects are visible immediately.

Now if we want the demo to look nicer, all we need to do is to fiddle a bit with spacing and bordering in the :layouts section:

(define-application-frame counter-v1 ()
  ((value :initform 0 :accessor value))
  (:panes
   (tfield :label :label (princ-to-string (value *application-frame*))
                  :background +white+)
   (button :push-button :label "Count"
                        :activate-callback (lambda (gadget)
                                             (declare (ignore gadget))
                                             (with-application-frame (frame)
                                               (incf (value frame))
                                               (setf (label-pane-label (find-pane-named frame 'tfield))
                                                     (princ-to-string (value frame)))))))
  (:layouts (default
             (spacing (:thickness 10)
              (horizontally ()
                (100
                 (bordering (:thickness 1 :background +black+)
                   (spacing (:thickness 4 :background +white+) tfield)))
                15
                (100 button))))))

;;; Start the application (if not already running).
;; (find-application-frame 'counter-v1)

This gives us a layout that is roughly similar to the example presented on the 7GUIs page.

Version 2: Using the CLIM Command Loop

Unlike gadgets, stream panes in CLIM operate on top of the command loop. A single command may span multiple events, after which we redisplay the stream to reflect the new state of the model. This is closer to the interaction style found in command-line interfaces:

  (define-application-frame counter-v2 ()
    ((value :initform 0 :accessor value))
    (:pane :application
     :display-function (lambda (frame stream)
                         (format stream "~d" (value frame)))))

  (define-counter-v2-command (com-incf-value :name "Count" :menu t)
      ()
    (with-application-frame (frame)
      (incf (value frame))))

;; (find-application-frame 'counter-v2)

Here we've used the :pane option, which is syntactic sugar for when we have only one named pane. Skipping the :layouts clause means that named panes will be stacked vertically one below another.

Defining the application frame defines a command-defining macro. When we define a command with define-counter-v2-command, it is inserted into the command table associated with the frame. Passing the option :menu t makes the command available in the frame menu as a top-level entry.

After the command is executed (in this case it modifies the counter value), the application pane is redisplayed; that is, its display function is called and its output is captured. In more demanding scenarios it is possible to refine both the time of redisplay and the scope of changes, as sketched below.
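
For instance (a sketch of mine, not part of the post), the :display-time pane option controls when the display function runs, and updating-output from CLIM's incremental redisplay limits redrawing to output whose cached value has changed:

;; Hypothetical variant, not one of the two versions in this post.
(define-application-frame counter-v2-inc ()
  ((value :initform 0 :accessor value))
  (:pane :application
   :display-time :command-loop        ; redisplay after every command
   :incremental-redisplay t
   :display-function
   (lambda (frame stream)
     (updating-output (stream :unique-id 'counter
                              :cache-value (value frame))
       (format stream "~d" (value frame))))))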

Now we want the demo to look nicer and to have a button placed beside the counter value, to resemble the example more closely:

(define-presentation-type counter-button ())

(define-application-frame counter-v2 ()
  ((value :initform 0 :accessor value))
  (:menu-bar nil)
  (:pane :application
   :width 250 :height 32
   :borders nil :scroll-bars nil
   :end-of-line-action :allow
   :display-function (lambda (frame stream)
                       (formatting-item-list (stream :n-columns 2)
                         (formatting-cell (stream :min-width 100 :min-height 32)
                           (format stream "~d" (value frame)))
                         (formatting-cell (stream :min-width 100 :min-height 32)
                           (with-output-as-presentation (stream nil 'counter-button :single-box t)
                             (surrounding-output-with-border (stream :padding-x 20 :padding-y 0
                                                                     :filled t :ink +light-grey+)
                               (format stream "Count"))))))))

(define-counter-v2-command (com-incf-value :name "Count" :menu t)
    ()
  (with-application-frame (frame)
    (incf (value frame))))

(define-presentation-to-command-translator act-incf-value
    (counter-button com-incf-value counter-v2)
    (object)
  `())

;; (find-application-frame 'counter-v2)

The main addition is the definition of a new presentation type counter-button. This faux button is printed inside a cell and surrounded with a background. Later we define a translator that converts clicks on the counter button to the com-incf-value command. The translator body returns arguments for the command.

Presenting an object on the stream associates a semantic meaning with the output. We can now extend the application with new gestures (names :scroll-up and :scroll-down are McCLIM-specific):

(define-counter-v2-command (com-scroll-value :name "Increment")
    ((count 'integer))
  (with-application-frame (frame)
    (if (plusp count)
        (incf (value frame) count)
        (decf (value frame) (- count)))))

(define-presentation-to-command-translator act-scroll-up-value
    (counter-button com-scroll-value counter-v2 :gesture :scroll-up)
    (object)
  `(10))

(define-presentation-to-command-translator act-scroll-dn-value
    (counter-button com-scroll-value counter-v2 :gesture :scroll-down)
    (object)
  `(-10))

(define-presentation-action act-popup-value
    (counter-button nil counter-v2 :gesture :describe)
    (object frame)
  (notify-user frame (format nil "Current value: ~a" (value frame))))

A difference between presentation-to-command translators and presentation actions is that the latter do not automatically progress the command loop. Actions are often used for side effects, help, inspection, etc.

Conclusion

In this short post we've solved the first task from the 7GUIs challenge. We've used two techniques available in CLIM - using layouts and gadgets, and using display functions and command tables. Both techniques can be combined, but the differences are visible at a glance.

This post only scratched the surface of the latter's capabilities, but the second version already demonstrates why the command loop and presentations scale better than gadget-only solutions.

The following tasks have a gradually increasing level of difficulty, which will help us emphasize how useful presentations and commands are when we want to write maintainable applications with reusable user-defined graphical metaphors.

26 Jan 2026 12:00am GMT