21 Oct 2021

feedPlanet Lisp

Quicklisp news: October 2021 Quicklisp dist update now available

New projects:

Updated projects: 3d-matrices, also-alsa, april, architecture.builder-protocol, bdef, beast, bike, bnf, bp, chameleon, check-bnf, chirp, ci-utils, cl+ssl, cl-ana, cl-ansi-term, cl-ansi-text, cl-async, cl-bloggy, cl-collider, cl-colors2, cl-cron, cl-data-structures, cl-dbi, cl-digraph, cl-environments, cl-form-types, cl-forms, cl-gearman, cl-gserver, cl-info, cl-kraken, cl-liballegro-nuklear, cl-libsvm, cl-marshal, cl-megolm, cl-mixed, cl-opencl, cl-opencl-utils, cl-patterns, cl-pdf, cl-permutation, cl-png, cl-readline, cl-schedule, cl-sdl2-mixer, cl-ses4, cl-telebot, cl-utils, cl-wave-file-writer, cl-webdriver-client, cl-webkit, cletris, clj-re, clog, closer-mop, cluffer, clunit2, clx, cmd, colored, common-lisp-jupyter, concrete-syntax-tree, consfigurator, core-reader, croatoan, cytoscape-clj, dartsclhashtree, data-frame, defmain, dfio, djula, dns-client, doc, doplus, easy-routes, eclector, esrap, fare-scripts, fof, fresnel, functional-trees, gadgets, gendl, generic-cl, glacier, gtirb-capstone, gute, harmony, hash-table-ext, helambdap, hunchenissr, imago, ironclad, jingoh, kekule-clj, lack, lambda-fiddle, lass, legit, lisp-namespace, lisp-stat, literate-lisp, log4cl, log4cl-extras, lsx, maiden, markup, math, matrix-case, mcclim, messagebox, mgl-pax, micmac, millet, mito, mnas-graph, mnas-hash-table, mnas-package, mnas-string, mutility, null-package, numerical-utilities, nyxt, omglib, osicat, parachute, petalisp, physical-quantities, plot, portal, postmodern, pp-toml, prompt-for, qlot, query-repl, quilc, read-as-string, resignal-bind, rove, rpcq, salza2, sel, serapeum, sha1, shasht, shop3, sketch, slite, smart-buffer, spinneret, staple, static-dispatch, stealth-mixin, structure-ext, swank-protocol, sycamore, tfeb-lisp-hax, tfeb-lisp-tools, tooter, trace-db, trestrul, trivia, trivial-with-current-source-form, uax-15, uncursed, vellum, vellum-postmodern, vgplot, vk, whirlog, with-c-syntax, zippy.

Removed projects: adw-charting, cl-batis, cl-bunny, cl-dbi-connection-pool, cl-reddit, cl-server-manager, corona, gordon, hemlock, hunchenissr-routes, prepl, s-protobuf, submarine, torta, trivial-swank, weblocks-examples, weblocks-prototype-js, weblocks-tree-widget, weblocks-utils.

To get this update, use (ql:update-dist "quicklisp").

There are a lot of removed projects this month. These projects no longer build with recent SBCL releases, and all bug reports have gone ignored for many months. If one of these projects is important to you, consider contributing to its maintenance and helping it work again.

Incidentally, this is the eleventh anniversary of the first Quicklisp dist release back in October 2010.

21 Oct 2021 1:47am GMT

TurtleWare: Selective waste collection

When an object in Common Lisp is no longer reachable it is garbage collected. Some implementations provide the functionality to set finalizers for these objects. A finalizer is a function that is run when the object becomes unreachable.

Whether the finalizer is run before the object is deallocated or after is a nuance differing between implementations.

On ABCL, CMU CL, LispWorks, Mezzano, SBCL and Scieneer CL the finalizer does not accept any arguments and it can't capture the finalized object (because then the object would always be reachable); effectively the object may already be deallocated when the finalizer runs. As the least common denominator, this is the approach taken in the portability library trivial-garbage.

(let* ((file (open "my-file"))
       (object (make-instance 'pseudo-stream :file file)))
  (flet ((finalize () (close file)))
    (trivial-garbage:set-finalizer object #'finalize)))

On the contrary, on ACL, CCL, Clasp, CLISP, Corman and ECL the finalizer accepts one argument - the finalized object. This relieves the programmer from deciding what should be captured, but puts the burden on them to ensure that there are no circular dependencies between finalized objects.

(let ((object (make-instance 'pseudo-stream :file (open "my-file"))))
  (flet ((finalize (stream) (close (slot-value stream 'file))))
    (another-garbage:set-finalizer object #'finalize)))

The first approach may for instance store weak pointers to objects with registered finalizers and when a weak pointer is broken then the finalizer is called.
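As an illustration, the weak-pointer strategy might be sketched like this (this is hypothetical pseudocode for the mechanism, not how any particular implementation works; register-finalizer and run-pending-finalizers are made-up names, and trivial-garbage's weak pointers stand in for the implementation's internal ones):

```lisp
;; Hypothetical sketch of the weak-pointer strategy.
(defvar *finalizers* '()) ; list of (weak-pointer . thunk)

(defun register-finalizer (object thunk)
  (push (cons (tg:make-weak-pointer object) thunk) *finalizers*))

(defun run-pending-finalizers () ; imagine the GC calls this after each cycle
  (setf *finalizers*
        (loop for (pointer . thunk) in *finalizers*
              if (tg:weak-pointer-value pointer)
                collect (cons pointer thunk) ; object still alive - keep it
              else
                do (funcall thunk))))        ; pointer broken - finalize
```

Note that the thunk must not capture the object itself, or the weak pointer would never break.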

The second approach requires more synchronization with the GC and, for some strategies, makes it possible for objects to temporarily escape collection - e.g. by stipulating that finalizers are executed in topological order, one per garbage collection cycle.

In this post I want to discuss a certain problem related to finalizers I've encountered in an existing codebase. Consider the following code:

(defclass pseudo-stream ()
  ((resource :initarg :resource :accessor resource)))

(defun open-pseudo-stream (uri)
  (make-instance 'pseudo-stream :resource (make-resource uri)))

(defun close-pseudo-stream (object)
  (destroy-resource (resource object)))

(defvar *pseudo-streams* (make-hash-table))

(defun reload-pseudo-streams ()
  (loop for uri in *uris*
        do (setf (gethash uri *pseudo-streams*)
                 (open-pseudo-stream uri))))

The function reload-pseudo-streams may be executed e.g. to invalidate caches. Its main problem is that it leaks resources by not closing a pseudo stream before opening a new one. If the resource consumes a file descriptor, then we'll eventually run out of them.

A naive solution is to close a stream after assigning a new one:

(defun reload-pseudo-streams/incorrect ()
  (loop for uri in *uris*
        for old = (gethash uri *pseudo-streams*)
        do (setf (gethash uri *pseudo-streams*)
                 (open-pseudo-stream uri))
           (close-pseudo-stream old)))

This solution is not good enough because it is prone to race conditions. In the example below we see that the old stream (now closed) may still be referenced after the new one is put in the hash table.

(defun nom-the-stream (uri)
  (let ((stream (gethash uri *pseudo-streams*)))
    (some-long-computation-1 stream)
    ;; reload-pseudo-streams/incorrect called, the stream is closed
    (some-long-computation-2 stream))) ;; <-- aaaa

This is the moment when you should consider abandoning the function reload-pseudo-streams/incorrect and using a finalizer. The new version of the function open-pseudo-stream destroys the resource only when the stream is no longer reachable, so the function nom-the-stream can safely nom.

When the finalizer accepts the object as an argument then it is enough to register the function close-pseudo-stream. Otherwise, since we can't close over the stream, we close over the resource and open-code destroying it.

(defun open-pseudo-stream (uri)
  (let* ((resource (make-resource uri))
         (stream (make-instance 'pseudo-stream :resource resource)))

    #+trivial-garbage ;; closes over the resource (not the stream)
    (flet ((finalizer () (destroy-resource resource)))
      (set-finalizer stream #'finalizer))

    #+another-garbage ;; doesn't close over anything
    (set-finalizer stream #'close-pseudo-stream)

    stream))


Story closed, the problem is fixed. It is late Friday afternoon, so we eagerly push the commit to the production system and leave home with a warm feeling of duty fulfilled. Two hours later all hell breaks loose and the system fails. The problem is the following function:

(defun run-client (stream)
  (assert (pseudo-stream-open-p stream))
  (loop for message = (read-message stream)
        do (process-message message)
        until (eql message :server-closed-connection)
        finally (close-pseudo-stream stream)))

The resource is released twice! The first time when the function run-client closes the stream and the second time when the stream is finalized. A fix for this issue depends on the finalization strategy:

#+trivial-garbage ;; just remove the reference
(defun close-pseudo-stream (stream)
  (setf (resource stream) nil))

#+another-garbage ;; remove the reference and destroy the resource
(defun close-pseudo-stream (stream)
  (when-let ((resource (resource stream)))
    (setf (resource stream) nil)
    (destroy-resource resource)))

With this, closing the stream doesn't interfere with the finalization. Hurray! Hopefully nobody noticed; it was late Friday afternoon after all. This little incident taught us to never push code before testing it.

We build the application from scratch, test it a little and... it doesn't work. After some investigation we find the culprit - a function that creates a new stream sharing the resource of an existing one and then closes it:

(defun invoke-like-a-good-citizen-with-pseudo-stream (original-stream fn)
  (let* ((resource (resource original-stream))
         (new-stream (make-instance 'pseudo-stream :resource resource)))
    (unwind-protect (funcall fn new-stream)
      (close-pseudo-stream new-stream))))

Thanks to our previous provisions, closing the stream doesn't collide with finalization; however, the resource is destroyed once for each finalized stream, because it is shared between distinct instances.

When the finalizer accepts the collected object as an argument, the solution is easy, because all we need is to finalize the resource instead of the pseudo stream (and honestly, we should have done that from the start!):

(defun open-pseudo-stream (uri)
  (let* ((resource (make-resource uri))
         (stream (make-instance 'pseudo-stream :resource resource)))
    (set-finalizer resource #'destroy-resource)
    stream))

(defun close-pseudo-stream (stream)
  (setf (resource stream) nil))

When the finalizer doesn't accept the object, we need a trick: finalize a shared wrapper instead of the verbatim resource. The downside is that we always need to unwrap it when using the resource.

(defun open-pseudo-stream (uri)
  (let* ((resource (make-resource uri))
         (wrapped (list resource))
         (stream (make-instance 'pseudo-stream :resource wrapped)))
    (flet ((finalize () (destroy-resource resource)))
      (set-finalizer wrapped #'finalize))
    stream))

(defun close-pseudo-stream (stream)
  (setf (resource stream) nil))

When writing this post I got too enthusiastic and dramatized a little about the production systems, but it is a fact that I once proposed a fix similar to the first finalization attempt in this post, and when it got merged it broke the production system. That didn't last long though, because the older build was deployed almost immediately. Cheers!

21 Oct 2021 12:00am GMT

19 Oct 2021

feedPlanet Lisp

Eitaro Fukamachi: Day 1: Roswell, as a Common Lisp implementation manager

This is my first public article in English. I've been sending out newsletters about what I've been doing only to sponsors, but there have been requests to publish my know-how on my blog, so I'm writing this way.

However, my English skills are still developing, so I can't suddenly deliver a lot of information at once. So instead, I'm going to start writing fragments of knowledge in the form of technical notes, little by little. The articles may not be in order. But I suppose each one would help somehow as a tip for your Common Lisp development.

When I thought of what I should start from, "Roswell" seemed appropriate, because most of the topics I want to cover depend on it.

It's been six years since Roswell was born. Although its usage has been expanding, I still feel that Roswell is underestimated, especially among the English community.

Not because of you. I think a lot of the reason for this is that the author is Japanese, like me, and has neglected to send out information in English.

If you are not familiar with Roswell or have tried it before but didn't get as much use out of it as you wanted, I hope this article will make you interested.

What's Roswell

Roswell has the following features:

It would be too much work to explain everything in a single article, so I will explain from the first one today: installation of Common Lisp implementations.


See the Official Installation Guide.

Installation of Common Lisp implementations

To install implementations with Roswell, use its "install" subcommand.

$ ros help install

To install a new Lisp implementaion:
   ros install impl [options]
or a system from the GitHub:
   ros install fukamachi/prove/v2.0.0 [repository... ]
or an asdf system from quicklisp:
   ros install quicklisp-system [system... ]
or a local script:
   ros install ./some/path/to/script.ros [path... ]
or a local system:
   ros install ./some/path/to/system.asd [path... ]

For more details on impl specific options, type:
   ros help install impl

Candidates impls for installation are:

For instance, SBCL, currently the most popular implementation, can be installed with sbcl-bin or sbcl.

# Install the latest SBCL binary
$ ros install sbcl-bin

# Install the SBCL 2.1.7 binary
$ ros install sbcl-bin/2.1.7

# Build and install the latest SBCL from the source
$ ros install sbcl

Since the Roswell author builds and hosts his own SBCL binaries, Roswell can install more binary versions than the official releases provide. So in most cases, you can just run ros install sbcl-bin/<version> to install a specific version of SBCL.

After installing a new Lisp, it automatically becomes the active one. To switch implementations/versions, the ros use command is available.

# Switch to SBCL 2.1.7 binary version
$ ros use sbcl-bin/2.1.7

# Switch to ECL of the latest installed version
$ ros use ecl

To see what implementations/versions are installed, ros list installed is available.

$ ros list installed
Installed implementations:

Installed versions of ecl:

Installed versions of sbcl-bin:

Installed versions of sbcl-head:

To check the active implementation, run ros run -- --version.

# Print the active implementation and its version
$ ros run -- --version
SBCL 2.1.7

Run REPL with Roswell

To start a REPL, execute ros run.

# Start the REPL of the active Lisp
$ ros run

# Start the REPL of a specific implementation/version
$ ros -L sbcl-bin/2.1.7 run

"sbcl" command needed?

For those of you who have been installing SBCL from a package manager, the lack of the sbcl command may be disconcerting. Some people rely on the "sbcl" command in their editor settings. As a workaround, a command such as the following will install an "sbcl" wrapper:

$ printf '#!/bin/sh\nexec ros -L sbcl-bin run -- "$@"\n' | \
    sudo tee /usr/local/bin/sbcl \
  && sudo chmod +x /usr/local/bin/sbcl

Though once you get used to it, I'm sure you'll naturally start using ros run.


I introduced the following subcommand/options in this article.

(Rough) Troubleshooting

If you have a problem like "Roswell worked fine at first but won't work after I updated SBCL," simply delete ~/.roswell .

Roswell writes all related files under that directory: configurations, Lisp implementations, Quicklisp libraries, etc. When the directory doesn't exist, Roswell creates and initializes it implicitly. So it's okay to delete ~/.roswell.

19 Oct 2021 2:26am GMT

TurtleWare: A curious case of HANDLER-CASE

Common Lisp is known among Common Lisp programmers for its excellent condition system. There are two operators for handling conditions: handler-case and handler-bind:

(handler-case (do-something)
  (error (condition)
    (format *debug-io* "The error ~s has happened!" condition)))

(handler-bind ((error
                 (lambda (condition)
                   (format *debug-io* "The error ~s has happened!" condition))))
  (do-something))

Their syntax is different, as well as their semantics. The most important semantic difference is that handler-bind doesn't unwind the dynamic state (i.e. the stack) and doesn't return on its own. On the other hand, handler-case first unwinds the dynamic state, then executes the handler and finally returns.

What does it mean? When do-something signals an error, then:

By "doing nothing" I mean that it does not handle the condition, so control flow proceeds to the next visible handler (and eventually invokes the debugger). To prevent that, it is enough to return from a block:

(block escape
  (handler-bind ((error
                   (lambda (condition)
                     (format *debug-io* "The error ~s has happened!" condition)
                     (return-from escape))))
    (do-something)))

With this it looks at a glance like both handler-case and handler-bind behave in a similar manner. That brings us to the essential part of this post: handler-case is not suitable for printing the backtrace! Try the following:

(defun do-something ()
  (error "Hello world!"))

(defun try-handler-case ()
  (handler-case (do-something)
    (error (condition)
      (trivial-backtrace:print-backtrace condition))))

(defun try-handler-bind ()
  (handler-bind ((error
                   (lambda (condition)
                     (trivial-backtrace:print-backtrace condition)
                     (return-from try-handler-bind))))
    (do-something)))

When we invoke try-handler-case then the top of the backtrace is

2: ((FLET "FUN1" :IN TRY-HANDLER-CASE) #<SIMPLE-ERROR "Hello world!" {1002D77DD3}>)

While when we invoke try-handler-bind then the backtrace contains the function do-something:

2: ((FLET "H0" :IN TRY-HANDLER-BIND) #<SIMPLE-ERROR "Hello world!" {1002D9CE23}>)
3: (SB-KERNEL::%SIGNAL #<SIMPLE-ERROR "Hello world!" {1002D9CE23}>)
4: (ERROR "Hello world!")

Printing the backtrace of where the error was signaled is certainly more useful than printing the backtrace of where it was handled.

This post doesn't exhibit all practical differences between both operators. I hope that it will be useful for some of you. Cheers!

19 Oct 2021 12:00am GMT

18 Oct 2021

feedPlanet Lisp

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32):
controlling relays connected to I2C via a PCF8574

relay module connected to I2C via a PCF8574; click for a larger version (180 kB).

Looking at the data sheet of the PCF8574 I found that it is trivially simple to use it to control a relay board without any lower-level Arduino library: just write a second byte, in addition to the address, to the I2C bus directly with uLisp's WITH-I2C.

Each bit of the byte describes the state of one of the eight outputs, or rather its inverted state, as the PCF8574 has open-drain outputs: setting an output to LOW opens a connection to ground (with up to 25 mA), while HIGH disables the relay. (The data sheets actually say they are push-pull outputs, but as a high-level output the maximum current is just 1 mA, which is not much and for this purpose certainly not enough.)

The whole job can basically be done with one or two lines. Here is switching on the fourth relay (that is, number 3 with zero-based counting):

(with-i2c (str #x20)
  (write-byte (logand #xff (lognot (ash 1 3))) str))
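Building on that, one could keep the inverted output byte in a variable so that individual relays can be switched without disturbing the others. This is a sketch of my own; SET-RELAY and *RELAY-STATE* are made-up names, not part of the article's library:

```lisp
(defvar *relay-state* #xff) ; all outputs HIGH = all relays off

(defun set-relay (n on)
  (setf *relay-state*
        (if on
            (logand *relay-state* (lognot (ash 1 n))) ; pull output LOW - relay on
            (logior *relay-state* (ash 1 n))))        ; release to HIGH - relay off
  (with-i2c (str #x20)
    (write-byte *relay-state* str)))
```

For example, (set-relay 3 t) switches the fourth relay on while leaving the other seven outputs unchanged.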

Here is my whole initial library:

Read the whole article.

18 Oct 2021 3:47pm GMT

15 Oct 2021

feedPlanet Lisp

TurtleWare: How do you DO when you do DO?

In this short post I'll explain my understanding of the following quote describing the iteration construct do:

The Common Lisp do macro can be thought of as syntactic sugar for tail recursion, where the initial values for variables are the argument values on the first function call, and the step values are argument values for subsequent function calls.

-- Peter Norvig and Kent Pitman, Tutorial on Good Lisp Programming Style

Writing a recursive function usually involves three important parts:

  1. The initial values - arguments the programmer passes to the function
  2. The base case - a case when function may return without recurring
  3. The step values - arguments the function passes to itself when recurring

An example of a recursive function is this (inefficient) definition:

(defun fib (n)
  (cond
    ((= n 0) 0)
    ((= n 1) 1)
    (t (+ (fib (- n 1))
          (fib (- n 2))))))

The initial value here is n, the base cases are (= n 0) and (= n 1), and the step values are (- n 1) and (- n 2).

To make a function tail-recursive there is one more important requirement: the recursive call must be in a tail position, that is, it must be the last function called. The definition above is not tail-recursive, because we first need to call the function and then add the results. A proper tail-recursive version requires a little gymnastics:

(defun fib* (n)
  (labels ((fib-inner (n-2 n-1 step)
             (if (= step n)
                 (+ n-2 n-1)
                 (fib-inner n-1
                            (+ n-2 n-1)
                            (1+ step)))))
    (cond
      ((= n 0) 0)
      ((= n 1) 1)
      (t (fib-inner 0 1 2)))))

The initial values are 0, 1 and 2, the base case is (= step n) and the step values are n-1, (+ n-2 n-1) and (1+ step). The function fib-inner is in tail position because there is no more computation after its invocation.

A quick reminder how do works:

(do ((a 1 (foo a))
     (b 3 (bar b)))
    ((= a b) 42)
  (side-effect! a b))

First assign to a and b the initial values 1 and 3, then check the base case (= a b) and if true return 42; otherwise execute the body (side-effect! a b) and finally update a and b by assigning them the step values (foo a) and (bar b). Then repeat from checking the base case. The last step can be likened to an implicit tail call of a function. Let's put it now in terms of the function we've defined earlier:

(defun fib** (n)
  (cond
    ((= n 0) 0)
    ((= n 1) 1)
    (t (do ((n-2 0 n-1)
            (n-1 1 (+ n-2 n-1))
            (step 2 (1+ step)))
           ((= step n)
            (+ n-2 n-1))))))

This do form is a direct translation of the function fib-inner defined earlier.
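More generally, any do form of this shape can be rewritten mechanically as a local tail-recursive function. A sketch using the placeholder functions from the reminder above:

```lisp
;; the DO form from before ...
(do ((a 1 (foo a))
     (b 3 (bar b)))
    ((= a b) 42)
  (side-effect! a b))

;; ... corresponds to this tail-recursive loop
(labels ((iter (a b)             ; A and B play the role of arguments
           (if (= a b)           ; the base case
               42
               (progn
                 (side-effect! a b)
                 (iter (foo a) (bar b)))))) ; tail call with the step values
  (iter 1 3))                    ; the initial values
```

This is exactly the "syntactic sugar for tail recursion" reading from the Norvig and Pitman quote.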

I hope that you've enjoyed this short explanation. If you did then please let me know on IRC - my handle is jackdaniel @ libera.chat.

15 Oct 2021 12:00am GMT

14 Oct 2021

feedPlanet Lisp

Joe Marshall: Update October 2021

Here's a few things I've been playing with lately.

jrm-code-project/utilities has a few utilities that I commonly use. Included are utilities/lisp/promise and utilities/lisp/stream which provide S&ICP-style streams (lazy lists). utilities/lisp/utilities is a miscellaneous grab bag of functions and macros.

jrm-code-project/homographic is a toolkit for linear fractional transforms (homographic functions). In addition to basic LFT functionality, it provides examples of exact real arithmetic using streams of LFTs.

jrm-code-project/LambdaCalculus has some code for exploring lambda calculus.

jrm-code-project/CLRLisp is an experimental Lisp based on the .NET Common Language Runtime. The idea is that instead of trying to adapt a standard Lisp implementation to run on the .NET CLR, we just add a bare-bones eval and apply that use the CLR reflection layer and see what sort of Lisp naturally emerges. At this point, it only just shows signs of life: there are lambda expressions and function calls, but no definitions, conditionals, etc. You can eval lists: (System.Console.WriteLine "Hello World."), but I haven't written a reader and printer, so it is impractical for coding.

14 Oct 2021 2:13pm GMT

13 Oct 2021

feedPlanet Lisp

Thomas Fitzsimmons: Mezzano on Librebooted ThinkPads

I decided to try running Mezzano on real hardware. I figured my Librebooted ThinkPads would be good targets, since, thanks to Coreboot and the Linux kernel, I have reference source code for all the hardware.

On boot, these machines load Libreboot from SPI flash; included in this Libreboot image is GRUB, as a Coreboot payload.

Mezzano, on the other hand, uses the KBoot bootloader. I considered chainloading KBoot from GRUB, but I wondered if I could have GRUB load the Mezzano image directly, primarily to save a video mode switch.

I didn't want to have to reflash the Libreboot payload on each modification (writing to SPI flash is slow and annoying to recover from if something goes wrong), so I tried building a GRUB module "out-of-tree" and loading it in the existing GRUB. Eventually I got this working, at which point I could load the module from a USB drive, allowing fast development iteration. (I realize out-of-tree modules are non-ideal so if there's interest I may try to contribute this work to GRUB.)

The resulting GRUB module, mezzano.mod, is largely the KBoot Mezzano loader code, ported to use GRUB facilities for memory allocation, disk access, etc. It's feature-complete, so I released it to Sourcehut. (I've only tested it on Libreboot GRUB, not GRUB loaded by other firmware implementations.)

Here's a demo of loading Mezzano on two similar ThinkPads:

GRUB Mezzano module demo

For ease of use, mezzano.mod supports directly loading the mezzano.image file generated by MBuild - instead of requiring that mezzano.image be dd'd to a disk. It does so by skipping the KBoot partitions to find the Mezzano disk image. The T500 in the video is booted this way. Alternatively, mezzano.mod can load the Mezzano disk image from a device, as is done for the W500 in the video. Both methods look for the Mezzano image magic - first at byte 0 and, failing that, just after the KBoot partitions.

I added the set-i8042-bits argument because Coreboot does not set these legacy bits, yet Mezzano's PS/2 keyboard and mouse drivers expect them; at this point Mezzano does not have a full ACPI device tree implementation.

13 Oct 2021 5:38am GMT

12 Oct 2021

feedPlanet Lisp

Vsevolod Dyomkin: Watching a Model Train

Last week, I did a quick hack that quite delighted me: I added a way to visually watch the progress of training my MGL-based neural networks inside Emacs. And then people on twitter asked me to show the code. So, it will be here, but first I wanted to rant a bit about one of my pet peeves.


In the age of Jupyter and TensorBoard, adding a way to see an image that records the value of a loss function blinking on the screen - "huh, big deal" you would say. But I believe this example showcases a difference between low-tech and high-tech approaches. Just recently I chatted with one of my friends who is entering software engineering at a rather late age (30+), and we talked of how frontend development became even more complicated than backend one (while, arguably, the complexity of tasks solved on the frontend is significantly lower). And that discussion just confirmed to me that the tendency to overcomplicate things is always there, with our pop-culture industry, surely, following it. But I always tried to stay on the simple side, on the side of low-tech solutions. And that's, by the way, one of the reasons I chose to stick with Lisp: with it, you would hardly be forced into some nonsense framework hell, or playing catch-up with the constant changes of your environment, or following crazy "best practices". Lisp is low-tech just like the Unix command-line or vanilla Python or JS. Contrary to the high-tech Rust, Haskell or Java. Everything text-based is also low-tech: text-based data formats, text-based visualization, text-based interfaces.

So, what is low-tech, after all? I saw the term popularized by Kris De Decker from the Low-Tech Magazine, which focuses on using simple (perhaps, outdated by some standards) technologies for solving serious engineering problems. Most people, and the software industry is no exception, are after high-tech, right? Progress of technology enables solving more and more complex tasks. And, indeed, that happens. Sometimes, not always. Sometimes, the whole thing crumbles, but that's a different story. Yet, even when it happens, there's a catch, a negative side-effect: the barrier of entry rises. If 5 or 10 years ago it was enough to know HTML, CSS, and JavaScript to be a competent frontend developer, now you have to learn a dozen more things: convoluted frameworks, complicated deploy toolchains, etc., etc. Surely, sometimes it's inevitable, but it really delights me when you can avoid all the bloat and use simple tools to achieve the same result. OK, maybe not completely the same, maybe not a perfect one. But good enough. The venerable 80% solution that requires 20% effort.

Low-tech is not low-quality, it's low-barrier of entry.

And I would argue that, in the long run, better progress in our field will be made if we strive towards lowering the bar to more people in, than if we continue raising it (ensuring our "job security" this way). Which doesn't mean that the technologies should be primitive (like BASIC). On the contrary, the most ingenious solutions are also the simplest ones. So, I'm going to continue this argument in the future posts I'd like to write about interactive programming. And now, back to our hacks.

Getting to Terms with MGL

In my recent experiments I returned to MGL - an advanced, although pretty opinionated, machine learning library by the prolific Gabor Melis - for playing around with neural networks. Last time, a few years ago, I stumbled when I tried to use it to reproduce a very advanced (by that time's standards) recurrent neural network and failed. Yet, before that, I was very happy using it (or rather, its underlying MGL-MAT library) for running in Lisp (in production) some of the neural networks that were developed by my colleagues. I know it's usually the other way around: Lisp for prototyping, some high-tech monstrosity for production, but we managed to turn the tides for some time :D

So, this time, I decided to approach MGL step by step, starting from simple building blocks. First, I took on training a simple feed-forward net with a number of word inputs converted to vectors using a word2vec-like approach.

This is the network I created. Jumping slightly ahead: I've experimented with several variations of the architecture, starting from a single-hidden-layer MLP, and this one has worked the best so far. As you can see, it has 2 hidden layers (l1/l1-l and l2/l2-l) and performs 2-class classification. It also uses dropout after each of the layers as a standard means of regularization in the training process.

(defun make-nlp-mlp (&key (n-hidden 100))
  (mgl:build-fnn (:class 'nlp-mlp)
    (in (->input :size *input-len*))
    (l1-l (->activation in :size n-hidden))
    (l1 (->relu l1-l))
    (d1 (->dropout l1 :dropout 0.5))
    (l2-l (->activation d1 :size (floor n-hidden 2)))
    (l2 (->relu l2-l))
    (d2 (->dropout l2 :dropout 0.5))
    (out-l (->activation d2 :size 2))
    (out (->softmax-xe-loss out-l))))

MGL model definition is somewhat different from the approach one might be used to with Keras or TF: you don't imperatively add layers to the network; instead, you define all the layers at once in a declarative fashion. A typical Lisp style it is. Yet, what remains not totally clear to me is the best way to assemble layers when the architecture is not a straightforward one-direction or recurrent one, but combines several parts in nonstandard ways. That's where I stumbled previously. I plan to get to that over time, but if someone has good examples already, I'd be glad to take a look at those. Unfortunately, despite the proven high quality of MGL, there's very little open-source code that uses it.

Now, to make a model train (and watch it), we have to pass it to mgl:minimize alongside a learner:

(defun train-nlp-fnn (&key data (batch-size 100) (epochs 1000) (n-hidden 100)
                           (random-state *random-state*))
  (let ((*random-state* random-state)
        (*agg-loss* ())
        (opt (make 'mgl:segmented-gd-optimizer
                   :termination (* epochs batch-size)
                   :segmenter (constantly
                               (make 'mgl:adam-optimizer
                                     :n-instances-in-batch batch-size))))
        (fnn (make-nlp-mlp :n-hidden n-hidden)))
    ;; initialize the weights of each layer
    (mgl:map-segments (lambda (layer)
                        (mgl:gaussian-random!
                         (mgl:nodes layer)
                         :stddev (/ 2 (reduce '+ (mgl:mat-dimensions
                                                  (mgl:nodes layer))))))
                      fnn)
    ;; log the cost and redraw the graph after every batch
    (mgl:monitor-optimization-periodically
     opt
     `((:fn mgl:reset-optimization-monitors :period ,batch-size :last-eval 0)
       (:fn draw-test-error :period ,batch-size)))
    (mgl:minimize opt (make 'mgl:bp-learner
                            :bpn fnn
                            :monitors (mgl:make-cost-monitors
                                       fnn :attributes `(:event "train")))
                  :dataset (sample-data data (* epochs batch-size)))
    fnn))

This code is rather complex, so let me try to explain each part.

Now, let's take a look at the function that is drawing the graph:

(defun draw-test-error (opt learner)
  ;; here, we print out the architecture and parameters of
  ;; our model and learning algorithm
  (when (zerop (mgl:n-instances opt))
    (describe opt)
    (describe (mgl:bpn learner)))
  ;; here, we rely on the fact that there's
  ;; just a single cost monitor defined
  (let ((mon (first (mgl:monitors learner))))
    ;; using some of RUTILS syntax sugar here to make the code terser
    (push (pair (+ (? mon 'counter 'denominator)
                   (if-it (first *agg-loss*)
                          (lt it)
                          0))
                (? mon 'counter 'numerator))
          *agg-loss*)
    (redraw-loss-graph)))

(defun redraw-loss-graph (&key (file "/tmp/loss.png") (smoothing 10))
  (adw-charting:with-chart (:line 800 600)
    (adw-charting:add-series "Loss" *agg-loss*)
    (adw-charting:add-series
     (fmt "Smoothed^~a Loss" smoothing)
     (loop :for i :from 0
           :for off := (* smoothing (1+ i))
           :while (< off (length *agg-loss*))
           :collect (pair (? *agg-loss* (- off (floor smoothing 2)) 0)
                          (/ (reduce ^(+ % (rt %%))
                                     (subseq *agg-loss* (- off smoothing) off)
                                     :initial-value 0)
                             smoothing))))
    (adw-charting:set-axis :y "Loss" :draw-gridlines-p t)
    (adw-charting:set-axis :x "Iteration #")
    (adw-charting:save-file file)))

Using this approach, I could also draw the change of the validation loss on the same graph. And I'll do that in the next version.

ADW-CHARTING is my go-to library when I need to draw a quick-and-dirty chart. As you can see, it is very straightforward to use and doesn't require a lot of explanation. I've looked into a couple of other charting libraries and liked their demo screenshots (probably more than ADW-CHARTING's style), but there were some blockers that prevented me from switching to them. Maybe next time I'll have more inclination.
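For anyone who hasn't seen it, a minimal ADW-CHARTING session follows the same pattern as the function above; the data points here are, of course, made up:

```lisp
;; Minimal ADW-CHARTING sketch: plot one series and save it as a PNG.
;; The series data is invented for illustration.
(adw-charting:with-chart (:line 400 300)
  (adw-charting:add-series "Loss"
                           '((0 1.0) (100 0.6) (200 0.4) (300 0.35)))
  (adw-charting:set-axis :y "Loss" :draw-gridlines-p t)
  (adw-charting:set-axis :x "Iteration #")
  (adw-charting:save-file "/tmp/example.png"))
```

with-chart sets up the canvas, add-series takes a name and a list of (x y) points, and save-file renders everything to disk in one go.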

To complete the picture, we now need to display our learning progress not just as text running in the console (produced by the standard cost-monitor), but also by updating the graph. This is where Emacs' nature as a Swiss-army knife for any interactive workflow came into play. Sure enough, there was already an existing auto-revert-mode that updates the contents of an Emacs buffer on any change, or periodically. For my purposes, I've added these lines to my Emacs config:

(setq auto-revert-use-notify nil)
(setq auto-revert-interval 6) ; refresh every 6 seconds

Obviously, this can be abstracted away into a function which could be invoked by pressing some key or upon other conditions occurring.
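As a sketch of what such an abstraction could look like (the function name is made up, and the interval is set globally because in recent Emacs versions auto-revert polling runs off a shared timer):

```elisp
;; Hypothetical helper: open the loss chart and keep it auto-refreshing.
(defun my/watch-loss-graph (&optional file)
  "Display FILE (default /tmp/loss.png) with periodic auto-revert."
  (interactive)
  (setq auto-revert-use-notify nil
        auto-revert-interval 6)  ; poll every 6 seconds
  (let ((buf (find-file-noselect (or file "/tmp/loss.png"))))
    (with-current-buffer buf
      (auto-revert-mode 1))
    (display-buffer buf)))
```

Bound to a key, this opens the chart in a side window and lets auto-revert-mode pick up each redraw of the PNG as training proceeds.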

A nice lisp/interactive dev hack I did today: watching an MGL neural net train with a chart dynamically redrawn by adw-charting and updated inside Emacs by auto-revert-mode. Slime/Emacs - the ultimately customizable interactive experience FTW pic.twitter.com/y5ie9Xm20D

- Vsevolod (@vseloved) October 6, 2021

12 Oct 2021 10:50am GMT

04 Oct 2021


Nicolas Hafner: Patch update, holidays, and the GIC - October Kandria Update


A shorter monthly update for once, as this month included two weeks of holidays, in which we were morally, legally, and by higher powers unbeknownst to us, forbidden from working on Kandria. Regardless, progress was made, and news is here to be shared, so an update was written!

Kandria Patch Update

A patch update is live that fixes the issues that were reported to us through the automated crash system. You can get the update from the usual link on the mailing list. I'm happy to say that there were not a lot of these to fix!

Of course, if you haven't yet had time to check out the new demo release, we hope you'll do so soon! We're always very excited to hear people's thoughts on what we have so far.


We don't have too much to show for this update, as we had a nice two weeks of holiday. I, for my part, spent some days down in Italy, which was really nice. Great weather for the most part, and it was great to have a proper change of scenery and schedule for once!


Because I'm me and can't help it, I did do some work as well during my holidays. I made some good progress on a project I've had in the slow cooker for years now, which will ultimately also be useful for Kandria, to support its modding system. But, more importantly to me, I finally got back into drawing more regularly again and made some pieces I'm actually quite happy with.

Wow! If what I wrote sounded any more confident, I'd have to mistake it for a toilet paper advertisement!

Game Industry Conference

Later this month is the Game Industry Conference, in which we'll be taking part. Once again, sponsored thanks to Pro Helvetia! I submitted a talk for the conference and actually got accepted for it as well, so I'll be presenting there in person. I don't know the exact date of my talk yet, but I'll be sure to announce it ahead of time on Twitter as soon as I do know.

If you're in Poland or are attending the conference yourself, let me know! I'd be happy to meet up!


This was a shorter month for me as I was on holiday for weeks. However, I've been researching key streamers and influencers that I'd previously highlighted for us, since they'd covered games similar to Kandria, and sending out emails to promote the new demo. Nick got some great feedback from Chris Zukowski on how to format these, which involved finding a hook in an influencer's content that I could latch onto, to show I'd actually engaged with their content. Nick also fed back on the horizontal slice quest designs I've been doing, which map out the rest of the narrative to the end of the game. This has been great to get some new eyes and steers on, and will help tighten up the content and manage the scope.

Fred & Mikel

These two have already started into content for the new areas. We're currently hashing out the look and feel, for which the second region is already getting close to final. We don't have any screenshots or music to show you yet though, you'll have to be a bit more patient for that.


As always, let's look at the roadmap from last month.

Oops! None of the items we had on there last time changed yet. But, some other important things were added and fixed already. Anyway, we'll start to get to the other things this month.

Until that's ready, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think if you do or have already!

04 Oct 2021 9:07am GMT