30 Jun 2022

feedPlanet Lisp

Tycho Garen : Common Gotchas

This is a post I wrote a long time ago and never posted, but I've started getting back into doing some work in Common Lisp and thought it'd be good to send this one off.

On my recent "(re)learn Common Lisp" journey, I've happened across a few things that I've found frustrating or confusing: this post is a collection of them, in hopes that other people don't struggle with them:

  • When implementing an existing generic function for a class of your own, if you want other callers to use your method implementation, you must import the generic function; otherwise other callers will (might?) fall back to another method. This makes sense in retrospect, but definitely wasn't clear on the first go.
  • As a related follow-on, you don't have to define a generic function in order to write or use a method, and I've found that using methods is actually quite nice for doing some type checking. At the same time, it can get you into a pickle if you later add the generic function and it isn't exported/imported as you want.
  • Property lists seem cool for a small, lightweight mapping, but they're annoying to handle as part of public APIs, mostly because they're indistinguishable from regular lists. Association lists are preferable, and maybe, with make-hash-table, even hash tables.
  • Declaring data structures inline is particularly awkward. I sometimes want to build a list, an alist, or a hash map inline, and it's difficult to do that in a terse way that doesn't involve building the structure programmatically. I've been writing (list (cons "a" t) (cons "b" nil)) sorts of things, which I don't love.
  • If you have a variadic macro (i.e. one that takes &rest args), or I suppose any kind of macro, and you have its arguments in a list, there's no way, outside of eval, to call the macro. This is super annoying, and it makes macros significantly less appealing as part of public APIs. My current conclusion is that macros are great when you want to add syntax to make the code you're writing clearer or to introduce a new paradigm, but for things that could also be a function, or are thin wrappers on a function, just use a function.
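For the inline-data gotcha above, here is a small sketch of the terser options (the variable names are mine). A quoted literal is constant data, so it must not be mutated; alexandria:plist-alist, from the Alexandria library, converts at runtime:

```lisp
;; verbose but safe to mutate:
(defparameter *verbose* (list (cons "a" t) (cons "b" nil)))

;; terse, but a literal constant: don't mutate it.
(defparameter *literal* '(("a" . t) ("b" . nil)))

;; with Alexandria, from a plist:
;; (alexandria:plist-alist (list :a t :b nil))  ; => ((:A . T) (:B . NIL))
```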

30 Jun 2022 12:00am GMT

24 Jun 2022


vindarel: State of Common Lisp Web Development - an overview

Caution [from 2017: what a road since then]: this is a draft. I take notes and write more in other resources (the Cookbook, my blog).

update, June 2022: see my web project skeleton, it illustrates and fixes common issues: https://github.com/vindarel/cl-cookieweb

update, July 5th 2019: I put this content into the Cookbook: https://lispcookbook.github.io/cl-cookbook/web.html, fixing a long-standing request.

new post: why and how to live-reload one's running web application: https://lisp-journey.gitlab.io/blog/i-realized-that-to-live-reload-my-web-app-is-easy-and-convenient/

new project skeleton: lisp-web-template-productlist: Hunchentoot + easy-routes + Djula templates + Bulma CSS + a Makefile to build the project

See also the Awesome CL list.

Information is at the moment scarce and spread apart; Lisp web frameworks and libraries evolve and take different approaches.

I'd like to know what's possible, what's lacking, see how to quickstart everything, see code snippets and, most of all, see how to do things that I couldn't do before such as hot reloading, building self-contained executables, shipping a multiplatform web app.

Prior notice:

Some people sell ten-page ebooks or publish their tutorial on Gitbook to have a purchase option. I prefer to enhance the collaborative Cookbook (I am by far the main contributor). You can tip me on Ko-fi if you like: https://ko-fi.com/vindarel Thanks!


Web application environments

Clack, Lack

Clack is to Lisp what WSGI is to Python. However, it is mostly undocumented and not as battle-tested as Hunchentoot.
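To make this concrete, here is a minimal Clack application sketch, assuming Clack is loaded (e.g. via (ql:quickload "clack")). An app is just a function taking the request environment and returning a (status headers body) list:

```lisp
(defvar *handler*
  (clack:clackup
   (lambda (env)
     (declare (ignore env))
     ;; status, headers plist, body as a list of strings:
     '(200 (:content-type "text/plain") ("Hello, Clack!")))
   :port 5000))

;; stop it later with:
;; (clack:stop *handler*)
```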

Web frameworks


Hunchentoot

The de-facto web server, with the best documentation (cough, looking old, cough) and the most websites in production. Lower level than a web framework (defining routes seems weird at first). Worth knowing, I think.

Its terminology is different from what we are used to ("routes" are not called routes; we create handlers), partly I don't know why and partly because Lisp's image-based development allows for more, and thus needs more terminology. For example, we can run two applications on different URLs in the same image.
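As a sketch of what routing-by-handlers looks like (this mirrors the define-easy-handler example from the Hunchentoot documentation; the port is arbitrary):

```lisp
(ql:quickload "hunchentoot")

;; a handler: its :uri, not a "route", decides when it runs.
(hunchentoot:define-easy-handler (say-yo :uri "/yo") (name)
  (setf (hunchentoot:content-type*) "text/plain")
  (format nil "Hey~@[ ~a~]!" name))

;; an acceptor listens on a port; several can coexist in one image.
(defvar *acceptor*
  (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 4242)))
```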


edit: here's a modern looking page: https://digikar99.github.io/common-lisp.readthedocs/hunchentoot/


Caveman

A popular web framework, or so it seems by the GitHub stars, written by a super-productive lisper, with nice documentation for basic stuff but lacking for the rest. Based on Clack (a webserver interface, think Python's WSGI), it uses Hunchentoot by default.

I feel like basic operations are too cumbersome (e.g. accessing URL parameters).



Snooze

By the maintainer of Sly and Emacs' Yasnippet,...

Defining routes is like defining functions. It has built-in features that are only available as extensions in Clack-based frameworks (settings to get a stacktrace in the browser, to fire up the debugger, or to return a 404,...). Definitely worth exploring.



Radiance, with extensive tutorial and existing apps.

It doesn't look like a web framework to me. It has ready-to-use components:



a library for writing REST Web APIs in Common Lisp.

Features: validation via schemas, Swagger support, authentication, logging, caching, permission checking...

It seems complete, it is maintained, the author seems to be doing web development in CL for a living. Note to self: I want to interview him.



Wookie

An asynchronous web server, by an impressive lisper who built many async libraries. Used for the Turtl API backend. Dealing with async brings its own set of problems (how will debugging go?).

Nice api to build routes, good documentation: http://wookie.lyonbros.com/

Weblocks (solving the Javascript problem)

Weblocks lets us create dynamic pages without a line of JavaScript, all in Lisp. It was started years ago and has seen a large update and refactoring lately.

It isn't the easy path to web development in CL but there's great potential IMO.

It doesn't do double data binding as in modern JS frameworks. But new projects are being developed...

See our presentation below.



Accessing url parameters

It is easy and well explained with Hunchentoot or easy-routes in the Cookbook.
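For reference, here is a sketch with easy-routes (the route and parameter names are mine; the API follows its README):

```lisp
;; :name in the URL is bound as a variable in the body;
;; &get pulls a query-string parameter.
(easy-routes:defroute say-hello ("/hello/:name" :method :get) (&get greeting)
  (format nil "~a, ~a!" (or greeting "Hello") name))
```

So /hello/Alice?greeting=Hi would answer "Hi, Alice!".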

Lucerne has a nice with-params macro that makes accessing post or url query parameters a breeze:

@route app (:post "/tweet")
(defview tweet ()
  (if (lucerne-auth:logged-in-p)
      (let ((user (current-user)))
        (with-params (tweet)
          (utweet.models:tweet user tweet))
        (redirect "/"))
      (render-template (+index+)
                       :error "You are not logged in.")))

Snooze's way is simple and lispy: we define routes like methods and parameters as keys:

(defroute lispdoc (:get :text/* name &key (package :cl) (doctype 'function))

matches /lispdoc, /lispdoc/foo and /lispdoc/foo?package=arg.

On the contrary, I find Caveman's and Ningle's ways cumbersome.


(setf (ningle:route *app* "/hello/:name")
      #'(lambda (params)
          (format nil "Hello, ~A" (cdr (assoc "name" params :test #'string=)))))

The above controller will be invoked when you access "/hello/Eitaro" or "/hello/Tomohiro", and then (cdr (assoc "name" params :test #'string=)) will be "Eitaro" and "Tomohiro".

and it doesn't say anything about query parameters. I had to ask:

(assoc "the-query-param" (clack.request:query-parameter lucerne:*request*) :test 'string=)


Parameter keys containing square brackets ("[" & "]") will be parsed as structured parameters. You can access the parsed parameters as _parsed in routers.

(defroute "/edit" (&key _parsed)
  (format nil "~S" (cdr (assoc "person" _parsed :test #'string=))))
;=> "((\"name\" . \"Eitaro\") (\"email\" . \"e.arrows@gmail.com\") (\"birth\" . ((\"year\" . 2000) (\"month\" . 1) (\"day\" . 1))))"

Sessions and cookies

Data storage


Mito works for MySQL, Postgres and SQLite3 on SBCL and CCL.


We can define models with a regular class that has the mito:dao-table-class metaclass:

(defclass user ()
  ((name :col-type (:varchar 64)
         :initarg :name
         :accessor user-name)
   (email :col-type (:varchar 128)
          :initarg :email
          :accessor user-email))
  (:metaclass mito:dao-table-class)
  (:unique-keys email))

We create the table with ensure-table-exists:

(ensure-table-exists 'user)
; CREATE TABLE IF NOT EXISTS "user" (
;       "name" VARCHAR(64) NOT NULL,
;       "email" VARCHAR(128),
;       "created_at" TIMESTAMP,
;       "updated_at" TIMESTAMP
;   ) () [0 rows] | MITO.DAO:ENSURE-TABLE-EXISTS

Persistent datastores


Mito has migrations support and DB schema versioning for MySQL, Postgres and SQLite3, on SBCL and CCL. Once we have changed our model definition, we have commands to see the generated SQL and to apply the migration.

We inspect the SQL (suppose we just added the email field to the user class above):

(mito:migration-expressions 'user)
;=> (#<SXQL-STATEMENT: ALTER TABLE user ALTER COLUMN email TYPE character varying(128), ALTER COLUMN email SET NOT NULL>
;    #<SXQL-STATEMENT: CREATE UNIQUE INDEX unique_user_email ON user (email)>)

and we can apply the migration:

(mito:migrate-table 'user)
;-> ;; ALTER TABLE "user" ALTER COLUMN "email" TYPE character varying(128), ALTER COLUMN "email" SET NOT NULL () [0 rows] | MITO.MIGRATION.TABLE:MIGRATE-TABLE
;   ;; CREATE UNIQUE INDEX "unique_user_email" ON "user" ("email") () [0 rows] | MITO.MIGRATION.TABLE:MIGRATE-TABLE
;-> (#<SXQL-STATEMENT: ALTER TABLE user ALTER COLUMN email TYPE character varying(128), ALTER COLUMN email SET NOT NULL>
;    #<SXQL-STATEMENT: CREATE UNIQUE INDEX unique_user_email ON user (email)>)

Crane advertises automatic migrations, i.e. it would run them after a C-c C-c. Unfortunately, Crane has some issues: it doesn't work with SQLite yet and the author is busy elsewhere. It didn't work for me on first try.

Let's hope the author comes back to work on it in the near future.


There are a few libraries; see the awesome-cl list. At least one is actively maintained.


On an error we are usually dropped into the interactive debugger by default.

Snooze gives options:

clack-errors. Like a Flask or Django stacktrace in the browser. For Caveman, Ningle and family.

By default, when Clack throws an exception while rendering a page, the server waits for the response until it times out, while the exception waits in the REPL. This isn't very useful. So now there's this.

It prints the stacktrace along with some request details in the browser. It can return a custom error page in production.


Are you tired of jumping to your web browser every time you need to test your work in Clack? Clack-pretend will capture and replay calls to your clack middleware stack. When developing a web application with clack, you will often find it inconvenient to run your code from the lisp REPL because it expects a clack environment, including perhaps, cookies or a logged-in user. With clack-pretend, you can run prior web requests from your REPL, moving development back where it belongs.


Testing with a local DB: example of a testing macro.

We would use envy to switch configurations.


Oauth, Job queues, etc

Templating engines


Djula: as Django templates. Good documentation. Comes by default in Lucerne and Caveman.

We also use a dot to access attributes of dict-like variables (plists, alists, hash-tables, arrays and CLOS objects), a feature backed by the access library.

We once wanted to use structs and didn't find how to do it directly in Djula, so we resorted to a quick helper function to transform the struct into an alist.

Defining custom template filters is straightforward in Djula, really a breeze compared to Django.
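A sketch of such a filter, assuming Djula is loaded (the filter name is mine):

```lisp
(djula:def-filter :shout (val)
  (string-upcase val))

;; in a template: {{ user.name | shout }}
```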

Eco - a mix of html with lisp expressions.

Truncated example:

      <% if posts %>
        <h1>Recent Posts</h1>
        <ul id="post-list">
          <% loop for (title . snippet) in posts %>
            <li><%= title %> - <%= snippet %></li>
          <% end %>


I prefer the semantics of Spinneret over cl-who. It also has more features (like embeddable markdown, warns on malformed html, and more).
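A minimal Spinneret sketch, assuming (ql:quickload "spinneret"); note how plain Lisp control flow mixes with the HTML forms:

```lisp
(spinneret:with-html-string
  (:ul
   (dolist (fruit '("apple" "pear"))
     (:li fruit))))
```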



Parenscript is a translator from an extended subset of Common Lisp to JavaScript. Parenscript code can run almost identically on both the browser (as JavaScript) and server (as Common Lisp). Parenscript code is treated the same way as Common Lisp code, making the full power of Lisp macros available for JavaScript. This provides a web development environment that is unmatched in its ability to reduce code duplication and provide advanced meta-programming facilities to web developers.
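A small sketch of what this looks like, assuming (ql:quickload "parenscript"):

```lisp
(ps:ps
  (defun greet (name)
    (alert (+ "Hello, " name))))
;; returns a JavaScript string along the lines of:
;;   function greet(name) { return alert('Hello, ' + name); };
```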



A Lisp-to-Javascript compiler bootstrapped from Common Lisp and executed from the browser.




Is it possible to write Ajax-based pages only in CL?

The case Webblocks - Reblocks, 2017

Weblocks is an "isomorphic" web framework that solves the "JavaScript problem": it lets us write the backend and an interactive client interface in Lisp, without a line of JavaScript, in our usual Lisp development environment.

The framework revolves around widgets, which are updated server-side and automatically redisplayed with transparent Ajax calls on the client.

It is being massively refactored, simplified, rewritten and documented since 2017. See the new quickstart:


Writing a dynamic todo app boils down to:

(defwidget task ()
  ((title
    :initarg :title
    :accessor title)
   (done
    :initarg :done
    :initform nil
    :accessor done)))

(defwidget task-list ()
  ((tasks
    :initarg :tasks
    :accessor tasks)))

(defmethod render ((task task))
  "Render a task."
  (with-html
    (:span (if (done task)
               (:s (title task))
               (title task)))))

(defmethod render ((widget task-list))
  "Render a list of tasks."
  (with-html
    (:h1 "Tasks")
    (:ul
     (loop for task in (tasks widget) do
       (:li (render task))))))

(defmethod weblocks/session:init ((app tasks))
  (declare (ignorable app))
  (let ((tasks (make-task-list "Make my first Weblocks app"
                               "Deploy it somewhere"
                               "Have a profit")))
    (make-instance 'task-list :tasks tasks)))

(defmethod add-task ((task-list task-list) title)
  (push (make-task title)
        (tasks task-list))
  (update task-list))

Adding an html form and calling the new add-task function:

(defmethod render ((task-list task-list))
  (with-html
    (:h1 "Tasks")
    (loop for task in (tasks task-list) do
      (render task))
    (with-html-form (:POST (lambda (&key title &allow-other-keys)
                             (add-task task-list title)))
      (:input :type "text"
              :name "title"
              :placeholder "Task's title")
      (:input :type "submit"
              :value "Add"))))


We can build our web app from sources, no worries, that works.


We can build an executable also for web apps. That simplifies a deployment process drastically.

We can even get a Lisp REPL and interact with the running web app, in which we can even install new Quicklisp dependencies. That's quite incredible, and it's very useful, if not to hot-reload a web app (which I do anyways but that might be risky), at least to reload a user's configuration file.

This is the general way:

(sb-ext:save-lisp-and-die #p"name-of-executable" :toplevel #'main :executable t)

But this is an SBCL-specific command. To stay generic, we can use asdf:make, with a couple of lines inside our system's .asd declaration. See the Cookbook: https://lispcookbook.github.io/cl-cookbook/scripting.html#with-asdf
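Those couple of lines look like this (system, file, and entry-point names are assumptions for illustration):

```lisp
(asdf:defsystem "my-webapp"
  :depends-on ("hunchentoot")
  :components ((:file "main"))
  :build-operation "program-op"   ; ask ASDF to build an executable
  :build-pathname "my-webapp"     ; name of the binary
  :entry-point "my-webapp:main")  ; function called at startup
```

Then (asdf:make :my-webapp) builds the binary.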

Now if you run your binary, your app will start up fine, but it will quit instantly. We need to put the web server thread in the foreground:

(defun main ()
  (start-app :port 9003)
  ;; with bordeaux-threads. Also sb-ext: join-thread, thread-name, list-all-threads.
  (bt:join-thread (find-if (lambda (th)
                             (search "hunchentoot" (bt:thread-name th)))
                           (bt:all-threads))))

When I run it, Hunchentoot stays listening at the foreground:

$ ./my-webapp
Hunchentoot server is started.
Listening on localhost:9003.

I can use a tmux session (tmux, then C-b d to detach it) or better yet, start the app with Systemd, see below.

But we need something more for real apps: we need to ship foreign libraries. Deploy to the rescue. Edit your .asd file slightly to have:

:defsystem-depends-on (:deploy)  ;; (ql:quickload "deploy") before
:build-operation "deploy-op"     ;; instead of "program-op" as above
:build-pathname "my-webapp"  ;; doesn't change
:entry-point "my-webapp:main"  ;; doesn't change

Build the app again with asdf:make, and see how Deploy discovers and ships in the required foreign libraries: libssl.so, libosicat.so, libmagic.so...

It puts the final binary and the .so libraries in a bin/ directory. This is what you'll have to ship.

I can now build my web app, send it to my VPS and see it live \o/

One more thing: we don't want to ship libssl and libcrypto, so we ask Deploy not to ship them, relying instead on the target OS.

;; .asd
;; Deploy may not find libcrypto on your system.
;; But anyways, we won't ship it to rely instead
;; on its presence on the target OS.
(require :cl+ssl)  ; sometimes necessary.
#+linux (deploy:define-library cl+ssl::libssl :dont-deploy T)
#+linux (deploy:define-library cl+ssl::libcrypto :dont-deploy T)

There's also a hack if you have an issue with ASDF trying to update itself... see the skeleton: https://github.com/vindarel/cl-cookieweb

To be exhaustive, here's how to catch a user's C-c and stop our app correctly. By default, you would get a full backtrace. Yuk.

(defun main ()
  (start-app :port 9003)
  ;; with bordeaux-threads
  (handler-case (bt:join-thread (find-if (lambda (th)
                                           (search "hunchentoot" (bt:thread-name th)))
                                         (bt:all-threads)))
    (#+sbcl sb-sys:interactive-interrupt
     #+ccl  ccl:interrupt-signal-condition
     #+clisp system::simple-interrupt-condition
     #+ecl ext:interactive-interrupt
     #+allegro excl:interrupt-signal
     () (progn
          (format *error-output* "Aborting.~&")
          (clack:stop *server*)
          (uiop:quit 1)))  ;; portable exit, included in ASDF, already loaded.
    ;; for other, unhandled errors (we might want to do the same).
    (error (c) (format t "Woops, an unknown error occured:~&~a~&" c))))

To see:

Multiplatform delivery with Electron (Ceramic)

Ceramic does all the work for us.

It is as simple as this:

;; Load Ceramic and our app
(ql:quickload '(:ceramic :our-app))

;; Ensure Ceramic is set up

;; Start our app (here based on the Lucerne framework)
(lucerne:start our-app.views:app :port 8000)

;; Open a browser window to it
(defvar window (ceramic:make-window :url "http://localhost:8000/"))

;; start Ceramic
(ceramic:show-window window)

and we can ship this on Linux, Mac and Windows.


Ceramic applications are compiled down to native code, ensuring both performance and enabling you to deliver closed-source, commercial applications.

(so no need to minify our JS)

with one more line:

(ceramic.bundler:bundle :ceramic-hello-world
                                 :bundle-pathname #p"/home/me/app.tar")
Copying resources...
Compiling app...

This last line was buggy for us.


When you build a self-contained binary (see above, "Shipping"), deployment gets easy.


sbcl --load <my-app> --eval '(start-my-app)'

For example, a run Makefile target:

run:
        sbcl --load my-app.asd \
             --eval '(ql:quickload :my-app)' \
             --eval '(my-app:start-app)'  # given this function starts clack or hunchentoot.

This keeps SBCL in the foreground. We can use tmux to put it in the background, or use Systemd.

Then we need a task supervisor that will restart our app on failures, start it after a reboot, and handle logging. See the section below and the example projects.

with Clack

$ clackup app.lisp
Hunchentoot server is started.
Listening on localhost:5000.

with Docker

So we have various implementations ready to use: sbcl, ecl, ccl... with Quicklisp well configured.


On Heroku

See heroku-buildpack-common-lisp and the Awesome CL#deploy section.

Systemd: daemonizing, restarting in case of crashes, handling logs

Generally, this depends on your system. But most GNU/Linux distros now come with Systemd. Write a service file like this:

# /etc/systemd/system/my-app.service
[Unit]
Description=stupid simple example

[Service]
# your command, with full path:
ExecStart=/usr/bin/sbcl --load run.lisp

One gotcha is that your app must run in the foreground. See the threads snippet above in Shipping, and add it when running the app from sources too. Otherwise, you'll see a "compilation unit aborted, caught 1 fatal error condition" error: that's simply your Lisp quitting too early.

Run this command to start the service:

sudo systemctl start my-app.service

to check its status:

systemctl status my-app.service

Systemd handles logging for you. We write to stdout or stderr, and it writes the logs:

journalctl -u my-app.service

use -f -n 30 to see live updates of logs.

This tells Systemd to handle crashes and restart the app:

Restart=on-failure

and it can start the app after a reboot:

[Install]
WantedBy=multi-user.target

to enable it:

sudo systemctl enable my-app.service

Debugging SBCL error: ensure_space: failed to allocate n bytes

If you get this error with SBCL on your server:

mmap: wanted 1040384 bytes at 0x20000000, actually mapped at 0x715fa2145000
ensure_space: failed to allocate 1040384 bytes at 0x20000000
(hint: Try "ulimit -a"; maybe you should increase memory limits.)

then disable ASLR:

sudo bash -c "echo 0 > /proc/sys/kernel/randomize_va_space"

Connecting to a remote Swank server

Little example here: http://cvberry.com/tech_writings/howtos/remotely_modifying_a_running_program_using_swank.html.

It defines a simple function that prints forever:

;; a little common lisp swank demo
;; while this program is running, you can connect to it from another terminal or machine
;; and change the definition of doprint to print something else out!
;; (ql:quickload '(:swank :bordeaux-threads))

(require :swank)
(require :bordeaux-threads)

(defparameter *counter* 0)

(defun dostuff ()
  (format t "hello world ~a!~%" *counter*))

(defun runner ()
  (bt:make-thread (lambda ()
                    (swank:create-server :port 4006)))
  (format t "we are past go!~%")
  (loop while t do
       (dostuff)
       (sleep 5)
       (incf *counter*)))

(runner)


On our server, we run it with

sbcl --load demo.lisp

we do port forwarding on our development machine:

ssh -L4006:localhost:4006 username@example.com

this will securely forward port 4006 on the server at example.com to our local computer's port 4006 (Swank accepts connections from localhost).

We connect to the running swank with M-x slime-connect, typing in port 4006.

We can write new code:

(defun dostuff ()
  (format t "goodbye world ~a!~%" *counter*))
(setf *counter* 0)

and eval it as usual with M-x slime-eval-region for instance. The output should change.

There are more pointers on CV Berry's page.

Hot reload

When we run the app as a script we get a Lisp REPL, so we can hot-reload the running web app. Here we demonstrate a recipe to update it remotely.

Example taken from Quickutil.

It has a Makefile target:

        $(call $(LISP), \
                (ql:quickload :quickutil-server) (ql:quickload :swank-client), \
                (swank-client:with-slime-connection (conn "localhost" $(SWANK_PORT)) \
                        (swank-client:slime-eval (quote (handler-bind ((error (function continue))) \
                                (ql:quickload :quickutil-utilities) (ql:quickload :quickutil-server) \
                                (funcall (symbol-function (intern "STOP" :quickutil-server))) \
                                (funcall (symbol-function (intern "START" :quickutil-server)) $(start_args)))) conn)) \

It has to be run on the server (a simple fabfile command can call this through ssh). Beforehand, a fab update has run git pull on the server, so the new code is present but not running. It connects to the local Swank server, loads the new code, and stops and starts the app in a row.

Buy Me a Coffee at ko-fi.com

(yes that currently helps, thanks!)

24 Jun 2022 6:51am GMT

22 Jun 2022


Tycho Garen : Methods of Adoption

Before I started actually working as a software engineer full time, writing code was this fun thing I was always trying to figure out on my own, and it was fun, and I could hardly sit down at my computer without learning something. These days, I do very little of this kind of work. I learn more about computers by doing my job and frankly, the kind of software I write for work is way more satisfying than any of the software I would end up writing for myself.

I think this is because the projects that a team of engineers can work on are necessarily larger and more impactful. When you build software with a team, most of the time the product either finds users (or you end up without a job). When you build software with other people and for other people, the things that make software good (more rigorous design, good test discipline, scale) are more likely to be prevalent. Those are the things that make writing software fun.

Wait, you ask "this is a lisp post?" and "where is the lisp content?" Wait for it...

In Pave the On/Off Ramps [1] I started exploring the idea that technical adoption is less a function of basic capabilities or the number of features in general, and more about the specific features that support and further adoption and create confidence in maintenance and interoperability. A huge part of the decision process is finding good answers to "can I use these tools as part of the larger system of tools that I'm using?" and "can I use this tool a bit without needing to commit to using it for everything?"

A lot of technologies and tools demand ideological compliance, and those are very difficult to move into with confidence: their adoption depends on once-in-a-generation sea changes or significant risks. [2] The alternate method, to integrate into people's existing workflows and systems, provide great tools that work for some usecases, and prove their capability, is much more reliable, if somewhat less exciting.

The great thing about Common Lisp is that it always leans towards the pragmatic rather than the ideological. Common Lisp has a bunch of tools--both in the language and in the ecosystem--which are great to use but also not required. You don't have to use CLOS (but it's really cool), you don't have to use ASDF, and there isn't one paradigm of developing or designing software that you have to be constrained to. Do what works.

I think there are a lot of questions that sort of follow on from this, particularly about lisp and the adoption of new technologies. So let's go through the ones I can think of, FAQ style:

  • What kind of applications would a "pave the exits" approach support?

    It almost doesn't matter, but the answer is probably a fairly boring set of industrial applications: services that transform and analyze data, data migration tools, command-line (build, deployment) tools for developers and operators, platform orchestration tools, and the like. This is all boring (on the one hand), but most software is boring, and it's rarely the case that the programming language actually matters much.

    In addition, CL has a pretty mature set of tools for integrating with C libraries and might be a decent alternative to other languages with more complex distribution stories. You could see CL being a good language for writing extensions on top of existing tools (for both Java with ABCL and C/C++ with ECL and CLASP), depending.

  • How does industrial adoption of Common Lisp benefit the Common Lisp community?

    First, more people writing Common Lisp for their jobs, which (assuming they have a good experience) could proliferate into more projects. A larger community means a larger volume of participation in existing projects (and more projects in general). Additionally, more industrial applications mean more jobs for people who are interested in writing CL, and that seems pretty cool.

  • How can CL compete with more established languages like Java, Go, and Rust?

    I'm not sure competition is really the right model for thinking about this: there's so much software to write that "my language vs your language" is just a poor frame; there's enough work to be done that everyone can be successful.

    At the same time, I haven't heard about people who are deeply excited about writing Java, and Go folks (whom I count myself among) tend to be pretty pragmatic as well. I see lots of people who are excited about Rust, and it's definitely a cool language, though it shines best at lower-level problems than CL and has a reasonable FFI, so it might be the case that there's some exciting room for using CL for higher-level tasks on top of Rust fundamentals.

[1] In line with the idea that product management and design is about identifying what people are doing and then institutionalizing it, which is similar to the urban planning idea of "paving cowpaths," I sort of think of this as "paving the exits," though I recognize that this is a bit forced.
[2] I'm thinking of things like the moment of enterprise "object oriented programming" giving rise to Java and friends, or the big-data watershed moment in 2009 (or so) giving rise to so-called NoSQL databases. Without these kinds of events, the adoption of these big paradigm-shifting technologies is spotty and relies on the force of will of a particular technical leader, for better and (often) worse.

22 Jun 2022 12:00am GMT

15 Jun 2022


Tycho Garen : Pave the On and Off Ramps

I participated in a great conversation in the #commonlisp channel on Libera (IRC) the other day, during which I found a formulation of a familiar argument that felt clearer and more concrete.

The question--which comes up pretty often, realistically--centered on adoption of Common Lisp. CL has some great tools, and a bunch of great libraries (particularly these days), so why don't we see greater adoption? It's a good question, and maybe five years ago I would have said "the libraries and ecosystem are a bit fragmented," and this was true. It's less true now--for good reasons!--Quicklisp is just great and there's a lot of coverage for doing common things.

I think it has to do with the connectivity and support at the edges of a project, and as I think about it, this is probably true of any kind of project.

When you decide to use a new tool or technology you ask yourself three basic questions:

  1. "is this tool (e.g. language) capable of fulfilling my current needs" (for programming languages, this is very often yes,)
  2. "are there tools (libraries) to support my use so I can focus on my core business objectives," so that you're not spending the entire time writing serialization libraries and HTTP servers, which is also often the case.
  3. "will I be able to integrate what I'm building now with other tools I use and things I have built in the past." This isn't so hard, but it's a thing that CL (and lots of other projects) struggle with.

In short, you want to be able to build a thing with the confidence that it's possible to finish, that you'll be able to focus on the core parts of the product and not get distracted by what should be core library functionality, and finally that the thing you build can play nicely with all the other things you've written or already have. Without this third piece, writing a piece of software with such a tool is a bit of a trap.

We can imagine tools that expose data only via quasi-opaque APIs that require special clients or encoding schemes, or that lack drivers for common databases, or integration with other common tools (metrics! RPC!) or runtime environments. This is all very reasonable. For CL this might look like:

  • great support for gRPC

    There's a grpc library that exists, is being maintained, and has basically all the features you'd want except support for TLS (a moderately big deal for operational reasons,) and async method support (not really a big deal.) It does depend on CFFI, which makes for a potentially awkward compilation story, but that's a minor quibble.

    The point is not gRPC qua gRPC, the point is that gRPC is really prevalent globally and it makes sense to be able to meet developers who have existing gRPC services (or might like to imagine that they would,) and be able to give them confidence that whatever they build (in say CL) will be useable in the future.

  • compilation that targets WASM

    Somewhat unexpectedly (to me, given that I don't do a lot of web programming,) WebAssembly seems to be the way to deploy portable machine code into environments that you don't have full control over, [1] and while I don't 100% understand all of it, I think it's generally a good thing to make it easier to build software that can run in lots of situations.

  • unequivocally excellent support for JSON

    I remember working on a small project where I thought "ah yes, I'll just write a little API server in CL that will just output JSON," and I got completely mired in comparisons between JSON libraries and interfaces to JSON data. While this is a well-understood problem, it's not a very cut-and-dried one.

    The thing I wanted was to be able to take input in JSON and handle it in CL in a reasonable way: given a stream (or a string, or equivalent) can I turn it into an object in CL (a CLOS object? a hash table?)? I'm willing to implement special methods to support it given basic interfaces, but the type conversion between CL types and JSON isn't always as straightforward as it is in other languages. Similarly with outputting data: is there a good method that will take my object and convert it to a JSON stream or string? There's always a gulf between what's possible and what's easy and ergonomic.
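As a concrete illustration of the kind of round trip I was after, here is a minimal sketch using the shasht library (one of several candidates; the entry points are the ones its README documents, and the details of the type mapping are assumptions worth verifying):

```lisp
(ql:quickload :shasht)

;; Parse a JSON string: objects become hash tables with string keys,
;; arrays become vectors.
(defvar *data* (shasht:read-json "{\"name\": \"woo\", \"fast\": true}"))

(gethash "name" *data*)

;; Serialize back out: write-json prints to a stream, so capture the
;; output with with-output-to-string to get a string back.
(with-output-to-string (s)
  (shasht:write-json *data* s))
```

Getting from these hash tables to CLOS objects (and back) is exactly the part where every library draws the line somewhere different.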

I present these not as a complaint, or even as a call to action to address the specific issues that I raise (though I certainly wouldn't complain if it were taken as such,) but more as an illustration of technical decision making and the things that make it possible for a team or a project to say yes to a specific technology.

There are lots of examples of technologies succeeding from a large competitive field mostly on the basis of having great interoperability with existing solutions and tools, even if the core technology was less exciting or innovative. Technology wins on the basis of interoperability and users' trust, not (exactly) on the basis of features.

[1] I think the one real exception is runtimes that have really good static binaries and support for easy cross-compiling (e.g. Go, maybe Rust.)

15 Jun 2022 12:00am GMT

14 Jun 2022

feedPlanet Lisp

Nicolas Hafner: The Kandria Kickstarter is now Live!

It's finally here: the Kickstarter for Kandria is now live! Take a look: kickstarter.com/projects/shinmera/kandria

We've been working towards this for a long time, so please consider supporting us. There's a new trailer on the campaign page, along with the new demo which is now live in the Steam Next Fest!

14 Jun 2022 12:00pm GMT

11 Jun 2022

feedPlanet Lisp

vindarel: New video: how to request a REST API in Common Lisp: fetching the GitHub API

A few weeks ago, I put together a new Lisp video. It's cool, sound on, 'til the end ;)

I want to show how to (quickly) do practical, real-world stuff in Lisp. Here, how to request a web API. We create a new full-featured project with my project skeleton, we study the GitHub API, and we go ahead.

I develop in Emacs with Slime, but in the end we also build a binary, so we have a little application that works on the command line (note that I didn't use SBCL's core compression, we could have a lighter binary of around 30MB).

I use the libraries: Dexador for HTTP requests, the Jonathan and then the Shasht JSON libraries, as well as Access and Serapeum to help with accessing, creating and viewing hash-tables.
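The kind of request shown in the video can be sketched roughly like this, using Dexador and Shasht (the endpoint is the real GitHub API; the function name and the field access are illustrative assumptions, not code from the video):

```lisp
(ql:quickload '(:dexador :shasht))

;; Fetch repository metadata from the GitHub API and pull one field
;; out of the parsed JSON (Shasht returns hash tables for objects).
(defun repo-stars (owner repo)
  (let ((body (dex:get (format nil "https://api.github.com/repos/~a/~a"
                               owner repo))))
    (gethash "stargazers_count" (shasht:read-json body))))

;; Example call (requires network access):
;; (repo-stars "fukamachi" "dexador")
```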

Thanks for the comments already :)

tks for that… it's amazing.


Thank you, that was a great video! I plan to get the udemy course now based on this.

Here are my previous ones:

I want to do more but damn, it takes time. You can push me for more :)

Last notes:

I had fun with the soundtrack :)

I edit the video with the latest Kdenlive with the snap package and it's great for me.

11 Jun 2022 9:43am GMT

06 Jun 2022

feedPlanet Lisp

Nicolas Hafner: Steam Next Fest and Kickstarter Next Week! - June Kandria Update


Okey, June is the month! The Steam Next Fest and the Kickstarter are both happening this month. Make sure you don't miss them!

Steam Next Fest

One week from now the Steam Next Fest is going to be live, and with it the new demo for Kandria! We've been working on this for quite a while now, and we're excited for people to try it out in the fest. Make sure to put it onto your wishlist:



One day after the Next Fest is another important date, namely the launch of our Kickstarter. We've been preparing for this for a long time now, and we're almost ready. Just a few more small things to iron out, and we're trying our best to ramp up the pre-campaign marketing to get as many people as possible onto that waitlist.

The first two days of a Kickstarter are super important for its success, so we need to make sure to make as big of a splash with it as possible, and ideally reach the funding goal by then already. If you'd like to help that become a reality, the best thing to do is to, of course, sign up yourself! But also tell friends and other people you know about it.

Thanks again, and I'm looking forward to the launch! We have a few exciting announcements to make during the campaign as well, and combined with the next fest, it's going to be a heck of a time, I'm sure.

Development Progress

May was a month fraught with issues! First I fell ill for pretty much an entire week, and then I had to spend another week at the Digital Dragons conference in Krakow. Finally, I had to take a week off just to recover from everything that had been building up ahead of the two looming big events.

So as you might guess, not too much has happened this month. However, we're still making good progress. Tim has been hard at work rounding up all the main quest content and polishing it up to a good state. The game should now be fully playable from start to finish!


We'll start full-game testing soon to make sure the flow and everything works well, and also get to fleshing things out with some more side quests and extras. There's a ton of detail work that needs doing still, and that's going to make a huge difference. But it's also going to take a lot of time. Fortunately, we still have quite a bit of time left over until our next deadline looms, so we're still clear of the frying pan for now.

The Bottom Line

Okey, finally let's look at the roadmap from May.

Yeah, as mentioned, not too much of a change here. But, not to worry, we're progressing according to schedule still.

Please wishlist Kandria and join our Kickstarter waiting list, and I hope to see you during the campaigns!

06 Jun 2022 2:21pm GMT

26 May 2022

feedPlanet Lisp

Didier Verna: Quickref 4.0 beta 1 "The Aftermath" is released

Dear all,

as previously announced, I have released the first Quickref version compatible with Declt 4.0 (which is now at beta 2) and which is going to keep track of the development over there. Because Declt 4 is currently considered in beta state, so is Quickref 4 for the time being.

There are a number of important changes in this new major version of the library. Not very interesting from the outside is an infrastructure overhaul which improves the parallel support. The index generation code has also been rewritten to benefit from the recent changes in Declt. Slightly more interesting from the outside is an improvement of the self-documenting aspects of the public interface when used interactively, along with more usage correctness checks. Also, the build environment has been upgraded to Debian Bullseye.

But the most critical change, of course, is the name for the 4.x series. In compliance with the general theme (Iron Maiden songs), and because Quickref 4 is meant to closely follow the brave new Declt version, I thought "Brave New World" would be nice. On the other hand, as Declt wanders through its uncharted 4.0 beta territory, Quickref 4 is likely to suffer the consequences, so perhaps "The Aftermath" is more appropriate...

Anyway, the Docker images are up to date, and so is the Quickref website, currently documenting 2110 Common Lisp libraries.


26 May 2022 12:00am GMT

19 May 2022

feedPlanet Lisp

Eitaro Fukamachi: Woo: a high-performance Common Lisp web server

Hi, all Common Lispers.

It's been 7 years since I talked about Woo at European Lisp Symposium. I have heard that several people are using Woo for running their web services. I am grateful for that.

I quit the company I was working for when I developed Woo. Today, I'm writing a payment service in Common Lisp, as ever, at another company. Not surprisingly, I'm using Woo there as well, and so far I have had no performance problems or other operational difficulties.

But on the other hand, some people are still reckless enough to try to run web services using Hunchentoot. And, some people complain about the lack of articles about Woo.

Sorry for my negligence in not keeping the information updated and not publishing articles. It may therefore be worthwhile to take up Woo as a topic even today.

What is Woo?

Woo is a high-performance web server written in Common Lisp.

Its performance is almost on the same level as Go's web server, and several times better than that of other Common Lisp servers, like Hunchentoot.

Web server benchmark

What's different? It's not only that bottlenecks were eliminated by tuning the Common Lisp code; the architecture is also designed to handle many concurrent requests efficiently.

Compared to Hunchentoot

Hunchentoot is the most popular web server. According to the Quicklisp download stats of April 2022, Hunchentoot is the only web server in the top 100 (for reference, Woo is 302nd).

An excellent point of Hunchentoot is that it's written in portable Common Lisp. It works on Linux, macOS, and Windows with many Lisp implementations. No external libraries are required.

On the other hand, there are concerns about using it as an HTTP server open to the world.

This is because Hunchentoot takes a thread-per-request approach to handling requests.

Hunchentoot thread-per-request architecture

The disadvantage of this architecture is that it is not good at handling large numbers of simultaneous requests.

Hunchentoot creates a thread when it accepts a new connection and terminates that thread once the response has been sent and the connection is closed. Therefore, more concurrent threads are required the longer it takes to serve a slow client (e.g., due to network transmission time).

This doesn't matter if every client is fast enough. In reality, however, some clients are slow or unstable, for example, smartphone users.

There is also a DoS attack that intentionally opens large numbers of slow simultaneous connections, called the Slowloris attack. Web services running on Hunchentoot can be made inaccessible instantly by this attack. The bad news is that you can easily find a script for mounting a Slowloris attack on the web.
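To make the model concrete, here is a minimal, illustrative sketch of a thread-per-request accept loop (this is not Hunchentoot's actual code, and handle-request is a hypothetical handler), using usocket and bordeaux-threads:

```lisp
(ql:quickload '(:usocket :bordeaux-threads))

(defun serve (port)
  (let ((listener (usocket:socket-listen "127.0.0.1" port :reuse-address t)))
    (loop
      ;; One thread per accepted connection: a slow client ties up its
      ;; thread for the entire lifetime of the connection, so many slow
      ;; clients mean many live threads.
      (let ((client (usocket:socket-accept listener)))
        (bt:make-thread
         (lambda ()
           (unwind-protect
                (handle-request client) ; hypothetical request handler
             (usocket:socket-close client))))))))
```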

Compared to Wookie

Event-driven is another approach to handling a massive amount of simultaneous connections. Let's take Wookie as an example.

Wookie event-driven architecture

In this model, all connection I/O is processed asynchronously, so the speed and stability of the connection do not affect other connections.

Of course, this architecture also has its drawbacks: it runs in a single thread, which means only one piece of work can be executed at a time. While a response is being sent to one client, another client's request cannot be read.

Wookie's throughput is slightly worse than Hunchentoot's for simple HTTP request-response iterations in my benchmark.

On the other hand, the event-driven model is more advantageous for protocols such as WebSocket, in which small chunks are exchanged asynchronously.

Wookie depends on libuv, a C library supporting asynchronous I/O. Although installing an external library is bothersome, libuv is used internally in Node.js, so it is not so difficult to install in most environments, including Windows.

Another web server, house, is event-driven but written in portable Common Lisp. Its event loop is implemented with cl:loop and usocket:wait-for-input. If performance doesn't matter, it would be another option.
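The portable single-threaded pattern house uses can be sketched along these lines (illustrative only; handle-ready is a hypothetical non-blocking handler, and real code would also prune closed connections):

```lisp
(ql:quickload :usocket)

(defun event-loop (listener)
  (let ((connections (list listener)))
    (loop
      ;; Block until at least one socket has work, then dispatch: the
      ;; listener yields new connections, the others yield requests.
      (dolist (ready (usocket:wait-for-input connections :ready-only t))
        (if (eq ready listener)
            (push (usocket:socket-accept listener) connections)
            (handle-ready ready)))))) ; hypothetical non-blocking handler
```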

Multithreaded Event-driven

Woo also adopts the event-driven model, except it has multiple event loops.

Woo multithreaded event-driven architecture

First, it accepts a connection in the main thread, dispatches it to one of the pre-created worker threads, and processes the request and response with asynchronous I/O in that worker.

That is why its throughput is exceptionally high: it processes multiple requests simultaneously in worker threads.

In addition, Woo uses libev while Wookie uses libuv. libev runs fast since it is a pretty small library that only wraps each OS's async I/O API, like epoll, kqueue, and poll. Its downside is narrower platform support; in particular, it doesn't support Windows. However, I don't think this will be a problem in most cases, since it's rare to use Windows for a web server.

Woo is Clack-compliant

Woo's feature I'd like to mention is that it is Clack-compliant.

Clack is an abstraction layer for Common Lisp web servers. It allows a web application that follows the Clack standard to run on different web servers without any changes.

Since Woo supports Clack natively, it can run Lack applications without any additional libraries. I'll introduce what Clack and Lack are in another article.
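Switching servers through Clack looks roughly like this (a minimal sketch; the response format and the :server keywords follow Clack's documented conventions):

```lisp
(ql:quickload :clack)

;; A trivial Lack-style application: a function from the request
;; environment to a (status headers body) list.
(defvar *app*
  (lambda (env)
    (declare (ignore env))
    '(200 (:content-type "text/plain") ("Hello"))))

;; Same application, different backends: only :server changes.
(clack:clackup *app* :server :hunchentoot :port 5000) ; development
;; (clack:clackup *app* :server :woo :port 5000)      ; production
```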

Running inside Docker

Lastly, let's run Woo inside a Docker container.


These three files are necessary: a Dockerfile, an entrypoint.sh, and an app.lisp.


FROM fukamachi/sbcl

RUN set -x; \
  apt-get update && apt-get -y install --no-install-recommends \
    libev-dev \
    gcc \
    libc6-dev && \
  rm -rf /var/lib/apt/lists/*

COPY entrypoint.sh /srv

RUN set -x; \
  ros install clack woo

ENTRYPOINT ["/srv/entrypoint.sh"]
CMD ["app.lisp"]


A script to start a web server, Woo, in this case.


#!/bin/sh
exec clackup --server woo --debug nil --address "" --port "$PORT" "$@"


A Lack application is written in this file. See Lack's documentation for the details.

(lambda (env)
  (declare (ignore env))
  '(200 () ("Hello from Woo")))

### Build

$ docker build -t server-app-test .

### Run

$ docker run --rm -it -p 5000:5000 -v $PWD:/app server-app-test

Then, open localhost:5000 with your browser.


I showed how Woo is different from other web servers and how to run it with Docker.

Clack allows switching web servers easily without modifying the application code. In the case of Quickdocs.org, I use Hunchentoot for development and use Woo for the production environment.

I will introduce Clack/Lack in the next article.

19 May 2022 3:44am GMT

11 May 2022

feedPlanet Lisp

vindarel: Video: Create a Common Lisp project from scratch with our project generator

In this video I want to demo real-world Lisp stuff I had trouble finding tutorials for:

Let's find out (it's 5 minutes long):

I use my project generator: cl-cookieproject. See setup, alternatives, limitations and TODOs there. You will also find a similar generator for web projects.

If you find it useful, share the video!

11 May 2022 9:43am GMT